Search results for: Polynomial approximate inverse
59 Supersonic Flow around a Dihedral Airfoil: Modeling and Experimental Investigation
Authors: A. Naamane, M. Hasnaoui
Abstract:
Numerical modeling of fluid flows, whether compressible or incompressible, laminar or turbulent, makes a considerable contribution to the scientific and industrial fields. However, the development of an approximate model of a supersonic flow requires the introduction of specific and more precise techniques and methods. The object of this paper is therefore the modeling of a supersonic flow of inviscid fluid around a dihedral airfoil. Based on thin airfoil theory and the non-dimensional stationary Steichen equation of a two-dimensional supersonic flow in isentropic evolution, we obtained a solution for the downstream velocity potential of the oblique shock at the second order of the relative thickness that characterizes the perturbation parameter. This result was obtained by asymptotic analysis and the method of characteristics. In order to validate our model, the results are discussed in comparison with theoretical and experimental results. First, the comparison showed that the results of our model are quantitatively acceptable relative to the existing theoretical results. An experimental study was then conducted using the AF300 supersonic wind tunnel, in which we considered the incident upstream Mach number over a symmetrical dihedral airfoil wing. The comparison of the downstream Mach numbers of our model with the existing theoretical data (relative margin between 0.07% and 4%) and with the experimental results (concordance for a deflection angle between 1° and 11°) supports the accuracy and validity of our model.
Keywords: Asymptotic modelling, dihedral airfoil, supersonic flow, supersonic wind tunnel.
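As a point of reference for the downstream Mach number comparison, the sketch below evaluates the classical oblique-shock θ–β–M relations for a wedge (dihedral) deflection angle. It is a minimal illustration rather than the paper's second-order asymptotic solution; the upstream Mach number of 2.0, γ = 1.4 and the 1°–11° deflection range are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

GAMMA = 1.4  # assumed ratio of specific heats for air

def theta_from_beta(beta, M1, gamma=GAMMA):
    """Flow deflection angle (rad) produced by an oblique shock of angle beta."""
    return np.arctan(2.0 / np.tan(beta)
                     * (M1**2 * np.sin(beta)**2 - 1.0)
                     / (M1**2 * (gamma + np.cos(2.0 * beta)) + 2.0))

def downstream_mach(M1, theta_deg, gamma=GAMMA):
    """Weak-solution downstream Mach number behind an oblique shock."""
    theta = np.radians(theta_deg)
    mu = np.arcsin(1.0 / M1)                          # Mach angle, lower bound for beta
    betas = np.linspace(mu + 1e-6, np.pi / 2 - 1e-6, 2000)
    beta_max = betas[np.argmax(theta_from_beta(betas, M1, gamma))]
    beta = brentq(lambda b: theta_from_beta(b, M1, gamma) - theta,
                  mu + 1e-6, beta_max)                # weak-shock branch
    Mn1 = M1 * np.sin(beta)
    Mn2 = np.sqrt((1.0 + 0.5 * (gamma - 1.0) * Mn1**2)
                  / (gamma * Mn1**2 - 0.5 * (gamma - 1.0)))
    return Mn2 / np.sin(beta - theta)

if __name__ == "__main__":
    for wedge in (1.0, 5.0, 11.0):                    # deflection angles in degrees
        print(f"theta = {wedge:4.1f} deg -> M2 = {downstream_mach(2.0, wedge):.4f}")
```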
58 Development of a Neural Network based Algorithm for Multi-Scale Roughness Parameters and Soil Moisture Retrieval
Authors: L. Bennaceur Farah, I. R. Farah, R. Bennaceur, Z. Belhadj, M. R. Boussema
Abstract:
The overall objective of this paper is to retrieve soil surface parameters, namely roughness and the soil moisture related to the dielectric constant, by inverting the radar signal backscattered from natural soil surfaces. Because the classical description of roughness using statistical parameters such as the correlation length does not lead to satisfactory predictions of radar backscattering, we used a multi-scale roughness description based on the wavelet transform and the Mallat algorithm. In this description, the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each having its own spatial scale. A second step in this study consisted in adapting a direct model simulating radar backscattering, namely the small perturbation model, to this multi-scale surface description. We investigated the impact of this description on radar backscattering through a sensitivity analysis of the backscattering coefficient to the multi-scale roughness parameters. To perform the inversion of the multi-scale small perturbation scattering model (MLS SPM), we used a multi-layer neural network architecture trained by the backpropagation learning rule. The inversion leads to satisfactory results with a relative uncertainty of 8%.
Keywords: Remote sensing, rough surfaces, inverse problems, SAR, radar scattering, neural networks, fractals.
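The inversion step can be illustrated with a small feed-forward network trained on synthetic forward-model data. The toy backscatter model, parameter ranges and network size below are assumptions; they stand in for, and are not, the MLS SPM or the architecture used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy forward model standing in for the multi-scale SPM: backscatter (dB) at two
# polarizations as a function of soil moisture and an aggregate roughness index.
n = 5000
moisture = rng.uniform(0.05, 0.40, n)          # volumetric soil moisture
roughness = rng.uniform(0.2, 3.0, n)           # placeholder multi-scale roughness index
sigma_vv = -20 + 30 * moisture + 2.0 * np.log(roughness) + rng.normal(0, 0.3, n)
sigma_hh = -22 + 28 * moisture + 2.5 * np.log(roughness) + rng.normal(0, 0.3, n)

X = np.column_stack([sigma_vv, sigma_hh])      # inputs: backscattering coefficients
y = np.column_stack([moisture, roughness])     # targets: surface parameters

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)                            # backpropagation-trained inversion

pred = net.predict(X_te)
rel_err = np.abs(pred - y_te) / y_te
print("mean relative error per output:", rel_err.mean(axis=0))
```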
57 The Study of the Intelligent Fuzzy Weighted Input Estimation Method Combined with the Experiment Verification for the Multilayer Materials
Authors: Ming-Hui Lee, Tsung-Chien Chen, Tsu-Ping Yu, Horng-Yuan Jang
Abstract:
The innovative intelligent fuzzy weighted input estimation method (FWIEM) can be applied to the inverse heat conduction problem (IHCP) to estimate the unknown time-varying heat flux of multilayer materials, as presented in this paper. The feasibility of this method is verified by a temperature measurement experiment. The experimental module was designed using a copper sample stacked on four aluminum samples of different thicknesses. The bottoms of the copper samples are heated by a standard heat source, and the temperatures on the tops of the aluminum samples are measured with thermocouples. The temperature measurements are then used as the inputs to the presented method to estimate the heat flux at the bottoms of the copper samples. The influence on the estimation of the temperature measurement for samples of different thicknesses, the process noise covariance Q, the weighting factor γ, the sampling time interval Δt, and the spatial discretization interval Δx is investigated through the experimental verification. The results show that this method is efficient and robust for estimating the unknown time-varying heat input of multilayer materials.
Keywords: Multilayer materials, input estimation method, IHCP, heat flux.
56 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials
Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic
Abstract:
The quality of laser welded-brazed (LWB) joints is strongly dependent on the main process parameters; therefore, the effects of laser power (3.2–4 kW), welding speed (60–80 mm/s) and wire feed rate (70–90 mm/s) on mechanical strength and surface roughness were investigated in this study. A comprehensive optimization by means of response surface methodology (RSM) and a desirability function was used for multi-criteria optimization. The experiments were planned based on a Box–Behnken design implementing linear and quadratic polynomial equations for predicting the desired output properties. Finally, validation experiments were conducted at the optimized process condition and exhibited good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in a coach peel configuration. The high scanning speed kept the intermetallic compound (IMC) layer as thin as 5 µm. The thermal simulation of the joining process was conducted by the finite element method (FEM), and the results were validated against experimental data. The Fe/Al interfacial thermal history showed that the duration of the critical temperature range (700–900 °C) in this high-scanning-speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer instead of a diffusion-controlled one.
Keywords: Laser welding-brazing, finite element, response surface methodology, multi-response optimization, cross-beam laser.
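To make the RSM step concrete, the sketch below fits a full second-order (quadratic) polynomial to a Box-Behnken-style coded design by least squares and picks the best setting on a coarse grid. The factor coding, response values and grid search are illustrative assumptions, not the study's measured data or its desirability-function optimization.

```python
import numpy as np
from itertools import combinations

# Hypothetical coded design points (Box-Behnken style) and an assumed response;
# the factors mirror the study (laser power, speed, wire feed) but the numbers
# are made up for illustration only.
X = np.array([  # columns: power, speed, wire feed (coded -1..1)
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)
y = np.array([3.1, 3.8, 3.4, 4.0, 3.0, 3.9, 3.3, 4.1,
              3.2, 3.5, 3.6, 3.9, 4.3, 4.2, 4.25])   # e.g. joint strength (kN)

def quadratic_design_matrix(X):
    """Full second-order model: intercept, linear, interaction and squared terms."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]
    return np.column_stack(cols)

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # least-squares fit of the RSM model

# Evaluate the fitted surface on a coarse grid and report the best coded setting.
g = np.linspace(-1, 1, 21)
grid = np.array([[p, s, w] for p in g for s in g for w in g])
pred = quadratic_design_matrix(grid) @ coef
print("best coded setting:", grid[np.argmax(pred)], "predicted response:", pred.max())
```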
55 Noise Source Identification on Urban Construction Sites Using Signal Time Delay Analysis
Authors: Balgaisha G. Mukanova, Yelbek B. Utepov, Aida G. Nazarova, Alisher Z. Imanov
Abstract:
The problem of identifying local noise sources on a construction site using a sensor system is considered. Mathematical modeling of the signals detected at the sensors was carried out, taking into account signal decay and the signal delay time between the source and the detector. Recordings of noise produced by construction tools were used as the time dependence of the source signal. Synthetic sensor data were constructed from these recordings, and a model of the propagation of acoustic waves from a point source in three-dimensional space was applied. All sensors and sources are assumed to lie in the same plane. A source localization method is tested that is based on the signal time delay between two adjacent detectors, from which the direction toward the source is plotted; the position of the noise source is then determined from the intersection of two such direction lines. The case of one dominant source and the case of two sources in the presence of several other sources of lower intensity are considered. The number of detectors varies from three to eight. The intensity of the noise field in the assessed area is plotted. A signal of two-second duration is considered; the source is localized for successive parts of the signal with a duration above 0.04 s, and the final result is obtained by computing the average value.
Keywords: Acoustic model, direction of arrival, inverse source problem, sound localization, urban noises.
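The direction-and-intersection idea can be sketched as follows. A far-field (plane-wave) assumption converts the delay between a sensor pair into a bearing, and two bearings are intersected; the sensor layout, source position and speed of sound are assumed for illustration and the result is approximate because the toy source is not truly in the far field.

```python
import numpy as np

C = 343.0  # assumed speed of sound in air, m/s

def doa_line(p1, p2, tau):
    """Bearing line (midpoint, unit direction) estimated from the arrival-time
    difference tau = t1 - t2 between sensors p1 and p2 (far-field assumption;
    the source is taken to lie on the left-hand side of the p1 -> p2 baseline)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    baseline = p2 - p1
    d = np.linalg.norm(baseline)
    b_hat = baseline / d
    n_hat = np.array([-b_hat[1], b_hat[0]])          # left-hand broadside normal
    s = np.clip(C * tau / d, -1.0, 1.0)              # sine of the angle off broadside
    direction = s * b_hat + np.sqrt(1.0 - s * s) * n_hat
    return (p1 + p2) / 2.0, direction

def intersect(line_a, line_b):
    """Intersection point of two bearing lines m + t*u (2-D)."""
    (ma, ua), (mb, ub) = line_a, line_b
    t, _ = np.linalg.solve(np.column_stack([ua, -ub]), mb - ma)
    return ma + t * ua

# Toy layout: three sensors in a plane and a dominant source at (6, 9) m.
sensors = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
source = np.array([6.0, 9.0])
t = np.linalg.norm(sensors - source, axis=1) / C     # absolute arrival times
line1 = doa_line(sensors[0], sensors[1], t[0] - t[1])
line2 = doa_line(sensors[2], sensors[0], t[2] - t[0])
print("estimated source position:", np.round(intersect(line1, line2), 2))
```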
54 Rapid Finite-Element Based Airport Pavement Moduli Solutions using Neural Networks
Authors: Kasthurirangan Gopalakrishnan, Marshall R. Thompson, Anshu Manik
Abstract:
This paper describes the use of artificial neural networks (ANN) for predicting non-linear layer moduli of flexible airfield pavements subjected to new generation aircraft (NGA) loading, based on the deflection profiles obtained from Heavy Weight Deflectometer (HWD) test data. The HWD test is one of the most widely used tests for routinely assessing the structural integrity of airport pavements in a non-destructive manner. The elastic moduli of the individual pavement layers backcalculated from the HWD deflection profiles are effective indicators of layer condition and are used for estimating the pavement remaining life. HWD tests were periodically conducted at the Federal Aviation Administration's (FAA's) National Airport Pavement Test Facility (NAPTF) to monitor the effect of Boeing 777 (B777) and Boeing 747 (B747) test gear trafficking on the structural condition of flexible pavement sections. In this study, a multi-layer, feed-forward network using an error-backpropagation algorithm was trained to approximate the HWD backcalculation function. A synthetic database generated with an advanced non-linear pavement finite-element program was used to train the ANN to overcome the limitations associated with conventional pavement moduli backcalculation. The changes in ANN-based backcalculated pavement moduli with trafficking were used to compare the relative severity effects of the aircraft landing gears on the NAPTF test pavements.
Keywords: Airfield pavements, ANN, backcalculation, new generation aircraft.
53 Development of Integrated GIS Interface for Characteristics of Regional Daily Flow
Authors: Ju Young Lee, Jung-Seok Yang, Jaeyoung Choi
Abstract:
The primary purpose of this paper is to develop a GIS interface for estimating sequences of stream-flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface is composed of three major steps. The first, analysis of precipitation characteristics using statistics, is the procedure for building a multiple linear regression equation to obtain the long-term mean daily flow at ungauged stations; the independent variables in the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method; however, the method for obtaining mean flow data can be selected by the user, for example Kriging, IDW (Inverse Distance Weighted) or spline methods as well as other traditional methods. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs at gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain watershed/topographic characteristics, which are the most important factors governing stream-flows. In summary, the simulated daily flow time series are compared with observed time series; the results obtained using the integrated GIS interface are closely similar and fit the observations well. Also, the relationship between the topographic/watershed characteristics and the stream flow time series is highly correlated.
Keywords: Integrated GIS interface, spatial interpolation algorithm, FDC.
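The IDW option mentioned above is simple enough to sketch directly. The station coordinates and flow values below are made-up illustrations; the power parameter of 2 is a common default, not necessarily the value used in the interface.

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighted estimate of values at query points.

    xy_known: (n, 2) gauged-station coordinates, values: (n,) observed quantity
    (e.g. long-term mean daily flow), xy_query: (m, 2) ungauged locations.
    """
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    xy_query = np.atleast_2d(xy_query).astype(float)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # avoid division by zero at a station
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy example: mean daily flow (m^3/s) at four gauged stations (km coordinates).
stations = [(0, 0), (10, 0), (0, 10), (10, 10)]
flows = [12.0, 8.5, 15.2, 9.8]
print(idw_interpolate(stations, flows, [(4, 6)]))   # estimate at an ungauged point
```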
52 Gaze Patterns of Skilled and Unskilled Sight Readers Focusing on the Cognitive Processes Involved in Reading Key and Time Signatures
Authors: J. F. Viljoen, Catherine Foxcroft
Abstract:
Expert sight readers rely on their ability to recognize patterns in scores, their inner hearing and their prediction skills in order to perform complex sight reading exercises. They also have the ability to observe deviations from expected patterns in musical scores. This increases the "eye-hand span" (reading ahead of the point of playing), allowing the elements in the score to be processed. The study aims to investigate the gaze patterns of expert and non-expert sight readers focusing on key and time signatures. Twenty musicians were tasked with playing 12 sight reading examples composed for one hand and five examples composed for two hands, performed on a piano keyboard. These examples were composed in different keys and time signatures and included accidentals and changes of time signature to test this theory. Results showed that the experts fixate more often and for longer on key and time signatures, as well as on deviations, in the examples for two hands than the non-expert group. The inverse was true for the examples for one hand, where expert sight readers showed fewer and shorter fixations on key and time signatures as well as on deviations. This seems to suggest that experts focus more on the key and time signatures and on deviations in complex scores to facilitate sight reading. The examples written for one hand appeared to be too easy for the expert sight readers, compromising the gaze patterns.
Keywords: Cognition, eye tracking, musical notation, sight reading.
51 Influence of Calcium Intake Level on Osteoporotic Vertebral Bone and Degenerated Disc in Biomechanical Study
Authors: Dae Gon Woo, Ji Hyung Park, Chi Hoon Kim, Tae Woo Lee, Beob Yi Lee, Han Sung Kim
Abstract:
The aim of the present study is to analyze the development of osteoporotic vertebral bone induced by lack of calcium during the growth period and to analyze its effects on disc degeneration, based on a biomechanical and histomorphometrical study. The mechanical and histomorphological characteristics of the lumbar vertebral bones and discs of rats on a calcium free diet (CFD) were detected and tracked using high-resolution in-vivo micro-computed tomography (in-vivo micro-CT), finite element (FE) and histological analysis. Twenty female Sprague-Dawley rats (6 weeks old, approximate weight 170 g) were randomly divided into two groups (CFD group: 10, NOR group: 10). The CFD group was maintained on a refined calcium-controlled semisynthetic diet without added calcium, to induce osteoporosis. All lumbar vertebrae (L1–L6) were scanned using in-vivo micro-CT at 35 µm resolution at 0, 4 and 8 weeks to track the effects of the CFD on the development of osteoporosis. The findings of the present study indicated that calcium insufficiency was the main factor in the development of osteoporosis and that it induced lumbar vertebral disc degeneration. This study is the first to evaluate osteoporotic vertebral bone and disc degeneration induced by lack of calcium during the growth period from a biomechanical and histomorphometrical point of view.
Keywords: Calcium free diet, Disc degeneration, Osteoporosis, in-vivo micro-CT, Finite element analysis, Histology.
50 Statistical Analysis and Optimization of a Process for CO2 Capture
Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi
Abstract:
CO2 capture and storage technologies play a significant role in contributing to the control of climate change through the reduction of carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process, where carbon dioxide is passed into pH adjusted high salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. This process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and higher pH level without the use of ammonia. The process was tested in a bubble column semi-batch reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of major operating parameters based on four levels and variables in Central Composite Design (CCD). The operating parameters were gas flow rate (0.5–1.5 L/min), reactor temperature (10–50 °C), buffer concentration (0.2–2.6%) and water salinity (25–197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99% and the maximum sodium removal was 35%. The experimental and predicted values were within 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal using the modified Solvay method.
Keywords: Bubble column reactor, CO2 capture, Response Surface Methodology, water desalination.
49 Evaluation of Mixed-Mode Stress Intensity Factor by Digital Image Correlation and Intelligent Hybrid Method
Authors: K. Machida, H. Yamada
Abstract:
Displacement measurement was conducted on compact normal and shear specimens made of a homogeneous acrylic material subjected to mixed-mode loading, using digital image correlation. The intelligent hybrid method proposed by Nishioka et al. was applied to the stress-strain analysis near the crack tip. The accuracy of the stress intensity factor at the free surface is discussed from the viewpoint of both the experiment and 3-D finite element analysis. The surface images before and after deformation were taken by a CMOS camera, and we developed a system which enables real-time stress analysis based on digital image correlation and inverse problem analysis. The greater portion of the processing time of this system was spent on displacement analysis, so we worked on improving the speed of this part. In the case of a cracked body, it is also possible to evaluate fracture mechanics parameters such as the J integral, the strain energy release rate, and the mixed-mode stress intensity factor. The nine-point elliptic paraboloid approximation could not analyze displacements of submicron order with high accuracy. The analysis accuracy of the displacement was improved considerably by introducing the Newton-Raphson method, taking the deformation of a subset into consideration. The stress intensity factor was evaluated with high accuracy, with an error of less than 1%.
Keywords: Digital image correlation, mixed mode, Newton-Raphson method, stress intensity factor.
48 Information Retrieval: A Comparative Study of Textual Indexing Using an Oriented Object Database (db4o) and the Inverted File
Authors: Mohammed Erritali
Abstract:
The growth in the volume of text data, such as books and articles held in libraries over centuries, has made it necessary to establish effective mechanisms to locate them. Early techniques such as abstracting, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus), allowing a user to find the documents relevant to him, that is to say, the contents which match the information needs of the user. Most information retrieval models use a specific data structure to index a corpus, called the "inverted file" or "reverse index". This inverted file collects information on all terms over the corpus documents, specifying the identifiers of the documents that contain the term in question, the frequency of each term in the documents of the corpus, the positions of the occurrences of the word... In this paper we use an object-oriented database (db4o) instead of the inverted file; that is to say, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is to carry out a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption, using a large volume of data.
Keywords: Information Retrieval, indexation, oriented object database (db4o), inverted file.
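For reference, a minimal inverted file can be sketched in a few lines. The structure below (term → document → positions) mirrors the description above, while the example documents and the in-memory dictionary representation are purely illustrative; they stand in for neither the authors' implementation nor db4o's API.

```python
from collections import defaultdict

class InvertedIndex:
    """Minimal inverted file: term -> {doc_id: [positions]}, plus term frequencies."""

    def __init__(self):
        self.postings = defaultdict(lambda: defaultdict(list))

    def add_document(self, doc_id, text):
        for pos, term in enumerate(text.lower().split()):
            self.postings[term][doc_id].append(pos)

    def search(self, term):
        """Return {doc_id: frequency} for the documents containing the term."""
        entry = self.postings.get(term.lower(), {})
        return {doc: len(positions) for doc, positions in entry.items()}

index = InvertedIndex()
index.add_document(1, "information retrieval with an inverted file")
index.add_document(2, "object database versus inverted file for retrieval")
print(index.search("retrieval"))   # -> {1: 1, 2: 1}
```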
47 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes
Authors: V. Churkin, M. Lopatin
Abstract:
The purpose of the paper is to estimate the US small wind turbines market potential and to forecast small wind turbine sales in the US. The forecasting method is based on the application of the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. In this work an exponential distribution is used for modeling replacement purchases; the single parameter of this distribution is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimate of the US average market potential of small wind turbines (for adoption purchases) without account of price changes is 57080 (confidence interval from 49294 to 64866 at P = 0.95) for an average turbine lifetime of 15 years, and 62402 (confidence interval from 54154 to 70648 at P = 0.95) for an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model, which required a price forecast; for this, a polynomial regression function based on the Berkeley Lab statistics was used. The estimate of the US average market potential of small wind turbines (for adoption purchases) in that case is 42542 (confidence interval from 32863 to 52221 at P = 0.95) for an average lifetime of 15 years, and 47426 (confidence interval from 36092 to 58760 at P = 0.95) for an average lifetime of 20 years. In the first case the explained variance is 95.3%, while in the second it is 95.3%.
Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States.
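A basic version of the fitting step can be sketched with nonlinear least squares on the plain Bass model. The annual sales figures below are invented stand-ins for the AWEA statistics, and replacement purchases and the price term of the generalized Bass model are deliberately omitted from this simplified illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions under the Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def bass_annual_sales(t, m, p, q):
    """Adoption purchases per year (difference of cumulative adoptions)."""
    return bass_cumulative(t, m, p, q) - bass_cumulative(t - 1.0, m, p, q)

# Illustrative annual unit sales for years 1..12 (stand-in for the 2001-2012 data).
years = np.arange(1, 13, dtype=float)
sales = np.array([900, 1300, 1800, 2500, 3300, 4200, 5000, 5600,
                  5900, 5800, 5400, 4800], dtype=float)

params, _ = curve_fit(bass_annual_sales, years, sales,
                      p0=(60000, 0.01, 0.4),
                      bounds=([1000, 1e-4, 1e-3], [5e5, 1.0, 3.0]))
m, p, q = params
print(f"market potential m = {m:.0f}, innovation p = {p:.4f}, imitation q = {q:.4f}")
print("forecast adoption sales, year 13:", round(bass_annual_sales(13.0, m, p, q)))
```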
46 Lower energy Gait Pattern Generation in 5-Link Biped Robot Using Image Processing
Authors: Byounghyun Kim, Youngjoon Han, Hernsoo Hahn
Abstract:
The purpose of this study is to find a natural gait for a biped robot, similar to that of a human being, by analyzing the COG (Center of Gravity) trajectory of human gait. It is observed that human gait naturally maintains stability and uses minimum energy. This paper intends to find a natural gait pattern for a biped robot that uses minimum energy while maintaining stability, by analyzing the human gait pattern measured from gait images on the sagittal plane and the COG trajectory on the frontal plane. It is not possible to apply the joint torques of a human directly to a biped robot because they have different degrees of freedom; nonetheless, humans and 5-link biped robots are similar in kinematics. For this reason, we generate the gait pattern of the 5-link biped robot using a GA-based gait pattern adaptation algorithm which utilizes the human ZMP (Zero Moment Point) and the torques of all joints measured from the human gait pattern. The proposed algorithm creates a fluent gait pattern for the biped robot, similar to that of a human being, and minimizes energy consumption, because the gait pattern of the 5-link biped robot model is derived by considering the torque of each human joint on the sagittal plane and the ZMP trajectory on the frontal plane. This paper demonstrates that the proposed algorithm is superior by evaluating two gait patterns for the 5-link biped robot: one generated in the general way using inverse kinematics, and one generated in the proposed way considering visual appearance and efficiency.
Keywords: 5-link biped robot, gait pattern, COG (Center of Gravity), ZMP (Zero Moment Point).
45 Optimal Consumption of NaOH in Starches Gelatinization for Froth Flotation
Authors: André C. Silva, Débora N. Sousa, Elenice M. S. Silva, Thales P. Fontes, Raphael S. Tomaz
Abstract:
Starches are widely used as depressants in froth flotation operations in Brazil due to their efficiency, increasing the selectivity in the inverse flotation of quartz while depressing iron ore. The starch market has been growing and improving in recent years, leading to better products that meet the requirements of the mineral industry. The major source of starch used for iron ore is corn starch, which needs to be gelatinized with sodium hydroxide (NaOH) prior to use. This stage has a direct impact on industrial costs, since lower consumption of NaOH in gelatinization provides better control of the pH in the froth flotation and reduces the amount of electrolytes present in the pulp. In order to evaluate the degree of gelatinization, different starches and flours were subjected to NaOH addition and temperature variation experiments. Samples of starch (corn, cassava, HIPIX 100, HIPIX 101 and HIPIX 102, commercialized by Ingredion) and flour (cassava and potato) were tested. The starch samples were characterized through scanning electron microscopy, and the amylose content was determined through spectrometry, swelling and solubility tests. The gelatinization was carried out through titration with NaOH, keeping the solution temperature constant at 40 °C. At the end of the tests, the optimal amount of NaOH needed to gelatinize the starch or flour from the different botanical sources was established, and a correlation was obtained between the amylopectin content of the starch and the starch/NaOH ratio needed for its gelatinization.
Keywords: Froth flotation, gelatinization, sodium hydroxide, starches and flours.
44 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band
Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant Kumar Srivastava
Abstract:
An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis was done between the scattering coefficients and the soil moisture content to select the incidence angle most suitable for retrieval of soil moisture content; the 25° incidence angle was found most suitable. Support vector regression was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE) and Nash-Sutcliffe Efficiency (NSE). The values of %Bias, RMSE and NSE were found to be 2.9451, 1.0986 and 0.9214, respectively, at HH-polarization. At VV-polarization, the values of %Bias, RMSE and NSE were found to be 3.6186, 0.9373 and 0.9428, respectively.
Keywords: Bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE.
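The regression-and-evaluation step can be illustrated as below: an SVR model maps scattering coefficients to soil moisture and is scored with %Bias, RMSE and NSE. The synthetic backscatter-moisture relation and the kernel hyperparameters are assumptions, not the measured 25° data or the settings used in the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Toy stand-in for the 25-degree incidence-angle data: scattering coefficients (dB)
# at VV and HH polarization versus volumetric soil moisture.
moisture = rng.uniform(0.05, 0.40, 300)
sigma_vv = -18 + 35 * moisture + rng.normal(0, 0.5, moisture.size)
sigma_hh = -20 + 32 * moisture + rng.normal(0, 0.5, moisture.size)
X = np.column_stack([sigma_vv, sigma_hh])
y = moisture * 100.0                     # express moisture in percent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)
pred = model.predict(X_te)

bias = 100.0 * (pred - y_te).sum() / y_te.sum()                  # %Bias
rmse = np.sqrt(np.mean((pred - y_te) ** 2))                      # RMSE
nse = 1.0 - ((y_te - pred) ** 2).sum() / ((y_te - y_te.mean()) ** 2).sum()  # NSE
print(f"%Bias={bias:.3f}  RMSE={rmse:.3f}  NSE={nse:.3f}")
```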
43 An Embedded System for Artificial Intelligence Applications
Authors: Ioannis P. Panagopoulos, Christos C. Pavlatos, George K. Papakonstantinou
Abstract:
Conventional approaches in the implementation of logic programming applications on embedded systems are solely of software nature. As a consequence, a compiler is needed that transforms the initial declarative logic program to its equivalent procedural one, to be programmed to the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. On the contrary, presenting hardware implementations which are only capable of supporting logic programs prevents their use in applications where logic programs need to be intertwined with traditional procedural ones, for a specific application. We exploit HW/SW codesign methods to present a microprocessor, capable of supporting hybrid applications using both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations and combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, thus hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximate 1000% increase in performance) and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C-language called C-AG.
Keywords: Attribute Grammars, Logic Programming, RISC microprocessor.
42 A Specification-Based Approach for Retrieval of Reusable Business Component for Software Reuse
Authors: Meng Fanchao, Zhan Dechen, Xu Xiaofei
Abstract:
Software reuse can be considered the most realistic and promising way to improve software engineering productivity and quality. Automated assistance for software reuse involves the representation, classification, retrieval and adaptation of components. The representation and retrieval of components are important to software reuse in Component-Based Software Development (CBSD). However, current industrial component models mainly focus on implementation techniques and ignore the semantic information about components, so it is difficult to retrieve the components that satisfy the user's requirements. This paper presents a method of business component retrieval based on specification matching to support software reuse in enterprise information systems. First, a reuse-oriented business component model is proposed. In our model, the business data type is represented as a sign data type based on XML, which can express the variable business data types that describe the variety of business operations. Based on this model, we propose specification match relationships on two levels: the business operation level and the business component level. At the business operation level, we use input business data types, output business data types and the taxonomy of business operations to evaluate the similarity between business operations. At the business component level, we propose five specification matches between business components. To retrieve reusable business components, we propose measures of similarity degree to calculate the similarities between business components. Finally, a business component retrieval command similar to SQL is proposed to help the user retrieve approximate business components from the component repository.
Keywords: Business component, business operation, business data type, specification matching.
41 Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework
Authors: T. P. Athira, Gibin Chacko George
Abstract:
This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as estimating the noiseless and missing samples under the same framework of optimal estimation. Initially, the DT-CWT is used to decompose an input low-resolution noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square error (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates of it along different directions and then fuse those directional estimates for a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily reused to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many of the existing denoising and interpolation methods.
Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.
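The core LMMSE shrinkage idea can be shown on a 1-D signal: each noisy sample is pulled toward a local mean in proportion to the estimated local signal variance. This is a generic illustration with an assumed window size and noise level, not the paper's directional DT-CWT subband estimator.

```python
import numpy as np

def lmmse_denoise(y, noise_var, window=9):
    """Pointwise LMMSE shrinkage of noisy samples y (1-D illustration).

    The local signal variance is estimated in a sliding window and each sample is
    shrunk toward the local mean: x_hat = mu + s2 / (s2 + noise_var) * (y - mu).
    """
    y = np.asarray(y, float)
    pad = window // 2
    ypad = np.pad(y, pad, mode="reflect")
    kernel = np.ones(window) / window
    mu = np.convolve(ypad, kernel, mode="valid")                 # local mean
    var = np.convolve(ypad ** 2, kernel, mode="valid") - mu ** 2 # local raw variance
    s2 = np.maximum(var - noise_var, 0.0)                        # signal variance estimate
    return mu + s2 / (s2 + noise_var) * (y - mu)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + rng.normal(0, 0.3, clean.size)
denoised = lmmse_denoise(noisy, noise_var=0.3 ** 2)
print("noisy MSE:", round(np.mean((noisy - clean) ** 2), 4),
      "denoised MSE:", round(np.mean((denoised - clean) ** 2), 4))
```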
40 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir
Authors: Ahmad Fahim Nasiry, Shigeo Honma
Abstract:
We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluids are immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplacian and Darcy's law are used to study fluid flow through the porous medium, and the Laplacian that defines the dispersion and diffusion of fluid in sand containing heavy oil is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the governing partial differential equation. The two main quantities observed are the water saturation and the pressure distribution in the reservoir; they are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. In the numerical simulations, the saturation and pressure equations are solved by the iterative alternating direction implicit (IADI) method and the iterative alternating direction explicit (IADE) method, respectively, under the finite difference assumption. To better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour map of the water distribution for the five-spot pattern, which provides an approximate solution in good agreement with experimental results, is also presented. Finally, a computer program is developed to solve the pressure and water saturation equations and to draw the pressure and water distribution contour lines for the reservoir.
Keywords: Numerical simulation, immiscible, finite difference, IADI, IADE, waterflooding.
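A stripped-down version of the saturation transport can be sketched in one dimension with an explicit upwind scheme. The quadratic relative permeabilities, viscosities and rock/flow constants below are assumptions for illustration; the paper's two-dimensional IADI/IADE pressure-saturation solution is not reproduced here.

```python
import numpy as np

# Minimal 1-D Buckley-Leverett waterflood (explicit upwind finite differences).
mu_w, mu_o = 1.0, 5.0            # assumed water / oil viscosities (cP)
phi, u = 0.2, 1.0e-5             # assumed porosity (-) and total Darcy velocity (m/s)

def fractional_flow(sw):
    krw, kro = sw ** 2, (1.0 - sw) ** 2      # quadratic relative permeabilities
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

nx, length = 200, 100.0
dx = length / nx
dt = 0.1 * phi * dx / u          # conservative (CFL-safe) time step
n_steps = 600

sw = np.full(nx, 0.1)            # initial (connate) water saturation
sw[0] = 1.0                      # water injected at the inlet face

for _ in range(n_steps):
    f = fractional_flow(sw)
    sw[1:] -= (u * dt / (phi * dx)) * (f[1:] - f[:-1])   # upwind in the flow direction
    sw[0] = 1.0
    sw = np.clip(sw, 0.1, 1.0)

front = np.argmax(sw < 0.2) * dx              # approximate water-front position
print(f"water front near x = {front:.1f} m after {n_steps * dt / 86400:.1f} days")
```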
39 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures; on the other hand, its main deficiency is the computational time required to obtain the results. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of the seismic analysis makes the optimization algorithms more practical. The available approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of the main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure (for which the modal displacements have been calculated). Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by applying three types of earthquakes (classified by the time of peak ground acceleration).
Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.
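The SRSS and CQC combination rules mentioned above are standard and compact enough to sketch directly; the CQC correlation coefficients below use the usual equal-damping form, and the example peak responses, frequencies and 5% damping ratio are assumed values, not the paper's case-study data.

```python
import numpy as np

def srss(peak_modal_responses):
    """Square Root of the Sum of Squares combination of peak modal responses."""
    r = np.asarray(peak_modal_responses, float)
    return np.sqrt((r ** 2).sum())

def cqc(peak_modal_responses, frequencies, damping=0.05):
    """Complete Quadratic Combination with the standard correlation coefficients
    for equal modal damping."""
    r = np.asarray(peak_modal_responses, float)
    w = np.asarray(frequencies, float)
    beta = w[None, :] / w[:, None]                 # modal frequency ratios
    z = damping
    rho = (8 * z ** 2 * (1 + beta) * beta ** 1.5) / \
          ((1 - beta ** 2) ** 2 + 4 * z ** 2 * beta * (1 + beta) ** 2)
    return np.sqrt(r @ rho @ r)

# Example: assumed peak responses of three modes (e.g. story drifts) and frequencies (Hz).
peaks = [0.032, 0.011, 0.004]
freqs = [1.2, 3.5, 6.1]
print("SRSS:", round(srss(peaks), 4), "CQC:", round(cqc(peaks, freqs), 4))
```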
38 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis
Authors: Petr Gurný
Abstract:
One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution's PD according to credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. Afterwards these models are compared and verified on a control sample with a view to choosing the best one. The second part of the paper is aimed at the application of the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. The values of particular indicators are therefore sampled randomly and the distribution of PD is estimated, assuming that the indicators are distributed according to a multidimensional subordinated Lévy model (specifically, the Variance Gamma model and the Normal Inverse Gaussian model). Although the obtained results show that all the banks are relatively healthy, there is still a high chance that "a financial crisis" will occur, at least in terms of probability. This is indicated by the estimation of various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
Keywords: Credit-scoring Models, Multidimensional Subordinated Lévy Model, Probability of Default.
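The logit branch of such a credit-scoring model can be illustrated in a few lines: a logistic regression is fitted to indicator/default pairs and then queried for the PD of a new institution. The indicators, coefficients and data below are simulated for illustration and do not reflect the bank sample or the indicator set used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Toy bank sample: two hypothetical financial ratios (capital adequacy, ROA)
# and a simulated default flag.
n = 300
capital_ratio = rng.normal(0.12, 0.04, n)
roa = rng.normal(0.01, 0.02, n)
logit = 2.0 - 30.0 * capital_ratio - 50.0 * roa      # assumed latent score
default = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([capital_ratio, roa])
model = LogisticRegression().fit(X, default)          # logit credit-scoring model

# Probability of default for a hypothetical bank with given indicator values.
bank = np.array([[0.08, -0.005]])
print("estimated PD:", round(model.predict_proba(bank)[0, 1], 3))
```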
37 Simulation of Lid Cavity Flow in Rectangular, Half-Circular and Beer Bucket Shapes using Quasi-Molecular Modeling
Authors: S. Kulsri, M. Jaroensutasinee, K. Jaroensutasinee
Abstract:
We developed a new method based on quasi-molecular modeling to simulate cavity flow in three cavity shapes: rectangular, half-circular and beer bucket, in cgs units. Each quasi-molecule was a group of particles that interacted in a fashion entirely analogous to classical Newtonian molecular interactions. When a cavity flow was simulated, the instantaneous velocity vector fields were obtained using an inverse distance weighted interpolation method. In all three cavity shapes, the fluid motion rotated counter-clockwise. The velocity vector fields of the three cavity shapes showed a primary vortex located near the upstream corners at times t ~ 0.500 s, t ~ 0.450 s and t ~ 0.350 s, respectively. The configurational kinetic energy of the cavities increased with time until it reached a maximum at t ~ 0.02 s, and then decreased as time increased. The rectangular cavity system showed the lowest kinetic energy, while the half-circular cavity system showed the highest. The kinetic energies of the rectangular, beer bucket and half-circular cavities fluctuated about stable average values of 35.62 × 10³, 38.04 × 10³ and 40.80 × 10³ ergs/particle, respectively. This indicates that the half-circular shape is the most suitable for a shrimp pond, because the water flows best in it compared with the rectangular and beer bucket shapes.
Keywords: Quasi-molecular modelling, particle modelling, lid driven cavity flow.
36 Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem
Authors: G. M. Karthik, Ramachandra V. Pujeri
Abstract:
The Generalized Center String (GCS) problem is a generalization of the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates. The longest center string is not known in advance for any particular biological gene process, and the sequences may not contain any motifs. GCS can be solved by frequent pattern-mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. Efficient methods known as Bpriori algorithms can solve GCS with reasonable time/space complexities; the Bpriori 2 and Bpriori 3-2 algorithms have been proposed for center strings of any length and for any positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm by a Constraint-Based Frequent Pattern mining (CBFP) technique which integrates the ideas of constraint-based mining and FP-tree mining. The CBFP mining technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequence. The CBFP mining technique constructs a trie-like structure together with the FP-tree to represent the mutated copies of center strings of any length, along with constraints to restrain the growth of the consensus tree. The complexity analysis of the CBFP mining technique and the Bpriori algorithm is done for both the worst case and the average case. The algorithm's correctness is demonstrated by comparison with the Bpriori algorithm using artificial data.
Keywords: Constraint based mining, FP-tree, data mining, GCS problem, CBFP mining technique.
35 Tool for Analysing the Sensitivity and Tolerance of Mechatronic Systems in Matlab GUI
Authors: Bohuslava Juhasova, Martin Juhas, Renata Masarova, Zuzana Sutova
Abstract:
The article deals with a tool in Matlab GUI form that is designed to analyse the sensitivity and tolerance of a mechatronic system. In the analysed mechatronic system, torque is transferred from the drive to the load through a coupling containing flexible elements. Different methods of control system design are used. The classic form of feedback control is designed using the Naslin method, the modulus optimum criterion and the inverse dynamics method. The cascade form of control is designed based on a combination of the modulus optimum criterion and the symmetric optimum criterion. The sensitivity is analysed on the basis of the absolute and relative sensitivity of the system function to a change in the value of a chosen parameter of the mechatronic system as well as of the control subsystem. The tolerance is analysed by determining the range of allowed relative changes of selected system parameters within the region of system stability. The tool allows analysis of the influence of the torsion stiffness, torsion damping, the inertia moments of the motor and the load, and the controller parameters. The sensitivity and tolerance are monitored in terms of the impact of a parameter change on the response, in the form of the system step response and the system frequency-response logarithmic characteristics. The Symbolic Math Toolbox was used to express the final form of the analysed system functions. The sensitivity and tolerance are graphically represented as a 2D graph of the sensitivity or tolerance of the system function and as 3D/2D static/interactive graphs of the step/frequency response.
Keywords: Mechatronic systems, Matlab GUI, sensitivity, tolerance.
34 Novel Adaptive Channel Equalization Algorithms by Statistical Sampling
Authors: János Levendovszky, András Oláh
Abstract:
In this paper, novel statistical sampling based equalization techniques and CNN based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can result in interference which severely deteriorates the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods to combat interference by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients. This provides higher performance than traditional Minimum Mean Square Error equalization. Since the calculation of the BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate the gradient, which yields fast equalization and superior performance compared to the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds. A simple mechanism is derived to identify the dominant samples in real time, for the sake of efficient estimation. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is also demonstrated by extensive simulations. The paper has also developed a Cellular Neural Network (CNN) based approach to detection. In this case, fast quadratic optimization is carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of the method has also been analyzed by simulations.
Keywords: Cellular Neural Network, channel equalization, communication over fading channels, multiuser communication, spectral efficiency, statistical sampling.
33 The Location of Park and Ride Facilities Using the Fuzzy Inference Model
Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas
Abstract:
The paper presents a method in which expert knowledge is applied to a fuzzy inference model. Even a less experienced person, such as an urban planner or official, could benefit from the use of such a system. The analysis result is obtained in a very short time, so a large number of proposed locations can be verified quickly. The proposed method is intended for testing the locations of car parks in a city. The paper shows selected examples of locations of P&R facilities in cities planning to introduce P&R. Analyses of existing facilities are also shown and are confronted with the opinions of the system users, with particular emphasis on unpopular locations. The results of the analyses are compared with an expert analysis of P&R facility locations that was outsourced by the city and with opinions about existing facilities expressed by users on social networking sites. The obtained results are consistent with actual user feedback. The proposed method proves to be good, yet it does not require the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to indicate alternative locations of P&R facilities. Although the results of the method are approximate, they are no worse than the results of analyses by employed experts. The advantage of this method is its ease of use, which simplifies the professional expert analysis. The ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis by a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
Keywords: Fuzzy logic inference, P&R facilities, P&R location.
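A toy Mamdani-style inference step can be sketched as follows. The two criteria (distance to the city center and transit frequency), the triangular membership functions and the two rules are assumptions chosen purely for illustration; the paper's actual criteria and rule base are not reproduced here.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def pr_location_score(dist_to_center_km, transit_freq_per_h):
    """Toy Mamdani-style rating of a P&R site from two illustrative criteria.

    Assumed rules:
      R1: distance is moderate AND transit frequency is high -> suitability high
      R2: distance is small OR transit frequency is low      -> suitability low
    The crisp score is a weighted average of the rule outputs (0 = poor, 1 = good).
    """
    dist_moderate = tri(dist_to_center_km, 3.0, 7.0, 12.0)
    dist_small = tri(dist_to_center_km, -1.0, 0.0, 4.0)
    freq_high = tri(transit_freq_per_h, 4.0, 10.0, 16.0)
    freq_low = tri(transit_freq_per_h, -1.0, 0.0, 5.0)

    r1 = min(dist_moderate, freq_high)        # AND -> min
    r2 = max(dist_small, freq_low)            # OR  -> max
    if r1 + r2 == 0.0:
        return 0.5                            # no rule fires: neutral score
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2)  # weighted-average defuzzification

print("site A:", round(pr_location_score(6.5, 12.0), 2))   # moderate distance, good transit
print("site B:", round(pr_location_score(1.0, 3.0), 2))    # too close, poor transit
```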
32 Twin-Screw Extruder and Effective Parameters on the HDPE Extrusion Process
Authors: S. A. Razavi Alavi, M. Torabi Angaji, Z. Gholami
Abstract:
In the polyethylene extrusion process, polymer material in powder or granule form undergoes compression, melting and conveying operations, and the extrudate is produced in a prescribed shape. Twin-screw extruders are used in industry because of their high capacity. The powder is mixed with chemical additives and melted by thermal and mechanical energy in three zones (feed, compression and metering zones) and, driven by the pressure of the gear pump and the screws, is converted to the final product at the last plate. Extruders with twin screws and a short distance between the screws are better than other types because of their high capacity and good handling of thermal and mechanical stress. In this paper, the polyethylene extrusion process and various types of extruders are studied. Exact control of the process is necessary to produce high-quality products with safe operation and optimum energy consumption. The granule size depends on the granulator motor speed: the results show that, at a constant feed rate, the granule size decreases with an increase in motor speed. The relationships between the HDPE feed flow rate x and the speeds of the main motor, gear pump and granulator motor are calculated as follows:
Main motor speed: yM = (-3.6076e-3) x^4 + (0.24597) x^3 + (-5.49003) x^2 + (64.22092) x + 61.66786 (1)
Gear pump speed: yG = (-2.4996e-3) x^4 + (0.18018) x^3 + (-4.22794) x^2 + (48.45536) x + 18.78880 (2)
Granulator motor speed (10th-degree polynomial fit): y = a + bx + cx^2 + dx^3 + ... (3)
with a = 1.2751, b = 282.4655, c = -165.2098, d = 48.3106, e = -8.18715, f = 0.84997, g = -0.056094, h = 0.002358, i = -6.11816e-5, j = 8.919726e-7, k = -5.59050e-9.
Keywords: Extrusion, extruder, granule, HDPE, polymer, twin-screw extruder.
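Fits of the kind reported in Eqs. (1)-(3) can be reproduced with ordinary polynomial regression. The feed-rate/speed pairs below are made-up placeholders, not the measured data behind the coefficients above; note that a 10th-degree fit like Eq. (3) would need at least 11 data points.

```python
import numpy as np

# Illustrative re-fit of a feed-rate/speed relation with an assumed data set.
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])       # HDPE feed flow rate
y = np.array([530.0, 940.0, 1280.0, 1550.0, 1760.0, 1900.0, 1980.0, 2010.0])  # motor speed

coeffs = np.polyfit(x, y, deg=4)            # 4th-degree fit, as in Eqs. (1) and (2)
poly = np.poly1d(coeffs)

print("coefficients (highest degree first):", np.round(coeffs, 4))
print("predicted speed at x = 9.0:", round(poly(9.0), 1))
```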
31 Optimizing the Components of Grid-Independent Microgrids for Rural Electrification Utilizing Solar Panel and Supercapacitor
Authors: Astiaj Khoramshahi, Hossein Ahmadi Danesh Ashtiani, Ahmad Khoshgard, Hamidreza Damghani, Leila Damghani
Abstract:
Rural electrification rates are generally low in Iran and in many parts of the world that lack sustainable renewable energy resources. Many homes rely on polluting solutions such as crude oil and diesel generators for lighting, heating and charging electrical gadgets. Small-scale portable solar battery packs are accessible to the public; however, they have low capacity and are challenging to distribute in developing countries. For the design of battery-based microgrid power systems, the load profile is one of the key parameters; additionally, the reliability of the system should be taken into account. A conventional microgrid system can be either AC or DC coupled. Both AC and DC microgrids have advantages and disadvantages depending on their application and can either be connected to the main grid or operate independently. This article proposes a tool for the optimal sizing of grid-independent microgrid systems by means of the corresponding analysis. For such an analysis, the type of power generation, the number of panels, the battery capacity, the microgrid size and the group of available consumers should be considered. The optimization of the different design scenarios is therefore based on the number of solar panels and super-storage (supercapacitor) units and on the range of the depth of discharge, in order to calculate the size and estimate the overall cost. Generally, it is observed that there is an inverse relationship between the depth of discharge and the cost of the solar microgrid.
Keywords: Storage, super-storage, grid-independent, economic factors, microgrid.
30 PoPCoRN: A Power-Aware Periodic Surveillance Scheme in Convex Region using Wireless Mobile Sensor Networks
Authors: A. K. Prajapati
Abstract:
In this paper, a periodic surveillance scheme is proposed for any convex region using mobile wireless sensor nodes. A sensor network typically consists of a fixed number of sensor nodes which report the measurements of sensed data such as temperature, pressure, humidity, etc., in their immediate proximity (the area within their sensing range). For the purpose of sensing an area of interest, an adequate number of fixed sensor nodes is required to cover the entire region of interest; the number of fixed sensor nodes required to cover a given area therefore depends on the sensing range of the sensor as well as on the deployment strategy employed. In our scheme, the sensors are assumed to be mobile within the region of surveillance and can be mounted on moving bodies such as robots or vehicles. Therefore, the surveillance time period determines the number of sensor nodes required to be deployed in the region of interest. The proposed scheme comprises three algorithms, namely Hexagonalization, Clustering and Scheduling. The first algorithm partitions the coverage area into fixed-size hexagons that approximate the sensing range (cell) of an individual sensor node. The clustering algorithm groups the cells into clusters, each of which will be covered by a single sensor node. The latter determines a schedule for each sensor to serve its respective cluster. Each sensor node traverses all the cells belonging to the cluster assigned to it by oscillating between the first and the last cell for the duration of its lifetime. Simulation results show that our scheme provides full coverage within a given period of time using few sensors with minimum movement, less power consumption, and relatively low infrastructure cost.
Keywords: Sensor network, graph theory, MSN, communication.