Search results for: Fast Decoupled method
6220 Prediction of Natural Gas Viscosity using Artificial Neural Network Approach
Authors: E. Nemati Lay, M. Peymani, E. Sanjari
Abstract:
The viscosity of natural gas is an important parameter in energy industries such as natural gas storage and transportation. In this study, the viscosity of different compositions of natural gas is modeled by an artificial neural network (ANN) trained with the back-propagation method. A reliable database of more than 3841 experimental viscosity data points is used for training and testing the ANN. The designed neural network predicts natural gas viscosity from pseudo-reduced pressure and pseudo-reduced temperature with an average absolute relative deviation (AARD) of 0.221%. The accuracy of the designed ANN has been compared with other published empirical models, and the comparison indicates that the proposed method provides accurate results.
Keywords: Artificial neural network, Empirical correlation, Natural gas, Viscosity
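For readers who want to see the shape of such a model, the following is a minimal back-propagation sketch in Python/NumPy; the 2-8-1 architecture, the synthetic data, and the toy viscosity law are illustrative assumptions, not the network or database of the study. The AARD figure of merit matches the metric quoted in the abstract.

```python
import numpy as np

# Minimal back-propagation MLP sketch: 2 inputs (pseudo-reduced pressure
# and temperature) -> 1 output (viscosity). Synthetic data for illustration
# only; the real study trained on 3841 experimental points.
rng = np.random.default_rng(0)
X = rng.uniform([0.2, 1.0], [15.0, 3.0], size=(500, 2))   # (Ppr, Tpr)
y = (0.01 * X[:, 0] / X[:, 1] + 0.005).reshape(-1, 1)      # assumed toy law

# Normalize inputs, initialize a 2-8-1 network.
Xn = (X - X.mean(0)) / X.std(0)
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(3000):
    H = np.tanh(Xn @ W1 + b1)          # hidden layer, forward pass
    pred = H @ W2 + b2                 # linear output layer
    err = pred - y
    # Back-propagate mean-squared-error gradients.
    gW2 = H.T @ err / len(y); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = Xn.T @ dH / len(y); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

aard = 100 * np.mean(np.abs(pred - y) / y)   # AARD% figure of merit
print(f"AARD = {aard:.3f}%")
```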
6219 Short-Term Load Forecasting Based on Variational Mode Decomposition and Least Square Support Vector Machine
Authors: Jiangyong Liu, Xiangxiang Xu, Bote Luo, Xiaoxue Luo, Jiang Zhu, Lingzhi Yi
Abstract:
To address the non-linearity and high randomness of the raw power load sequence, which degrade forecasting accuracy, a short-term load forecasting method is proposed. The method is based on a least square support vector machine (LSSVM) optimized by an improved sparrow search algorithm and combined with the variational mode decomposition proposed in this paper. Variational mode decomposition splits the raw load data into a series of intrinsic mode function components, which reduces the complexity and instability of the raw data while avoiding modal confounding; the improved sparrow search algorithm solves the difficult selection of the LSSVM learning parameters. Finally, comparison experiments show that the method effectively improves prediction accuracy.
Keywords: Load forecasting, variational mode decomposition, improved sparrow search algorithm, least square support vector machine.
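The LSSVM core of such a forecaster reduces to solving one linear (KKT) system. Below is a hedged sketch of that step alone; the VMD and sparrow-search stages are omitted, and the kernel width, regularization value, and lag structure are assumptions.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Least square SVM regression: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))            # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0; A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # bias b, multipliers alpha

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

# Toy usage: one-step-ahead load forecasting from 4 lagged samples.
rng = np.random.default_rng(1)
load = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)
X = np.column_stack([load[i:i + 290] for i in range(4)])
y = load[4:294]
b, alpha = lssvm_train(X[:200], y[:200])
pred = lssvm_predict(X[:200], alpha, b, X[200:])
print("test RMSE:", np.sqrt(np.mean((pred - y[200:])**2)))
```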
6218 Aircraft Selection Process Using Preference Analysis for Reference Ideal Solution (PARIS)
Authors: C. Ardil
Abstract:
Multiple criteria decision making analysis (MCDMA) methods are applied to many real-life problems in different fields of engineering science and technology. The "preference analysis for reference ideal solution (PARIS)" method is proposed for efficient MCDMA evaluation of decision problems. The multiple criteria aircraft evaluation approach integrates the mean weight, entropy weight, PARIS, and TOPSIS methods, which eliminates the subjective importance weight assignment process. The evaluation criteria were identified from an extensive literature review of the aircraft selection process. The aim of this study is to propose an efficient methodology for handling the aircraft selection process, in which the proposed method solves the MCDMA problem effectively. A numerical example is presented to demonstrate the applicability and validity of the proposed MCDMA approach.
Keywords: aircraft selection, aircraft, multiple criteria decision making, multiple criteria decision making analysis, mean weight, entropy weight, MCDMA, PARIS, TOPSIS, VIKOR, ELECTRE, PROMETHEE
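A compact sketch of the objective-weighting idea: entropy weights feeding a standard TOPSIS ranking. The PARIS aggregation itself is not reproduced here, all criteria are treated as benefit-type, and the decision matrix is made up for illustration.

```python
import numpy as np

# Entropy weighting + TOPSIS sketch on an assumed 4-alternative,
# 3-criterion decision matrix (all criteria treated as benefit-type).
D = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 8.0]])

# Entropy weights: low-entropy (high-contrast) criteria get more weight.
P = D / D.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(len(D))
w = (1 - E) / (1 - E).sum()

# TOPSIS: vector-normalize, weight, measure distances to ideal points.
R = D / np.sqrt((D**2).sum(axis=0))
V = R * w
d_best = np.sqrt(((V - V.max(axis=0))**2).sum(axis=1))
d_worst = np.sqrt(((V - V.min(axis=0))**2).sum(axis=1))
closeness = d_worst / (d_best + d_worst)
print("weights:", w.round(4))
print("ranking (best first):", np.argsort(-closeness) + 1)
```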
6217 Learning FCM by Tabu Search
Authors: Somayeh Alizadeh, Mehdi Ghazanfari, Mostafa Jafari, Salman Hooshmand
Abstract:
A Fuzzy Cognitive Map (FCM) is a causal graph that shows the relations between the essential components of a complex system. Experts who are familiar with the system components and their relations can generate a related FCM. A gap appears when human experts cannot produce the FCM, or when no expert is available to produce it; a new mechanism must therefore be used to bridge this gap. In this paper, a novel learning method is proposed to construct the causal graph from historical data using a metaheuristic, namely Tabu Search (TS). The efficiency of the proposed method is shown by comparing its results on several numerical examples with those of other methods.
Keywords: Fuzzy Cognitive Map (FCM), Learning, Meta heuristic, Genetic Algorithm, Tabu search.
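A hedged sketch of the idea: the FCM update rule A(t+1) = sigmoid(A(t)·W) is simulated against historical activation data, and a small tabu search perturbs one edge weight at a time while forbidding recently changed edges. The map size, step size, and tabu tenure are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
sig = lambda x: 1 / (1 + np.exp(-x))

def simulate(W, A0, steps):
    """Iterate the FCM activation rule A(t+1) = sigmoid(A(t) @ W)."""
    seq = [A0]
    for _ in range(steps):
        seq.append(sig(seq[-1] @ W))
    return np.array(seq)

# Historical data generated from a hidden 4-concept FCM (assumption).
n = 4
W_true = rng.uniform(-1, 1, (n, n)); np.fill_diagonal(W_true, 0)
hist = simulate(W_true, rng.uniform(0, 1, n), 10)

def error(W):
    return np.abs(simulate(W, hist[0], len(hist) - 1) - hist).sum()

# Tabu search: try single-weight moves, forbid recently used edges.
W = np.zeros((n, n)); best_W, best_e = W.copy(), error(W)
tabu = []
for it in range(400):
    moves = []
    for i in range(n):
        for j in range(n):
            if i == j or (i, j) in tabu:
                continue
            for delta in (-0.1, 0.1):
                Wc = W.copy()
                Wc[i, j] = np.clip(Wc[i, j] + delta, -1, 1)
                moves.append((error(Wc), Wc, (i, j)))
    e, W, move = min(moves, key=lambda m: m[0])   # best admissible neighbor
    tabu.append(move)
    if len(tabu) > 5:            # fixed-length tabu list
        tabu.pop(0)
    if e < best_e:
        best_e, best_W = e, W.copy()
print("best simulation error:", round(best_e, 4))
```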
6216 Freighter Aircraft Selection Using Entropic Programming for Multiple Criteria Decision Making Analysis
Authors: C. Ardil
Abstract:
This paper proposes entropic programming for the freighter aircraft selection problem using the multiple criteria decision making analysis method. The study aims to propose a systematic and comprehensive framework focused on freighter aircraft selection. To achieve this goal, an integrated entropic programming approach was proposed to evaluate and rank alternatives. The decision criteria and aircraft alternatives were identified from the research data analysis. The objective criteria weights were determined by the mean weight method and the standard deviation method. The proposed entropic programming model was applied to a practical decision problem for evaluating and selecting freighter aircraft. The proposed entropic programming technique gives robust, reliable, and efficient results in modeling decision making analysis problems. As a result of the entropic programming analysis, the Boeing B747-8F (alternative a3) was chosen as the most suitable freighter aircraft candidate.
Keywords: entropic programming, additive weighted model, multiple criteria decision making analysis, MCDMA, TOPSIS, aircraft selection, freighter aircraft, Boeing B747-8F, Boeing B777F, Airbus A350F
6215 3D Dense Correspondence for 3D Dense Morphable Face Shape Model
Authors: Tae in Seol, Sun-Tae Chung, Seongwon Cho
Abstract:
A realistic 3D face model is desired in various applications such as face recognition, games, avatars, and animations. Construction of a 3D face model consists of 1) building a face shape model and 2) rendering it; building a realistic 3D face shape model is therefore an essential step toward a realistic 3D face model. Recently, the 3D morphable model has been successfully introduced to deal with the variety of human face shapes. The 3D dense correspondence problem must first be resolved in order to construct a realistic 3D dense morphable face shape model. Several approaches to the 3D dense correspondence problem in 3D face modeling have been proposed; among them, optical flow based algorithms and TPS (Thin Plate Spline) based algorithms are representative. Optical flow based algorithms require texture information of faces, which is sensitive to variation of illumination. In the TPS based algorithms proposed so far, the TPS process is performed on a 2D projection of the 3D face data in cylindrical coordinates, not directly on the 3D face data, so errors due to distortion of the data during the 2D TPS process may be inevitable. In this paper, we propose a new 3D dense correspondence algorithm for 3D dense morphable face shape modeling. The proposed algorithm does not need texture information and applies TPS directly to the 3D face data. Through the construction procedures, it is observed that the proposed algorithm constructs a realistic 3D morphable face model reliably and fast.
Keywords: 3D Dense Correspondence, 3D Morphable Face Shape Model, 3D Face Modeling.
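A minimal sketch of the TPS-on-3D-data idea using SciPy's thin plate spline interpolator; the landmark sets and mesh are random stand-ins, not face data, and this is not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hedged sketch of the TPS step applied directly to 3D points: a thin
# plate spline warp is fitted from template landmarks to target
# landmarks, then used to deform every template vertex, bringing the
# meshes into dense correspondence. All point values are made up.
rng = np.random.default_rng(5)
tmpl_marks = rng.uniform(-1, 1, (20, 3))            # template landmarks
targ_marks = tmpl_marks + 0.05 * rng.standard_normal((20, 3))

# One vector-valued interpolator maps R^3 -> R^3.
warp = RBFInterpolator(tmpl_marks, targ_marks, kernel="thin_plate_spline")

vertices = rng.uniform(-1, 1, (1000, 3))            # full template mesh
warped = warp(vertices)                             # densely deformed mesh
print("mean landmark fit error:",
      np.linalg.norm(warp(tmpl_marks) - targ_marks, axis=1).mean())
```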
6214 Automatic Generation of Ontology from Data Source Directed by Meta Models
Authors: Widad Jakjoud, Mohamed Bahaj, Jamal Bakkas
Abstract:
In this paper, we present a method for the automatic generation of an ontological model from any data source using Model Driven Architecture (MDA); this generation supports cooperation between knowledge engineering and software engineering. Reverse engineering of a data source generates a software model (a schema of the data) that then undergoes transformations to produce the ontological model. The method uses meta-models to validate the software and ontological models.
Keywords: Meta model, model, ontology, data source.
6213 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector
Authors: H. Aldousari, T. Buchacher, N. M. Spyrou
Abstract:
Cerium-doped lanthanum bromide LaBr3:Ce(5%) crystals are considered to be among the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time, and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least the source-to-detector distance, in order to obtain reliable efficiency measurements. In this study, a commercially available 25 mm x 25 mm BrilLanCeTM 380 LaBr3:Ce(5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up, when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression, as otherwise pile-up corrections would be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
Keywords: BrilLanCeTM 380 LaBr3:Ce(5%), Coincidence summing, GATE simulation, Geometric efficiency
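The solid-angle effect described above can be made concrete. For a point source on the axis of a flat circular detector face of radius r at distance d, the geometric efficiency is (1 - d/sqrt(d^2 + r^2))/2; the sketch below evaluates this factor for a 25 mm diameter face at the four measured distances (geometry only, not the full detector response).

```python
import numpy as np

# Geometric efficiency (solid-angle fraction) of a point source on the
# axis of a flat circular detector face: eff = (1 - d/sqrt(d^2+r^2)) / 2.
r = 1.25                            # face radius in cm (25 mm diameter)
for d in (5.0, 10.0, 15.0, 20.0):   # source-to-detector distances in cm
    eff = 0.5 * (1 - d / np.sqrt(d**2 + r**2))
    print(f"d = {d:4.1f} cm : geometric efficiency = {eff:.5f}")
```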
6212 Moving Vehicles Detection Using Automatic Background Extraction
Authors: Saad M. Al-Garni, Adel A. Abdennour
Abstract:
Vehicle detection is the critical step in highway monitoring. In this paper, we propose a background subtraction and edge detection technique for vehicle detection that combines the advantages of both approaches. The method consists of two procedures: first, an automatic background extraction procedure, in which the background is extracted automatically from successive frames; second, a vehicle detection procedure, which depends on edge detection and background subtraction. Experimental results show the effectiveness of this algorithm in practical applications; the vehicle detection rate was higher than 91%.
Keywords: Image processing, Automatic background extraction, Moving vehicle detection.
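A minimal sketch of the two procedures, assuming a running-average background model and a simple gradient edge check; the thresholds and synthetic frames are illustrative, not the paper's data.

```python
import numpy as np

def extract_background(frames, alpha=0.05):
    """Automatic background extraction by running average over frames."""
    bg = frames[0].astype(float)
    for f in frames[1:]:
        bg = (1 - alpha) * bg + alpha * f    # slow update absorbs motion
    return bg

def detect_vehicles(frame, bg, diff_thresh=30):
    """Background subtraction followed by a simple gradient edge check."""
    diff = np.abs(frame.astype(float) - bg) > diff_thresh
    gy, gx = np.gradient(frame.astype(float))
    edges = np.hypot(gx, gy) > 20            # assumed edge threshold
    return diff & edges                      # mask of moving edge pixels

# Toy usage on synthetic 64x64 frames: static scene + moving bright block.
rng = np.random.default_rng(3)
scene = rng.integers(40, 60, (64, 64))
frames = []
for t in range(20):
    f = scene.copy()
    f[20:30, 3 * t:3 * t + 8] = 220          # "vehicle" sweeping right
    frames.append(f)
bg = extract_background(frames)
mask = detect_vehicles(frames[-1], bg)
print("moving-edge pixels detected:", int(mask.sum()))
```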
6211 A Diagnostic Fuzzy Rule-Based System for Congenital Heart Disease
Authors: Ersin Kaya, Bulent Oran, Ahmet Arslan
Abstract:
In this study, a fuzzy rule-based classifier is used for the diagnosis of congenital heart disease, defined as structural or functional heart disease. Medical data sets were obtained from the Pediatric Cardiology Department at Selcuk University for the years 2000 to 2003. First, fuzzy rules were generated from the medical data; then the weights of the fuzzy rules were calculated. Two different reasoning methods, the "weighted vote method" and the "single winner method", were used in this study, and the results of the fuzzy classifiers were compared.
Keywords: Congenital heart disease, Fuzzy rule-based classifiers, Classification.
6210 Pruning Method of Belief Decision Trees
Authors: Salsabil Trabelsi, Zied Elouedi, Khaled Mellouli
Abstract:
The belief decision tree (BDT) approach is a decision tree method for uncertain environments, where the uncertainty is represented through the Transferable Belief Model (TBM), one interpretation of belief function theory. The uncertainty can appear either in the actual class of training objects or in the attribute values of objects to classify. In this paper, we develop a post-pruning method for belief decision trees in order to reduce their size and improve classification accuracy on unseen cases. The pruning of decision trees has attracted considerable attention in machine learning.
Keywords: machine learning, uncertainty, belief function theory, belief decision tree, pruning.
6209 Analysis of the Shielding Effectiveness of Several Magnetic Shields
Authors: Diako Azizi, Hosein Heydari, Ahmad Gholami
Abstract:
Today, with the rapid growth of telecommunications and electronic equipment and the continuing expansion of power networks, the influence of electromagnetic waves on one another has become a hot topic of discussion. This article therefore presents this issue and appropriate mechanisms for EMC operations. First, a source of alternating current (50 Hz) and a victim are placed at a certain distance from each other. With this simple model, the effects of electromagnetic radiation from the source on the victim are investigated, and several methods to reduce these effects are presented, using both passive and active shields. At several steps, the shielding effectiveness of the proposed shields is compared. It should be noted that the simulations were carried out with the finite element method (FEM).
Keywords: Electrical field, field distribution, finite element method, shielding effectiveness
6208 Preparation and Evaluation of New Nanocatalysts for Selective Oxidation of H2S to Sulfur
Authors: Mohammad Rezaee, Mohammad Kazemeini, Ali Morad Rashidi, Moslem Fattahi
Abstract:
Selective oxidation of H2S to elemental sulfur in a fixed bed reactor over newly synthesized alumina nanocatalysts was physico-chemically investigated, and the results were compared with a commercial Claus catalyst. Amongst these new materials, Al2O3-supported sodium oxide prepared by a wet chemical technique and an Al2O3 nanocatalyst prepared by spray pyrolysis were the most active catalysts for the selective oxidation of H2S to elemental sulfur. The other prepared nanocatalysts were quickly deactivated, mainly due to interaction with H2S and conversion into sulfides.
Keywords: H2S, Claus process, Al2O3, Spray pyrolysis method, Wet chemical technique.
6207 The Wavelet-Based DFT: A New Interpretation, Extensions and Applications
Authors: Abdulnasir Hossen, Ulrich Heute
Abstract:
In 1990 [1], the subband-DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into lowpass and highpass sequences. In the next step, either two DFTs are performed on both bands to compute the full-band DFT, or one DFT is performed on one of the two bands to compute an approximate DFT. A combination network with correction factors is applied after the DFTs. Another approach was proposed in 1997 [2] for using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a similar manner to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters; the only difference is a constant factor in the combination network. This result is very important for completing the analysis of the W-DFT, since all results concerning the accuracy and approximation errors of the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case, and an application in image transformation is given using two different types of wavelet filters.
Keywords: Image Transform, Spectral Analysis, Sub-Band DFT, Wavelet DFT.
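The subband combination described above can be written down directly: with l = even + odd and h = even - odd samples, the even/odd subband DFTs are recovered as (L + H)/2 and (L - H)/2 and merged with the usual twiddle correction factors. The sketch below verifies the identity against numpy's FFT; it illustrates the decomposition principle, not the papers' pruned or adaptive variants.

```python
import numpy as np

def subband_dft(x):
    """Full-band DFT assembled from DFTs of Haar/Hadamard subbands.
    l = even+odd samples (lowpass), h = even-odd samples (highpass);
    a combination network with twiddle correction factors merges them."""
    N = len(x)                           # N must be even
    low = x[0::2] + x[1::2]
    high = x[0::2] - x[1::2]
    L, H = np.fft.fft(low), np.fft.fft(high)
    k = np.arange(N // 2)
    W = np.exp(-2j * np.pi * k / N)      # correction (twiddle) factors
    even_part = (L + H) / 2              # DFT of even-indexed samples
    odd_part = (L - H) / 2               # DFT of odd-indexed samples
    return np.concatenate([even_part + W * odd_part,
                           even_part - W * odd_part])

x = np.random.default_rng(4).standard_normal(16)
print(np.allclose(subband_dft(x), np.fft.fft(x)))   # True
```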
6206 Tritium Determination in Danube River Water in Serbia by Liquid Scintillation Counter
Authors: S. Forkapic, J. Nikolov, N. Todorovic, D. Mrdja, I. Bikit
Abstract:
The tritium activity concentration in Danube river water in Serbia has been determined using a Quantulus 1220 liquid scintillation counter. During December 2010, water samples were taken along the entire course of the Danube through Serbia, from the Hungarian-Serbian to the Romanian-Serbian border. This investigation is particularly important because of the proximity of the Paks nuclear reactor in Hungary. Sample preparation was performed by the standard test method using Optiphase HiSafe 3 scintillation cocktail. We used a rapid method for the preparation of environmental samples, without electrolytic enrichment.
Keywords: detection limit, liquid scintillation counter, low-level tritium analysis, monitoring.
6205 Time-Dependent Behavior of Damaged Reinforced Concrete Shear Walls Strengthened with Composite Plates Having Variable Fibers Spacing
Authors: R. Yeghnem, L. Boulefrakh, S. A. Meftah, A. Tounsi, E. A. Adda Bedia
Abstract:
In this study, the time-dependent behavior of damaged reinforced concrete shear wall structures strengthened with composite plates having variable fiber spacing was investigated to analyze their seismic response. In the analytical formulation, the adherend and adhesive layers are all modeled as shear walls, using the mixed Finite Element Method (FEM). An anisotropic damage model is adopted to describe the damage extent of the reinforced concrete shear walls. The creep and shrinkage of the concrete are determined according to Eurocode 2. Large earthquakes recorded in Algeria (El-Asnam and Boumerdes) were used to demonstrate the accuracy of the proposed method. Numerical results are obtained for non-uniform distributions of carbon fibers in epoxy matrices. The effects of the damage extent and of the delayed mechanisms of creep and shrinkage of the concrete are highlighted. Further prospects are under study.
Keywords: RC shear wall structures, composite plates, creep and shrinkage, damaged reinforced concrete structures, finite element method.
6204 A New Method of Adaptation in Integrated Learning Environment
Authors: Ildar Galeev, Renat Mustaphin, C. Ardil
Abstract:
A new method of adaptation in a partially integrated learning environment that includes an electronic textbook (ET) and an integrated tutoring system (ITS) is described. The adaptation algorithm is described in detail. It includes: establishment of the interconnections between operations and concepts; estimation of the concept mastering level (for all concepts); estimation of the student's non-mastering level, at the current learning step, of the information on each page of the ET; and creation of a rank-ordered list of links to the e-manual pages containing information that requires repeated work.
Keywords: Adaptation, Integrated Learning Environment, Integrated Tutoring System, Electronic Textbook.
6203 Study of Characteristics of Multi-Layer Piezoelectric Transformers by using 3-D Finite Element Method
Authors: C. Panya-Isara, T. Kulworawanichpong, P. Pao-La-Or
Abstract:
Piezoelectric transformers are electronic devices made from piezoelectric materials. As the name implies, piezoelectric transformers are used for changing voltage signals from one level to another; the electrical energy carried by the signals is transferred by means of mechanical vibration. Characterizing both the electrical and mechanical properties leads to extensive use and efficiency enhancement of piezoelectric transformers in various applications. This paper studies and analyzes the electrical and mechanical properties of multi-layer piezoelectric transformers, in the form of the potential and displacement distributions throughout the volume, respectively. It proposes a quasi-static mathematical model of the electromechanical coupling in a piezoelectric transformer, expressed as a set of partial differential equations. Computer-based simulation utilizing the three-dimensional finite element method (3-D FEM) is exploited as a tool for visualizing the potential and displacement distributions within the multi-layer piezoelectric transformer. The simulation was conducted by varying the number of layers; in this paper, 3, 5, and 7 layers of the circular ring type were used. The computer simulation based on the FEM was developed in the MATLAB programming environment.
Keywords: Multi-layer Piezoelectric Transformer, 3-D Finite Element Method (3-D FEM), Electro-mechanical Coupling, Mechanical Vibration.
6202 Design of Variable Fractional-Delay FIR Differentiators
Authors: Jong-Jy Shyu, Soo-Chang Pei, Min-Han Chang
Abstract:
In this paper, the least-squares design of variable fractional-delay (VFD) finite impulse response (FIR) digital differentiators is proposed. The transfer function is formulated so that the Farrow structure can be applied to realize the designed system. The symmetric characteristics of the filter coefficients are also derived, which reduces complexity by saving almost half of the coefficients. Moreover, all the elements of the related vectors and matrices for the optimization can be represented in closed form, which makes the design easier. A design example is presented to illustrate the effectiveness of the proposed method.
Keywords: Differentiator, variable fractional-delay filter, FIR filter, least-squares method, Farrow structure.
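A hedged sketch of the least-squares design step for one fixed fractional delay; a full VFD design would repeat this over a grid of delay values and fit the coefficients with polynomials in the delay parameter to obtain the Farrow structure. The tap count, frequency grid, and guard band are assumptions.

```python
import numpy as np

def ls_fd_differentiator(num_taps=21, frac_delay=0.3, grid=512):
    """Least-squares FIR differentiator with a (fixed) fractional delay.
    Target response: H(w) = j*w * exp(-j*w*D), fitted on a dense
    frequency grid by solving a real least-squares problem."""
    D = (num_taps - 1) / 2 + frac_delay       # total group delay (samples)
    w = np.linspace(0, 0.9 * np.pi, grid)     # leave a small guard band
    n = np.arange(num_taps)
    A = np.exp(-1j * np.outer(w, n))          # DTFT matrix, grid x taps
    target = 1j * w * np.exp(-1j * w * D)
    # Stack real and imaginary parts to keep the coefficients real.
    A_ri = np.vstack([A.real, A.imag])
    t_ri = np.concatenate([target.real, target.imag])
    h, *_ = np.linalg.lstsq(A_ri, t_ri, rcond=None)
    return h

h = ls_fd_differentiator()
# Quick check: response at a test frequency vs the ideal j*w*e^{-jwD}.
w0 = 0.4 * np.pi
resp = np.exp(-1j * w0 * np.arange(len(h))) @ h
print(abs(resp - 1j * w0 * np.exp(-1j * w0 * ((len(h) - 1) / 2 + 0.3))))
```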
6201 Investigation of the Effect of Milling Time on the Mechanochemical Synthesis of Fe3Al/Al2O3 Nanocomposite
Authors: B. Ghasemi, A. A. Najafzadeh Khoee
Abstract:
In this study, the effect of mechanical activation on the synthesis of Fe3Al/Al2O3 nanocomposite has been investigated using the mechanochemical method. For this purpose, aluminum powder and hematite were used as precursors in stoichiometric ratio, and the other effective parameters of the milling process were kept constant. Phase formation analysis, crystallite size measurement, and lattice strain were studied by X-ray diffraction (XRD) using the Williamson-Hall method, while microstructure and morphology were explored by scanning electron microscopy (SEM). Energy-dispersive X-ray spectroscopy (EDX) analysis was also used to probe the particle distribution. The results showed that after 30 hours of milling, the reaction started, proceeded combustively, and ran to completion.
Keywords: hematite, mechanochemical, milling, nanocomposite
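For the XRD part, the Williamson-Hall analysis is a one-line linear fit of beta*cos(theta) against 4*sin(theta), whose intercept gives the crystallite size and whose slope gives the lattice strain. The peak list below is made up for illustration; only the procedure is real.

```python
import numpy as np

# Williamson-Hall sketch: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
# Peak positions/widths are made-up values for illustration only.
K, lam = 0.9, 0.15406            # shape factor, Cu K-alpha wavelength (nm)
two_theta = np.radians([33.2, 35.7, 41.0, 49.5, 54.1])   # assumed peaks
beta = np.radians([0.42, 0.45, 0.50, 0.58, 0.63])        # FWHM (rad)

theta = two_theta / 2
x = 4 * np.sin(theta)
y = beta * np.cos(theta)
slope, intercept = np.polyfit(x, y, 1)    # linear Williamson-Hall fit
print(f"crystallite size D = {K * lam / intercept:.1f} nm")
print(f"lattice strain eps = {slope:.2e}")
```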
6200 Constructing Approximate and Exact Solutions for Boussinesq Equations using Homotopy Perturbation Padé Technique
Authors: Mohamed M. Mousa, Aidarkhan Kaltayev
Abstract:
Based on the homotopy perturbation method (HPM) and Padé approximants (PA), approximate and exact solutions are obtained for the cubic Boussinesq and modified Boussinesq equations. The obtained solutions contain solitary waves and rational solutions. HPM is used for the analytic treatment of these equations and PA for increasing the convergence region of the HPM analytical solution. The results reveal that the HPM enhanced with PA is a very effective, convenient, and quite accurate method for such types of partial differential equations.
Keywords: Homotopy perturbation method, Padé approximants, cubic Boussinesq equation, modified Boussinesq equation.
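The PA step can be illustrated independently of HPM: given the Taylor coefficients of a truncated series solution, an [L/M] Padé approximant is obtained from one small linear system, and it typically remains accurate beyond the series' radius of convergence. A sketch, using log(1+x) as a stand-in for an HPM series:

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Padé approximant from Taylor coefficients c[0..L+M].
    Returns numerator a (degree L) and denominator b (degree M, b[0]=1)."""
    C = np.zeros((M, M))
    for k in range(M):
        for j in range(M):
            idx = L + k - j
            C[k, j] = c[idx] if idx >= 0 else 0.0
    rhs = -np.array([c[L + 1 + k] for k in range(M)])
    b = np.concatenate([[1.0], np.linalg.solve(C, rhs)])
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

# Example: the [3/3] Padé of log(1+x) stays accurate well past the
# series' radius of convergence |x| < 1 -- the same role PA plays for HPM.
c = np.array([0.0] + [(-1)**(n + 1) / n for n in range(1, 8)])
a, b = pade(c, 3, 3)
x = 3.0
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(approx, np.log(1 + x))   # Padé estimate vs exact value at x = 3
```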
6199 Design and Application of NFC-Based Identity and Access Management in Cloud Services
Authors: Shin-Jer Yang, Kai-Tai Yang
Abstract:
In response to a changing world and the fast growth of the Internet, more and more enterprises are replacing web-based services with cloud-based ones. Multi-tenancy technology is becoming increasingly important, especially for Software as a Service (SaaS). This in turn leads to a greater focus on the application of Identity and Access Management (IAM). Conventional Near-Field Communication (NFC) based verification relies on a computer browser and a card reader to access an NFC tag; this type of verification does not support mobile device login or user-based access management functions. This study designs an NFC-based third-party cloud identity and access management scheme (NFC-IAM) that addresses this shortcoming. Data from simulation tests analyzed with Key Performance Indicators (KPIs) suggest that the NFC-IAM not only takes less time for identity identification but also cuts the time of two-factor authentication by 80% and improves verification accuracy to 99.9% or better. In the functional performance analyses, NFC-IAM performed better in scalability and portability. The NFC-IAM app (application software) and back-end system, to be developed and deployed on mobile devices, support the IAM features and also offer users a more user-friendly experience and stronger security protection. In the future, our NFC-IAM can be employed in different environments, including identification for mobile payment systems and permission management for remote equipment monitoring, among other applications.
Keywords: Cloud service, multi-tenancy, NFC, IAM, mobile device.
6198 Wasting Human and Computer Resources
Authors: Mária Csernoch, Piroska Biró
Abstract:
The legends about "user-friendly" and "easy-to-use" birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement. (1) The lack of a definition of correctly edited, formatted documents: consequently, end-users do not know whether their methods and results are correct or not, and they are not aware of their own ignorance, which prevents them from realizing their lack of knowledge. (2) The end-users' problem-solving methods: we have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings, we have developed deep-approach methods that are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical documents, text-based documents. We provide a definition of correctly edited text and, based on this definition, adapt the debugging method known from programming. According to the method, before real text editing takes place, a thorough debugging of already existing texts and a categorization of errors are carried out. With this method, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. Its advantages are that real text handling requires much less human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.
Keywords: Deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources.
6197 Lattice Boltzmann Simulation of the Carbonization of Wood Particle
Authors: Ahmed Mahmoudi, Imen Mejri, Mohamed A. Abbassi, Ahmed Omri
Abstract:
A numerical study based on the Lattice Boltzmann Method (LBM) is proposed to solve one-, two- and three-dimensional heat and mass transfer for the isothermal carbonization of thick wood particles. To check the validity of the proposed model, computational results have been compared with published data, and a good agreement is obtained. The model is then used to study the effects of reactor temperature and thermal boundary conditions on the evolution of the local temperature and mass distributions of the wood particle during carbonization.
Keywords: Lattice Boltzmann Method, pyrolysis conduction, carbonization, Heat and mass transfer.
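A minimal sketch of the thermal LBM core on the simplest (D1Q2) lattice, with BGK collision and a hot-wall/adiabatic pair of boundaries; the chemistry, geometry, and parameter values of the carbonization model are not reproduced, and the diffusivity relation stated in the comments is an assumption for this two-velocity stencil.

```python
import numpy as np

# Minimal 1D (D1Q2) lattice Boltzmann sketch of transient heat conduction,
# the thermal core of such a model (the pyrolysis chemistry is omitted).
# Lattice units; for this stencil the diffusivity is assumed to be
# alpha = cs2 * (tau - 0.5) with cs2 = 1.
nx, tau, steps = 100, 0.8, 2000
f = np.zeros((2, nx))                 # f[0]: moving right, f[1]: moving left
T_hot, T_init = 1.0, 0.0
f[:] = T_init / 2                     # equilibrium initialisation

for _ in range(steps):
    T = f.sum(axis=0)
    feq = T / 2                       # equilibrium populations (w = 1/2)
    f += -(f - feq) / tau             # BGK collision
    f[0, 1:] = f[0, :-1].copy()       # stream right-movers
    f[1, :-1] = f[1, 1:].copy()       # stream left-movers
    f[0, 0] = T_hot - f[1, 0]         # hot reactor wall: T = T_hot at x=0
    f[1, -1] = f[0, -1]               # adiabatic (zero-flux) right boundary

T = f.sum(axis=0)
print("temperature at x = 0, 25, 50:", T[[0, 25, 50]].round(3))
```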
6196 Simulation of Complex-Shaped Particle Breakage Using the Discrete Element Method
Authors: Felix Platzer, Eric Fimbinger
Abstract:
In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure, and that often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason is that in such cases the methods in question, such as the Particle Replacement Method (PRM) or Voronoi fracture, replace the initial particle (that is intended to break) with several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. That is why those methods are commonly used to simulate materials that fracture completely rather than breaking locally. When simulating local failure, it is therefore advisable to pre-build the initial particle from sub-particles that are bonded together; the dimensions of these sub-particles consequently define the minimum size of the fracture results. This structure of bonded sub-particles enables the initial particle to break at the locations of the highest local loads, due to the failure of the bonds in those areas, with several sub-particle clusters being the result of the fracture, which can in turn also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM), intended to depict more realistic fracture behavior, were evaluated on the example of filter cake. The method that proved suitable for this purpose, and that furthermore allows efficient and realistic simulation of the breakage behavior of complex-shaped particles at industrial scale, is presented in this paper.
Keywords: Bonded particle model (BPM), DEM, filter cake, particle breakage, particle fracture.
6195 Embedded Semi-Fragile Signature Based Scheme for Ownership Identification and Color Image Authentication with Recovery
Authors: M. Hamad Hassan, S.A.M. Gilani
Abstract:
In this paper, a novel scheme is proposed for ownership identification and color image authentication that deploys cryptography and digital watermarking. The color image is first transformed from the RGB to the YST color space, exclusively designed for watermarking. Following the color space transformation, each channel is divided into 4×4 non-overlapping blocks, with selection of the central 2×2 sub-blocks. Depending upon the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication, and recovery information. The size and position of the sub-block are important for correct localization, enhanced security, and fast computation. As YS ⊥ T, it is suitable to embed the recovery information apart from the ownership and authentication information; therefore, the 4×4 blocks of the T channel, along with the ownership information, are processed by SHA160 to compute a content-based hash that is unique and invulnerable to birthday attacks or hash collisions, instead of using MD5, which may raise the collision condition H(m) = H(m'). For recovery, the intensity mean of the 4×4 block of each channel is computed and encoded in up to eight bits. For watermark embedding, a key-based mapping of blocks is performed using the 2D Torus Automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and has the ability to recover the original work with probability near one.
Keywords: Hash Collision, LSB, MD5, PSNR, SHA160
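The key-based block mapping step can be illustrated on its own. A common form of the 2D torus automorphism is the cat map (i, j) -> (i + j, i + 2j) mod N, iterated a secret number of times; because the map is a bijection, each block's recovery data can be hidden in its image block and located again during authentication. The matrix form and the iteration-count-as-key convention below are assumptions, not necessarily the paper's exact parameters.

```python
def torus_automorphism(i, j, n_iter, N):
    """Key-based block shuffling with the 2D torus (Arnold cat) map:
    (i, j) -> (i + j, i + 2j) mod N, iterated n_iter times (the key)."""
    for _ in range(n_iter):
        i, j = (i + j) % N, (i + 2 * j) % N
    return i, j

# Map every 4x4-block coordinate of a 512x512 image (N = 128 blocks/side).
N, key = 128, 7
src = [(i, j) for i in range(N) for j in range(N)]
dst = [torus_automorphism(i, j, key, N) for i, j in src]
# The map is a bijection, so the recovery data of block b can be hidden
# in block T(b) and found again during authentication.
print("bijective:", len(set(dst)) == N * N)
```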
6194 Enhancing Cache Performance Based on Improved Average Access Time
Authors: Jasim. A. Ghaeb
Abstract:
A high performance computer includes a fast processor and millions of bytes of memory. During data processing, huge amounts of information are shuffled between the memory and the processor. Because of its small size and speed, the cache has become a common feature of high performance computers, and enhancing cache performance has proved essential for speeding up cache-based computers. Most enhancement approaches can be classified as either software based or hardware controlled. The performance of the cache is quantified in terms of the hit ratio or miss ratio. In this paper, we optimize cache performance by enhancing the cache hit ratio. The optimum cache performance is obtained by focusing on a cache hardware modification that quickly rejects the missed line tags in the hit-or-miss comparison stage, thus achieving a low hit time for the wanted line in the cache. In the proposed technique, which we call Even-Odd Tabulation (EOT), the lines coming from main memory into the cache are classified into two types, even line tags and odd line tags, depending on their Least Significant Bit (LSB). EOT exploits this division to reject mismatched line tags in very little time compared to the time spent by the main comparator in the cache, giving an optimum hit time for the wanted cache line. Simulated results show the high performance of the EOT technique against the familiar mapping technique FAM.
Keywords: Caches, Cache performance, Hit time, Cache hit ratio, Cache mapping, Cache memory.
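A hedged software sketch of the tabulation idea as described: tags are split once by LSB parity into an even table and an odd table, so a lookup compares against only the matching half, with the other half rejected immediately. This models the selection logic only; the paper's contribution is a hardware comparator stage.

```python
def eot_lookup(stored_tags, wanted_tag):
    """Even-Odd Tabulation sketch: tags are pre-sorted into an 'even'
    and an 'odd' table by their least significant bit, so a lookup only
    compares against tags of matching parity -- the other half is
    rejected without comparison, shortening the hit-or-miss stage."""
    tables = {0: [], 1: []}
    for tag in stored_tags:
        tables[tag & 1].append(tag)          # classify by LSB once, on fill
    candidates = tables[wanted_tag & 1]      # parity-matched subset only
    comparisons = len(candidates)
    return (wanted_tag in candidates), comparisons

tags = [0x1A2, 0x3B7, 0x144, 0x09D, 0x2F0, 0x111]
hit, cmps = eot_lookup(tags, 0x3B7)
print(f"hit={hit}, compared {cmps} of {len(tags)} tags")
```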
6193 Solving Fuzzy Multi-Objective Linear Programming Problems with Fuzzy Decision Variables
Authors: Mahnaz Hosseinzadeh, Aliyeh Kazemi
Abstract:
In this paper, a method is proposed for solving fuzzy multi-objective linear programming problems (FMOLPP) with fuzzy right-hand sides and fuzzy decision variables. To illustrate the proposed method, it is applied to the problem of selecting suppliers for an automotive parts producer in Iran, in order to find the optimal order quantities allocated to each supplier while considering the conflicting objectives. Finally, the obtained results are discussed.
Keywords: Fuzzy multi-objective linear programming problems, triangular fuzzy numbers, fuzzy ranking, supplier selection problem.
6192 Quality-Controlled Compression Method using Wavelet Transform for Electrocardiogram Signals
Authors: Redha Benzid, Farid Marir, Nour-Eddine Bouguechal
Abstract:
This paper presents a new quality-controlled, wavelet-based compression method for electrocardiogram (ECG) signals. First, the ECG signal is decomposed using the wavelet transform. Then, the resulting coefficients are iteratively thresholded to guarantee that a predefined goal percent root mean square difference (GPRD) is matched within tolerable boundaries. The quantization strategy for the extracted non-zero wavelet coefficients (NZWC), based on a combination of RLE, Huffman, and arithmetic encoding of the NZWC and a resulting look-up table, allows the achievement of high compression ratios with good-quality reconstructed signals.
Keywords: ECG compression, Non-uniform Max-Lloyd quantizer, PRD, Quality-Controlled, Wavelet transform
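A minimal sketch of the quality-controlled loop, assuming PyWavelets, a db4 basis, and bisection on the threshold until the goal PRD is met; the RLE/Huffman/arithmetic coding stage is omitted and the test signal is synthetic.

```python
import numpy as np
import pywt

def compress_to_prd(x, goal_prd=2.0, wavelet="db4", level=5, iters=30):
    """Iteratively threshold wavelet coefficients (bisection on the
    threshold) until the reconstruction PRD approaches goal_prd (%)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    flat = np.concatenate(coeffs)
    lo, hi = 0.0, np.abs(flat).max()
    for _ in range(iters):
        t = (lo + hi) / 2
        thr = [np.where(np.abs(c) >= t, c, 0.0) for c in coeffs]
        xr = pywt.waverec(thr, wavelet)[:len(x)]
        prd = 100 * np.linalg.norm(x - xr) / np.linalg.norm(x)
        if prd > goal_prd:
            hi = t          # too aggressive: lower the threshold
        else:
            lo = t          # quality goal met: try discarding more
    nz = sum(int(np.count_nonzero(c)) for c in thr)
    return xr, prd, nz / len(flat)

# Toy ECG-like test signal (a real study would use recorded ECG data).
t = np.linspace(0, 4, 4096)
ecg = np.sin(2*np.pi*1.2*t) + 0.6*np.exp(-((t % 0.833) - 0.4)**2 / 0.001)
xr, prd, keep = compress_to_prd(ecg, goal_prd=2.0)
print(f"PRD = {prd:.2f}%, kept {100*keep:.1f}% of coefficients")
```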
6191 Simulation of the Finite Difference Time Domain in Two Dimensions
Abstract:
The finite-difference time-domain (FDTD) method is one of the most widely used computational methods in electromagnetics. This paper describes the design of two-dimensional (2D) FDTD simulation software for transverse magnetic (TM) polarization using Berenger's split-field perfectly matched layer (PML) formulation. The software is developed in the MATLAB programming language. Numerical examples validate the software.
Keywords: Finite difference time domain (FDTD) method, perfectly matched layer (PML), split-field formulation, transverse magnetic (TM) polarization.
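A minimal 2D FDTD TM core for orientation, written in Python/NumPy rather than the paper's MATLAB, and with the split-field PML omitted for brevity (the grid simply ends in zeroed-field walls); grid size, source, and step count are assumptions.

```python
import numpy as np

# Minimal 2D FDTD core for TM polarization (Ez, Hx, Hy) in normalized
# units (eps = mu = 1, unit cell size). No absorbing boundary here.
nx = ny = 200
Ez = np.zeros((nx, ny)); Hx = np.zeros((nx, ny)); Hy = np.zeros((nx, ny))
S = 1 / np.sqrt(2)            # Courant factor for a 2D square grid

for n in range(300):
    # Update magnetic fields from the curl of Ez.
    Hx[:, :-1] -= S * (Ez[:, 1:] - Ez[:, :-1])
    Hy[:-1, :] += S * (Ez[1:, :] - Ez[:-1, :])
    # Update Ez from the curl of H.
    Ez[1:-1, 1:-1] += S * ((Hy[1:-1, 1:-1] - Hy[:-2, 1:-1])
                           - (Hx[1:-1, 1:-1] - Hx[1:-1, :-2]))
    # Soft point source: a Gaussian pulse injected at the grid center.
    Ez[nx // 2, ny // 2] += np.exp(-((n - 40) / 12.0) ** 2)

print("peak |Ez| after 300 steps:", np.abs(Ez).max().round(4))
```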