Search results for: optimal approximation.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1971

1221 Statistical Wavelet Features, PCA, and SVM Based Approach for EEG Signals Classification

Authors: R. K. Chaurasiya, N. D. Londhe, S. Ghosh

Abstract:

Electroencephalography is the study of the electrical signals produced by the neural activity of the human brain. In this paper, we propose an automatic and efficient EEG signal classification approach. The proposed approach is used to classify an EEG signal into one of two classes: epileptic seizure or non-seizure. We start by extracting features with the Discrete Wavelet Transform (DWT), which decomposes the EEG signals into sub-bands. These features, extracted from the detail and approximation coefficients of the DWT sub-bands, are used as input to Principal Component Analysis (PCA). The classification is based on reducing the feature dimension using PCA and deriving the support vectors using a Support Vector Machine (SVM). The experiments are performed on a real, standard dataset, and a very high level of classification accuracy is obtained.
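The pipeline described above can be sketched in a few lines. The following is a minimal illustration, assuming the PyWavelets, NumPy and scikit-learn libraries and a random placeholder dataset; the wavelet family, decomposition level, statistical features and SVM settings are illustrative choices rather than the ones used in the paper.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    """Statistical features (mean, std, energy) of DWT approximation and detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [approx, detail_L, ..., detail_1]
    feats = []
    for c in coeffs:
        feats += [np.mean(c), np.std(c), np.sum(c ** 2)]
    return np.array(feats)

# Placeholder data: 200 single-channel EEG epochs of 1024 samples with binary labels.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 1024))
y = rng.integers(0, 2, size=200)

X = np.vstack([dwt_features(s) for s in X_raw])

# PCA for dimensionality reduction followed by an SVM classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real seizure/non-seizure epochs in place of the random arrays, the same pipeline yields the DWT-PCA-SVM classifier outlined in the abstract.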

Keywords: Discrete Wavelet Transform, Electroencephalogram, Pattern Recognition, Principal Component Analysis, Support Vector Machine.

1220 Applications of Rough Set Decompositions in Information Retrieval

Authors: Chen Wu, Xiaohua Hu

Abstract:

This paper proposes rough set models with three different levels of knowledge granules in an incomplete information system under a tolerance relation defined by the similarity between objects according to their attribute values. By introducing a dominance relation on the discourse to decompose similarity classes into three subclasses (a little-better subclass, a little-worse subclass and a vague subclass), it dismantles the lower and upper approximations into three components. Using these components, information can be retrieved effectively to find naturally hierarchical expansions to queries and to construct answers to elaborative queries. The approach is illustrated by applying the rough set models in the design of an information retrieval system that accesses documents expanded at different granularities. The proposed method enhances the application of rough set models by adding flexibility to expansions and elaborative queries in information retrieval.

Keywords: Incomplete information system, Rough set model, tolerance relation, dominance relation, approximation, decomposition, elaborative query.

1219 The Effects of Peristalsis on Dispersion of a Micropolar Fluid in the Presence of Magnetic Field

Authors: Habtu Alemayehu, G. Radhakrishnamacharya

Abstract:

The paper presents an analytical solution for dispersion of a solute in the peristaltic motion of a micropolar fluid in the presence of magnetic field and both homogeneous and heterogeneous chemical reactions. The average effective dispersion coefficient has been found using Taylor's limiting condition under long wavelength approximation. The effects of various relevant parameters on the average coefficient of dispersion have been studied. The average effective dispersion coefficient increases with amplitude ratio, cross viscosity coefficient and heterogeneous chemical reaction rate parameter. But it decreases with magnetic field parameter and homogeneous chemical reaction rate parameter. It can be noted that the presence of peristalsis enhances dispersion of a solute.

Keywords: Peristalsis, Dispersion, Chemical reaction, Magnetic field, Micropolar fluid.

1218 Material and Parameter Analysis of the PolyJet Process for Mold Making Using Design of Experiments

Authors: A. Kampker, K. Kreisköther, C. Reinders

Abstract:

Since additive manufacturing technologies constantly advance, the use of this technology in mold making seems reasonable. Many manufacturers of additive manufacturing machines, however, do not offer any suggestions on how to parameterize the machine to achieve optimal results for mold making. The purpose of this research is to determine the interdependencies of different materials and parameters within the PolyJet process by using design of experiments (DoE), to additively manufacture molds, e.g. for thermoforming and injection molding applications. Therefore, the general requirements of thermoforming molds, such as heat resistance, surface quality and hardness, have been identified. Then, different materials and parameters of the PolyJet process, such as the orientation of the printed part, the layer thickness, the printing mode (matte or glossy), the distance between printed parts and the scaling of parts, have been examined. The multifactorial analysis covers the following properties of the printed samples: Tensile strength, tensile modulus, bending strength, elongation at break, surface quality, heat deflection temperature and surface hardness. The key objective of this research is that by joining the results from the DoE with the requirements of the mold making, optimal and tailored molds can be additively manufactured with the PolyJet process. These additively manufactured molds can then be used in prototyping processes, in process testing and in small to medium batch production.

Keywords: Additive manufacturing, design of experiments, mold making, PolyJet.

1217 Indexing and Searching of Image Data in Multimedia Databases Using Axial Projection

Authors: Khalid A. Kaabneh

Abstract:

This paper introduces and studies new indexing techniques for content-based queries in image databases. Indexing is the key to providing sophisticated, accurate and fast searches for queries in image data. This research describes a new indexing approach, which depends on linear modeling of signals, using bases for modeling. A basis is a set of chosen images, and modeling an image is a least-squares approximation of the image as a linear combination of the basis images. The coefficients of the basis images are taken together to serve as the index for that image. The paper describes the implementation of the indexing scheme, and presents the findings of our extensive evaluation that was conducted to optimize (1) the choice of the basis matrix (B), and (2) the size of the index A (N). Furthermore, we compare the performance of our indexing scheme with other schemes. Our results show that our scheme has significantly higher performance.
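The least-squares modelling step described above can be illustrated as follows, assuming NumPy; the basis images and the query image are random placeholders, and flattening each basis image into a column of B is an assumed convention.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, n_basis = 32, 32, 8

# Basis matrix B: each column is one chosen basis image flattened into a vector.
B = rng.standard_normal((h * w, n_basis))
image = rng.standard_normal((h, w))

# Least-squares approximation of the image as a linear combination of the basis images;
# the coefficient vector serves as the index entry for that image.
coeffs, *_ = np.linalg.lstsq(B, image.ravel(), rcond=None)

def index_distance(index_a, index_b):
    """Compare two index vectors, e.g. for answering a content-based query."""
    return np.linalg.norm(index_a - index_b)

print("index vector:", np.round(coeffs, 3))
```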

Keywords: Axial Projection, images, indexing, multimedia database, searching.

1216 Injunctions, Disjunctions, Remnants: The Reverse of Unity

Authors: Igor Guatelli

Abstract:

The universe of aesthetic perception entails impasses about sensitive divergences that each text or visual object may be subjected to. If approached through intertextuality that is not based on the misleading notion of kinships or similarities a priori admissible, the possibility of anachronistic, heterogeneous - and non-diachronic - assemblies can enhance the emergence of interval movements, intermediate, and conflicting, conducive to a method of reading, interpreting, and assigning meaning that escapes the rigid antinomies of the mere being and non-being of things. In negative, they operate in a relationship built by the lack of an adjusted meaning set by their positive existences, with no remainders; the generated interval becomes the remnant of each of them; it is the opening that obscures the stable positions of each one. Without the negative of absence, of that which is always missing or must be missing in a text, concept, or image made positive by history, nothing is perceived beyond what has been already given. Pairings or binary oppositions cannot lead only to functional syntheses; on the contrary, methodological disturbances accumulated by the approximation of signs and entities can initiate a process of becoming as an opening to an unforeseen other, transformation until a moment when the difficulties of [re]conciliation become the mainstay of a future of that sign/entity, not envisioned a priori. A counter-history can emerge from these unprecedented, misadjusted approaches, beginnings of unassigned injunctions and disjunctions, in short, difficult alliances that open cracks in a supposedly cohesive history, chained in its apparent linearity with no remains, understood as a categorical historical imperative. Interstices are minority fields that, because of their opening, are capable of causing opacity in that which, apparently, presents itself with irreducible clarity. Resulting from an incomplete and maladjusted [at the least dual] marriage between the signs/entities that originate them, this interval may destabilize and cause disorder in these entities and their own meanings. The interstitials offer a hyphenated relationship: a simultaneous union and separation, a spacing between the entity’s identity and its otherness or, alterity. One and the other may no longer be seen without the crack or fissure that now separates them, uniting, by a space-time lapse. Ontological, semantic shifts are caused by this fissure, an absence between one and the other, one with and against the other. Based on an improbable approximation between some conceptual and semantic shifts within the design production of architect Rem Koolhaas and the textual production of the philosopher Jacques Derrida, this article questions the notion of unity, coherence, affinity, and complementarity in the process of construction of thought from these ontological, epistemological, and semiological fissures that rattle the signs/entities and their stable meanings. Fissures in a thought that is considered coherent, cohesive, formatted are the negativity that constitutes the interstices that allow us to move towards what still remains as non-identity, which allows us to begin another story.

Keywords: Clearing, interstice, negative, remnant, spectrum.

1215 Effect of Modified Atmosphere Packaging and Storage Temperatures on Quality of Shelled Raw Walnuts

Authors: M. Javanmard

Abstract:

This study was aimed at analyzing the effects of modified atmosphere packaging (MAP) and preservation conditions on the quality of packaged fresh walnut kernels. A central composite design was used for evaluating the effect of oxygen (0-10%), carbon dioxide (0-10%), and temperature (4-26 °C) on the qualitative characteristics of walnut kernels. The response surface technique was used to find the optimal conditions for the interactive effects of the factors and to estimate the best process conditions with the least amount of testing. The measured qualitative parameters were: peroxide value, color, weight loss, mould and yeast counts, and sensory evaluation. The results showed that the defined model for peroxide value, color, weight loss, and sensory evaluation is significant (p < 0.001): an increase in temperature causes the peroxide value, color variation, and weight loss to increase and reduces the overall acceptability of the walnut kernels. An increase in oxygen percentage caused the color variation and peroxide value to increase and resulted in lower overall acceptability of the walnuts. An increase in CO2 percentage caused the peroxide value to decrease, but did not significantly affect the other indices (p ≥ 0.05). Mould and yeast were not found in any samples. The optimal packaging conditions to achieve maximum walnut quality are: 1.46% oxygen, 10% carbon dioxide, and a temperature of 4 °C.

Keywords: Shelled walnut, MAP, quality, storage temperature.

1214 Gluten-Free Cookies Enriched with Blueberry Pomace: Optimization of Baking Process

Authors: Aleksandra Mišan, Bojana Šarić, Nataša Nedeljković, Mladenka Pestorić, Pavle Jovanov, Milica Pojić, Jelena Tomić, Bojana Filipčev, Miroslav Hadnađev, Anamarija Mandić

Abstract:

With the aim of improving the nutritional profile and antioxidant capacity of gluten-free cookies, blueberry pomace, a by-product of juice production, was processed into a new food ingredient by drying and grinding and used in a gluten-free cookie formulation. Since the quality of a baked product is highly influenced by the baking conditions, the objective of this work was to optimize the baking time and the thickness of the dough pieces, by applying Response Surface Methodology (RSM), in order to obtain the best technological quality of the cookies. The experiments were carried out according to a Central Composite Design (CCD) by selecting the dough thickness and baking time as independent variables, while hardness, color parameters (L*, a* and b* values), water activity, diameter and short/long ratio were the response variables. According to the results of the RSM analysis, a baking time of 13.74 min and a dough thickness of 4.08 mm were found to be optimal for a baking temperature of 170 °C. As similar optimal parameters were obtained in a previously conducted experiment based on sensory analysis, RSM can be considered a suitable approach to optimize the baking process.
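The CCD-plus-quadratic-model workflow can be sketched with plain NumPy as follows; the coded design points, the synthetic response values and the quadratic coefficients are placeholders, not the cookie measurements reported above.

```python
import numpy as np

# Coded CCD for 2 factors (dough thickness, baking time): factorial, axial and center points.
a = np.sqrt(2.0)
D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-a, 0], [a, 0], [0, -a], [0, a],
              [0, 0], [0, 0], [0, 0]], dtype=float)

rng = np.random.default_rng(7)
x1, x2 = D[:, 0], D[:, 1]
# Synthetic response with an interior optimum plus noise (stand-in for a measured quality index).
y = 50 - 4 * x1 ** 2 - 3 * x2 ** 2 + 1.5 * x1 + 0.8 * x2 + rng.normal(0, 0.3, len(D))

# Full quadratic response-surface model: y = b0 + b1 x1 + b2 x2 + b3 x1^2 + b4 x2^2 + b5 x1 x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted surface: solve grad = 0, i.e. H x = -g.
Hmat = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = np.array([b[1], b[2]])
x_opt = np.linalg.solve(Hmat, -g)
print("fitted coefficients:", np.round(b, 3))
print("stationary point (coded units):", np.round(x_opt, 3))
```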

Keywords: Baking process, blueberry pomace, gluten-free cookies, Response Surface Methodology.

1213 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops

Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding

Abstract:

Background: The facility layout problem (FLP) is an NP-complete (non-deterministic polynomial) problem for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows. For example, troop cafeterias with many types of equipment give rise to chaotic processes during dining. Objective: This article tried to optimize the layout of a troops' cafeteria and to improve the overall efficiency of the dining process. Methods: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and the density of troops between the schemes. Last, an experiment on the dining process with video observation and analysis verified the simulation results. Results: In the simulation, the dining time under the second new layout is shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference are both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. Conclusion: Our two new layout schemes are shown to be optimal by a series of simulations and spatial experiments. In future research, similar approaches could be applied when layout-design algorithms are also taken into consideration.

Keywords: Troops’ cafeteria, layout optimization, dining efficiency, AnyLogic simulation, field experiment

1212 A Mixed Integer Linear Programming Model for Flexible Job Shop Scheduling Problem

Authors: Mohsen Ziaee

Abstract:

In this paper, a mixed integer linear programming (MILP) model is presented to solve the flexible job shop scheduling problem (FJSP). This problem is one of the hardest combinatorial problems. The objective considered is the minimization of the makespan. The computational results of the proposed MILP model were compared with those of the best known mathematical model in the literature in terms of computational time. The results show that our model has better performance with respect to all the considered performance measures, including the relative percentage deviation (RPD) value, the number of constraints, and the total number of variables. With this improved mathematical model, larger FJSP instances can be solved optimally in reasonable time, and the model is therefore a better tool for the performance evaluation of the approximation algorithms developed for the problem.
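As an illustration of such a formulation, the following is a minimal makespan-minimizing MILP for a tiny flexible job shop instance, written with the PuLP modelling library (an assumption); the big-M disjunctive constraints below follow a generic textbook formulation and are not necessarily the improved model proposed in the paper.

```python
import pulp

# Tiny illustrative instance: each operation maps its eligible machines to processing times.
jobs = {
    "J1": [{"M1": 3, "M2": 5}, {"M1": 4, "M2": 2}],
    "J2": [{"M1": 2, "M2": 3}, {"M2": 3}],
}
ops = [(j, o) for j in jobs for o in range(len(jobs[j]))]
machines = {"M1", "M2"}
bigM = 50  # loose upper bound on the makespan of this tiny instance

prob = pulp.LpProblem("FJSP", pulp.LpMinimize)
x = {(j, o, m): pulp.LpVariable(f"x_{j}_{o}_{m}", cat="Binary")
     for (j, o) in ops for m in jobs[j][o]}                      # machine assignment
s = {(j, o): pulp.LpVariable(f"s_{j}_{o}", lowBound=0) for (j, o) in ops}  # start times
y = {(a, b): pulp.LpVariable(f"y_{a[0]}{a[1]}_{b[0]}{b[1]}", cat="Binary")
     for a in ops for b in ops if a < b}                         # sequencing decisions
Cmax = pulp.LpVariable("Cmax", lowBound=0)
prob += Cmax  # minimise the makespan

for (j, o) in ops:
    dur = pulp.lpSum(jobs[j][o][m] * x[j, o, m] for m in jobs[j][o])
    prob += pulp.lpSum(x[j, o, m] for m in jobs[j][o]) == 1      # one machine per operation
    prob += Cmax >= s[j, o] + dur                                # makespan definition
    if o + 1 < len(jobs[j]):                                     # precedence inside a job
        prob += s[j, o + 1] >= s[j, o] + dur

# Disjunctive (no-overlap) constraints for operation pairs that may share a machine.
for a in ops:
    for b in ops:
        if a < b:
            for m in machines & jobs[a[0]][a[1]].keys() & jobs[b[0]][b[1]].keys():
                da, db = jobs[a[0]][a[1]][m], jobs[b[0]][b[1]][m]
                prob += s[a] + da <= s[b] + bigM * (3 - x[a[0], a[1], m] - x[b[0], b[1], m] - y[a, b])
                prob += s[b] + db <= s[a] + bigM * (2 + y[a, b] - x[a[0], a[1], m] - x[b[0], b[1], m])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(Cmax))
```

For realistic instance sizes, tighter formulations of the kind compared in the paper (or a commercial solver) would be needed; this sketch only shows the modelling ingredients.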

Keywords: Scheduling, flexible job shop, makespan, mixed integer linear programming.

1211 Robust Nonlinear Control of Two Links Robot Manipulator and Computing Maximum Load

Authors: Hasanifard Goran, Habib Nejad Korayem Moharam, Nikoobin Amin

Abstract:

A new robust nonlinear control scheme for a manipulator is proposed in this paper which is robust against modeling errors and unknown disturbances. It is based on the principle of variable structure control, with the sliding mode control (SMC) method. Variable structure control is a robust method that appears to be well suited for robotic manipulators because it requires only bounds on the robot arm parameters; however, there is no single systematic procedure that is guaranteed to produce a suitable control law. To reduce chattering of the control signal, the sgn function in the control law is replaced by a continuous approximation such as a hyperbolic tangent function. The maximum load can also be computed with regard to the torque applied at the joints. The effectiveness of the proposed approach has been evaluated analytically and demonstrated through computer simulations for the cases of variable load and varying robot arm parameters.
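The chattering-reduction idea can be illustrated on a toy single-link plant, comparing the discontinuous sgn-based switching term with its smoothed counterpart; the plant parameters, gains and the use of tanh with a boundary-layer width are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Single-link arm model: I * qdd = u + d(t), with an unknown but bounded disturbance d.
I_nom, dt, T = 1.0, 0.001, 5.0
lam, k, eps = 5.0, 20.0, 0.05   # sliding-surface slope, switching gain, boundary-layer width

def simulate(smooth):
    q, qd, q_des = 0.0, 0.0, 1.0          # state and constant set point
    u_hist = []
    for i in range(int(T / dt)):
        e, ed = q - q_des, qd
        sigma = ed + lam * e              # sliding surface
        sw = np.tanh(sigma / eps) if smooth else np.sign(sigma)
        u = -I_nom * lam * ed - k * sw    # equivalent control plus switching term
        d = 0.5 * np.sin(10 * i * dt)     # bounded disturbance
        qdd = (u + d) / I_nom
        qd += qdd * dt
        q += qd * dt
        u_hist.append(u)
    return np.array(u_hist)

u_sign, u_tanh = simulate(False), simulate(True)
# A crude chattering measure: variability of the control increments.
print("std of du, sgn law :", np.diff(u_sign).std())
print("std of du, tanh law:", np.diff(u_tanh).std())
```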

Keywords: Variable structure control, robust control, switching surface, robot manipulator.

1210 A Control Model for Improving Safety and Efficiency of Navigation System Based on Reinforcement Learning

Authors: Almutasim Billa A. Alanazi, Hal S. Tharp

Abstract:

Artificial Intelligence (AI), specifically Reinforcement Learning (RL), has proven helpful in many control and path planning technologies, such as navigation systems, by maximizing and enhancing their performance. RL learns from experience by interacting with the environment to determine the optimal policy, which takes the best action in a particular state while accounting for long-term rewards. Most navigation systems focus primarily on "arriving faster," overlooking safety and efficiency while estimating the optimum path, even though safety and efficiency are essential factors when planning a long-distance journey. This paper presents an RL control model that proposes a control mechanism for improving navigation systems. The model could also be applied to other control and path planning applications because it is adjustable and can accept different properties and parameters; here, the navigation system application has been taken as the case and evaluation study for the proposed model. The model utilizes a Q-learning algorithm for training and updating the policy, allowing the agent to analyze the quality of an action made in the environment to maximize rewards. The model gives the ability to update rewards regularly based on safety and efficiency assessments, allowing the policy to consider the desired safety and efficiency benefits while making decisions, which improves the quality of the decisions taken for path planning compared to conventional RL approaches.
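A small grid-world sketch of this idea follows, assuming NumPy; the hazard and congestion cells and the reward weights for travel time, safety and efficiency are placeholders standing in for the regularly updated assessments described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # 5x5 grid; state = (row, col); goal at (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
hazard = {(1, 3), (2, 1), (3, 3)}       # low-safety cells (placeholder assessment)
congested = {(0, 2), (2, 2)}            # low-efficiency cells (placeholder assessment)

def reward(state):
    # "Arriving faster" step cost plus adjustable safety / efficiency penalties.
    r = -1.0
    if state in hazard:
        r -= 5.0
    if state in congested:
        r -= 2.0
    if state == (n - 1, n - 1):
        r += 20.0
    return r

Q = np.zeros((n, n, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(2000):
    s = (0, 0)
    while s != (n - 1, n - 1):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        nr = min(max(s[0] + actions[a][0], 0), n - 1)
        nc = min(max(s[1] + actions[a][1], 0), n - 1)
        s2 = (nr, nc)
        Q[s][a] += alpha * (reward(s2) + gamma * np.max(Q[s2]) - Q[s][a])  # Q-learning update
        s = s2

print("greedy action from start:", int(np.argmax(Q[0, 0])))
```

Changing the penalty weights in `reward` and retraining mimics the regular safety/efficiency reassessment that steers the learned path away from a purely shortest-time route.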

Keywords: Artificial intelligence, control system, navigation systems, reinforcement learning.

1209 Temperature Dependent Interaction Energies among X (=Ru, Rh) Impurities in Pd-Rich PdX Alloys

Authors: M. Asato, C. Liu, N. Fujima, T. Hoshino, Y. Chen, T. Mohri

Abstract:

We study the temperature dependence of the interaction energies (IEs) of X (=Ru, Rh) impurities in Pd, due to the Fermi-Dirac (FD) distribution and the thermal vibration effect by the Debye-Grüneisen model. The n-body (n=2~4) IEs among X impurities in Pd, being used to calculate the internal energies in the free energies of the Pd-rich PdX alloys, are determined uniquely and successively from the lower-order to higher-order, by the full-potential Korringa-Kohn-Rostoker Green’s function method (FPKKR), combined with the generalized gradient approximation in the density functional theory. We found that the temperature dependence of IEs due to the FD distribution, being usually neglected, is very important to reproduce the X-concentration dependence of the observed solvus temperatures of the Pd-rich PdX (X=Ru, Rh) alloys.

Keywords: Full-potential KKR-Green’s function method, Fermi-Dirac distribution, GGA, phase diagram of Pd-rich PdX (X=Ru, Rh) alloys, thermal vibration effect.

1208 Bose-Einstein Condensation in Neutral Many Bosonic System

Authors: M. Al-Sugheir, M. Sakhreya, G. Alna'washi, F. Al-Dweri

Abstract:

In this work, the condensation fraction and transition temperature of neutral many bosonic system are studied within the static fluctuation approximation (SFA). The effect of the potential parameters such as the strength and range on the condensate fraction was investigated. A model potential consisting of a repulsive step potential and an attractive potential well was used. As the potential strength or the core radius of the repulsive part increases, the condensation fraction is found to be decreased at the same temperature. Also, as the potential depth or the range of the attractive part increases, the condensation fraction is found to be increased. The transition temperature is decreased as the potential strength or the core radius of the repulsive part increases, and it increases as the potential depth or the range of the attractive part increases.


1207 Optimal Consume of NaOH in Starches Gelatinization for Froth Flotation

Authors: André C. Silva, Débora N. Sousa, Elenice M. S. Silva, Thales P. Fontes, Raphael S. Tomaz

Abstract:

Starches are widely used as depressants in froth flotation operations in Brazil due to their efficiency, increasing the selectivity in the inverse flotation of quartz by depressing iron ore. The starch market has been growing and improving in recent years, leading to better products that meet the requirements of the mineral industry. The major source of starch used for iron ore is corn starch, which needs to be gelatinized with sodium hydroxide (NaOH) prior to use. This stage has a direct impact on industrial costs, since the lowest consumption of NaOH in gelatinization provides better control of the pH in the froth flotation and reduces the amount of electrolytes present in the pulp. In order to evaluate their gelatinization degree, different starches and flours were subjected to NaOH addition and temperature variation experiments. Samples of starch (corn, cassava, HIPIX 100, HIPIX 101 and HIPIX 102, commercialized by Ingredion) and flour (cassava and potato) were tested. The starch samples were characterized through Scanning Electron Microscopy, and the amylose content was determined through spectrometry, swelling and solubility tests. The gelatinization was carried out through titration with NaOH, keeping the solution temperature constant at 40 °C. At the end of the tests, the optimal amount of NaOH consumed to gelatinize the starch or flour from each botanical source was established, together with a correlation between the amylopectin content of the starch and the starch/NaOH ratio needed for its gelatinization.

Keywords: Froth flotation, gelatinization, sodium hydroxide, starches and flours.

1206 Optimal Construction Using Multi-Criteria Decision-Making Methods

Authors: Masood Karamoozian, Zhang Hong

Abstract:

The necessity and complexity of the decision-making process, the interference of various factors in decisions, and the need to consider all the relevant factors in a problem are very obvious nowadays. Hence, researchers show great interest in multi-criteria decision-making methods. In this research, the Analytical Hierarchy Process (AHP), Simple Additive Weighting (SAW), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) multi-criteria decision-making methods have been used to solve the problem of selecting an optimal construction system. The systems evaluated in this problem include Light Steel Frames (LSF), Insulating Concrete Form (ICF), the Ordinary Construction System (OCS), and the Precast Concrete System (PRCS), drawing on case-study designs from the Zhang Hong studio at Southeast University in Nanjing. Data were crowdsourced using a questionnaire at the sample level (200 people); questionnaires were distributed among experts in university centers and at conferences. According to the results of the research, the different decision-making methods led to largely the same results: with all three multi-criteria decision-making methods mentioned above, the PRCS was ranked first and the LSF system second. In terms of performance and economic criteria, the PRCS was ranked first, while the LSF system was ranked first in terms of environmental criteria.
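For illustration, the TOPSIS ranking mechanics can be sketched as follows with NumPy; the decision matrix for the four systems and the criteria weights are made-up placeholders, not the survey data collected in this research.

```python
import numpy as np

alternatives = ["LSF", "ICF", "OCS", "PRCS"]
# Rows: alternatives; columns: performance, economics, environment (illustrative scores).
X = np.array([[7.0, 6.0, 9.0],
              [6.0, 7.0, 6.0],
              [5.0, 8.0, 5.0],
              [9.0, 8.0, 7.0]])
w = np.array([0.4, 0.35, 0.25])       # criteria weights (placeholder, e.g. from AHP)
benefit = np.array([True, True, True])  # all criteria treated as benefit criteria here

R = X / np.linalg.norm(X, axis=0)     # vector-normalised decision matrix
V = R * w                             # weighted normalised matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # positive ideal solution
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))    # negative ideal solution
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)                  # relative closeness to the ideal

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```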

Keywords: Multi-criteria decision making, AHP, SAW, TOPSIS.

1205 Numerical Methods versus Bjerksund and Stensland Approximations for American Options Pricing

Authors: Marasovic Branka, Aljinovic Zdravka, Poklepovic Tea

Abstract:

Numerical methods like binomial and trinomial trees and finite difference methods can be used to price a wide range of option contracts for which there are no known analytical solutions. American options are the best-known options of that kind. Besides numerical methods, American options can be valued with approximation formulas, like the Bjerksund-Stensland formulas from 1993 and 2002. When the value of an American option is approximated by the Bjerksund-Stensland formulas, the computer time spent to carry out the calculation is very short. The computer time spent using numerical methods can vary from less than one second to several minutes or even hours. However, to be able to conduct a comparative analysis of the numerical methods and the Bjerksund-Stensland formulas, we limit the computer calculation time of the numerical methods to less than one second. Therefore, we ask the question: which method is most accurate at nearly the same computer calculation time?
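As an example of one of the numerical methods being benchmarked, the following is a standard Cox-Ross-Rubinstein binomial tree for an American put, assuming NumPy; the contract parameters are arbitrary, and the Bjerksund-Stensland closed-form approximation itself is not reproduced here.

```python
import numpy as np

def american_put_binomial(S0, K, r, sigma, T, steps):
    """Price an American put on a Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)      # risk-neutral up-move probability
    disc = np.exp(-r * dt)

    # Terminal asset prices and payoffs (index j = number of down moves).
    S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    V = np.maximum(K - S, 0.0)

    # Backward induction with an early-exercise check at every node.
    for i in range(steps - 1, -1, -1):
        S = S0 * u ** np.arange(i, -1, -1) * d ** np.arange(0, i + 1)
        V = np.maximum(disc * (p * V[:-1] + (1 - p) * V[1:]), K - S)
    return V[0]

print(american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=500))
```

The number of steps controls the accuracy/run-time trade-off that the comparison in the abstract is about.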

Keywords: Bjerksund and Stensland approximations, Computational analysis, Finance, Options pricing, Numerical methods.

1204 A New Analytical Approach to Reconstruct Residual Stresses Due to Turning Process

Authors: G.H. Farrahi, S.A. Faghidian, D.J. Smith

Abstract:

Turning operations can leave a thin layer on the component surface with high tensile residual stresses, which can dangerously affect the fatigue performance of the component. In this paper an analytical approach is presented to reconstruct the residual stress field from a limited, incomplete set of measurements. The Airy stress function is used as the primary unknown to directly solve the equilibrium equations and satisfy the boundary conditions. The new method provides the flexibility to impose the physical conditions that govern the behavior of residual stress, so as to achieve a meaningful, complete stress field. The analysis is also coupled to a least squares approximation and a regularization method to provide stability of the inverse problem. The power of this new method is then demonstrated by analyzing some experimental measurements and achieving good agreement between the model prediction and the results obtained from residual stress measurement.
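The least-squares-plus-regularization step can be sketched generically as follows, assuming NumPy; a polynomial basis and a synthetic noisy profile stand in for the Airy stress function terms and the actual measurements.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "measurements" of a smooth residual-stress-like profile at a few depths.
depth = np.linspace(0.0, 1.0, 12)
true_profile = 300 * np.exp(-5 * depth) - 80
measured = true_profile + rng.normal(0, 15, depth.size)

# Basis functions (polynomials as a stand-in for the analytical stress-solution terms).
degree = 8
A = np.vander(depth, degree + 1, increasing=True)

def tikhonov_fit(A, b, alpha):
    """Least squares with Tikhonov regularization: min ||A c - b||^2 + alpha ||c||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

c_plain = tikhonov_fit(A, measured, alpha=0.0)   # unregularized fit can be ill-conditioned
c_reg = tikhonov_fit(A, measured, alpha=1e-3)    # regularization stabilizes the inverse problem

print("coefficient norm, plain LS   :", round(float(np.linalg.norm(c_plain)), 1))
print("coefficient norm, regularized:", round(float(np.linalg.norm(c_reg)), 1))
```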

Keywords: Residual stress, Limited measurements, Inverse problems, Turning process.

1203 Modelling of Heating and Evaporation of Biodiesel Fuel Droplets

Authors: Mansour Al Qubeissi, Sergei S. Sazhin, Cyril Crua, Morgan R. Heikal

Abstract:

This paper presents the application of the Discrete Component Model for heating and evaporation to multi-component biodiesel fuel droplets in direct injection internal combustion engines. This model takes into account the effects of temperature gradient, recirculation and species diffusion inside droplets. A distinctive feature of the model used in the analysis is that it is based on the analytical solutions to the temperature and species diffusion equations inside the droplets. Nineteen types of biodiesel fuels are considered. It is shown that a simplistic model, based on the approximation of biodiesel fuel by a single component or ignoring the diffusion of components of biodiesel fuel, leads to noticeable errors in predicted droplet evaporation time and time evolution of droplet surface temperature and radius.

Keywords: Heat/Mass Transfer, Biodiesel, Multi-component Fuel, Droplet, Evaporation.

1202 Optimal Design of Flat-Gain Wide-Band Discrete Raman Amplifiers

Authors: Banaz Omer Rasheed, Parexan M. Aljaff

Abstract:

In this paper, a wide-band, gain-flattened discrete Raman amplifier utilizing four optimum pump wavelengths is demonstrated.

Keywords: Fiber Raman Amplifiers, Optimization, Wavelength Division Multiplexing.

1201 Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework

Authors: T. P. Athira, Gibin Chacko George

Abstract:

This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as estimating the noiseless and missing samples under the same framework of optimal estimation. Initially, the DT-CWT is used to decompose an input low-resolution noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square error (LMMSE) estimation based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates of it along different directions and then fuse those directional estimates for a more accurate denoised LR image. The estimation parameters calculated in the denoising process can be readily used to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method can reduce many noise-caused interpolation artifacts and preserve the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many of the existing denoising and interpolation methods.
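A one-dimensional illustration of LMMSE shrinkage follows, assuming NumPy and known signal and noise variances; the inverse-variance fusion of two directional estimates at the end is an assumed stand-in for the paper's exact fusion rule.

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy observation y = x + n of a zero-mean coefficient with known variances.
sigma_x2, sigma_n2 = 4.0, 1.0
x = rng.normal(0.0, np.sqrt(sigma_x2), 10000)
y = x + rng.normal(0.0, np.sqrt(sigma_n2), x.size)

# LMMSE estimate for zero-mean signals: x_hat = sigma_x^2 / (sigma_x^2 + sigma_n^2) * y
x_hat = sigma_x2 / (sigma_x2 + sigma_n2) * y
print("MSE raw  :", np.mean((y - x) ** 2))
print("MSE LMMSE:", np.mean((x_hat - x) ** 2))

# Fusing two directional estimates with inverse-variance weights (illustrative only).
e1 = x + rng.normal(0.0, 1.0, x.size)   # estimate along direction 1, error variance 1.0
e2 = x + rng.normal(0.0, 2.0, x.size)   # estimate along direction 2, error variance 4.0
w1, w2 = 1 / 1.0, 1 / 4.0
fused = (w1 * e1 + w2 * e2) / (w1 + w2)
print("MSE fused:", np.mean((fused - x) ** 2))
```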

Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.

1200 Physical Properties of Uranium Dinitride UN2 by Using Density Functional Theory (DFT and DFT+U)

Authors: T. Zergoug, S.H. Abaidia, A. Nedjar, M. Y. Mokeddem

Abstract:

The physical properties of uranium dinitride (UN2) were investigated in detail using first-principles calculations based on density functional theory (DFT). To study the strong correlation effects due to the 5f uranium valence electrons, the on-site Coulomb interaction correction U via the Hubbard-like term (DFT+U) was employed. The UN2 structural, mechanical and thermodynamic properties were calculated within DFT and for various U values of the DFT+U approach. The Perdew–Burke–Ernzerhof (PBE.5.2) version of the generalized gradient approximation (GGA) is used to describe the exchange-correlation, with projector-augmented wave (PAW) pseudopotentials. A comparative study shows that some results, such as the structural parameters, are improved by using the Hubbard formalism with a certain U correction value. For some physical properties the variation versus the Hubbard U is strong, as for the Young's modulus, but for others it is weakly noticeable, such as for the bulk modulus. We also noticed that from U = 7.5 eV, the elastic results do not agree with the cubic cell because the C44 values turn out to be negative.

Keywords: Ab initio, bulk modulus, DFT, DFT + U.

1199 The Application of HLLC Numerical Solver to the Reduced Multiphase Model

Authors: Fatma Ghangir, Andrzej F. Nowakowski, Franck C. G. A. Nicolleau, Thomas M. Michelitsch

Abstract:

The performance of high-resolution schemes is investigated for unsteady, inviscid and compressible multiphase flows. An Eulerian diffuse interface approach has been chosen for the simulation of multicomponent flow problems. The reduced five-equation and seven-equation models are used with the HLL and HLLC approximations. The authors demonstrate the advantages and disadvantages of both the seven-equation and five-equation models by studying their performance with the HLL and HLLC algorithms on a simple test case. The seven-equation model is based on the two-pressure, two-velocity concept of Baer–Nunziato [10], while the five-equation model is based on the mixture velocity and pressure. The numerical evaluations of the two variants of Riemann solvers have been conducted for the classical one-dimensional air-water shock tube and compared with the analytical solution for error analysis.
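For reference, a minimal HLL flux routine for the single-phase 1D Euler equations is sketched below, assuming NumPy; the wave speeds use simple Davis-type bounds, and the HLLC contact restoration and the five/seven-equation multiphase extensions are not shown.

```python
import numpy as np

GAMMA = 1.4

def primitive(U):
    """Conserved state (rho, rho*u, E) -> primitive variables (rho, u, p)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return rho, u, p

def euler_flux(U):
    rho, u, p = primitive(U)
    return np.array([rho * u, rho * u * u + p, u * (U[2] + p)])

def hll_flux(UL, UR):
    """HLL flux: F = (SR*FL - SL*FR + SL*SR*(UR - UL)) / (SR - SL) in the star region."""
    rhoL, uL, pL = primitive(UL)
    rhoR, uR, pR = primitive(UR)
    aL, aR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
    SL = min(uL - aL, uR - aR)      # simple (Davis) wave-speed estimates
    SR = max(uL + aL, uR + aR)
    FL, FR = euler_flux(UL), euler_flux(UR)
    if SL >= 0.0:
        return FL
    if SR <= 0.0:
        return FR
    return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)

# Sod-like left/right states (rho, rho*u, E) at a single cell interface.
UL = np.array([1.0, 0.0, 1.0 / (GAMMA - 1.0)])
UR = np.array([0.125, 0.0, 0.1 / (GAMMA - 1.0)])
print("HLL interface flux:", hll_flux(UL, UR))
```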

Keywords: Multiphase flow, gas-liquid flow, Godunov schemes, Riemann solvers, HLL scheme, HLLC scheme.

1198 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards moves towards a trend of slimness. The change in mobile input devices directly influences users' behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfying. Therefore, this study discussed the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and force (35±10 g, 60±10 g, and 85±10 g) of the keyboard. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were conducted to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for an input task test. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed and compared with a traditional standard keyboard to investigate the influence of user experience on keyboard operation. The research results indicated that the optimal combination still provided input performance inferior to a standard keyboard. The results could serve as a reference for the development of related products in industry and can be applied broadly to touch devices and input interfaces that people interact with.

Keywords: Input performance, mobile device, slim keyboard, tactile feedback.

1197 Computational Studies of Binding Energies and Structures of Methylamine on Functionalized Activated Carbon Surfaces

Authors: R. C. J. Mphahlele, K. Bolton, H. Kasaini

Abstract:

Empirical force fields and density functional theory (DFT) were used to study the binding energies and structures of methylamine on the surface of activated carbons (ACs). This is a first step in studying the adsorption of alkyl amines on the surface of functionalized ACs. The force fields used were the Dreiding (DFF), Universal (UFF) and Compass (CFF) models. The generalized gradient approximation with the Perdew-Wang 91 (PW91) functional was used for the DFT calculations. In addition to obtaining the amine-carboxylic acid adsorption energies, the results were used to establish the reliability of the empirical models for these systems. CFF predicted a binding energy of -9.227 kcal/mol, which agreed with PW91 at -13.17 kcal/mol, compared to DFF at 0 kcal/mol and UFF at -0.72 kcal/mol. However, the CFF binding energies for the amine to ester and ketone disagreed with the PW91 results. The structures obtained from all models agreed with the PW91 results.

Keywords: Activated Carbons, Binding energy, DFT, Force fields.

1196 A Spanning Tree for Enhanced Cluster Based Routing in Wireless Sensor Network

Authors: M. Saravanan, M. Madheswaran

Abstract:

The Wireless Sensor Network (WSN) clustering architecture enables features like network scalability, communication overhead reduction, and fault tolerance. After clustering, aggregated data is transferred to the data sink, reducing unnecessary, redundant data transfer. This reduces the number of transmitting nodes and so saves energy. It also allows scalability to many nodes, reduces communication overhead, and allows efficient use of WSN resources. Clustering-based routing methods manage network energy consumption efficiently. Building spanning trees for data collection rooted at a sink node is a fundamental data aggregation method in sensor networks. The problem of determining the optimal number of Cluster Heads (CHs) is NP-hard. In this paper, we combine cluster-based routing features for cluster formation and CH selection and use a Minimum Spanning Tree (MST) for intra-cluster communication. The proposed method is based on optimizing the MST using Simulated Annealing (SA). In this work, normalized values of mobility, delay, and remaining energy are considered for finding the optimal MST. Simulation results demonstrate the effectiveness of the proposed method in improving the packet delivery ratio and reducing the end-to-end delay.
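A small sketch of building the intra-cluster MST over a composite edge cost follows, using Prim's algorithm with NumPy; the metric matrices, their weighting coefficients and the choice of node 0 as cluster head are placeholders, and the simulated-annealing refinement is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8                                   # nodes in one cluster (node 0 taken as cluster head)

# Normalized per-link metrics in [0, 1] (placeholders for measured values).
mobility = rng.random((n, n))
delay = rng.random((n, n))
energy = rng.random((n, n))             # remaining energy associated with the receiving node

# Composite link cost: lower is better; more remaining energy reduces the cost.
a, b, c = 0.4, 0.4, 0.2
cost = a * mobility + b * delay + c * (1.0 - energy)
cost = (cost + cost.T) / 2.0            # make the link costs symmetric
np.fill_diagonal(cost, np.inf)

def prim_mst(w):
    """Return the MST edges of a complete graph given a symmetric weight matrix."""
    n = w.shape[0]
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree and (best is None or w[u, v] < w[best[0], best[1]]):
                    best = (u, v)
        edges.append(best)
        in_tree.append(best[1])
    return edges

print("intra-cluster MST edges:", prim_mst(cost))
```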

Keywords: Wireless sensor network, clustering, minimum spanning tree, genetic algorithm, low energy adaptive clustering hierarchy, simulated annealing.

1195 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography, but in practice experimental work in hydraulics can be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that the model parameters can be evaluated from measured data. However, this approach is not always possible and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to combine the optimality of the observation data in order to obtain an accurate estimation of the topography. The main features of this method are, on one hand, the ability to solve for different complex geometries with no need for any rearrangement of the original model to rewrite it in an explicit form and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
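A stripped-down stochastic EnKF analysis step is sketched below, assuming NumPy; the state is a one-dimensional bed-elevation vector observed at a few points through a linear operator H, which is a deliberate simplification of the coupled shallow-water setting described above.

```python
import numpy as np

rng = np.random.default_rng(6)
n_state, n_obs, n_ens = 50, 5, 30

# Synthetic "true" bed topography and noisy observations of it at a few locations.
x_true = 0.3 * np.exp(-((np.arange(n_state) - 25) ** 2) / 40.0)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.linspace(5, 45, n_obs, dtype=int)] = 1.0
R = 0.01 ** 2 * np.eye(n_obs)
d = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

# Prior (forecast) ensemble around a flat-bed first guess.
X = rng.normal(0.0, 0.1, (n_state, n_ens))

# Stochastic EnKF analysis: X_a = X_f + K (D - H X_f), with K = P H^T (H P H^T + R)^-1.
Xm = X.mean(axis=1, keepdims=True)
A = X - Xm
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T   # perturbed observations
Xa = X + K @ (D - H @ X)

print("prior RMSE    :", np.sqrt(np.mean((X.mean(axis=1) - x_true) ** 2)))
print("posterior RMSE:", np.sqrt(np.mean((Xa.mean(axis=1) - x_true) ** 2)))
```

In the actual method, the forecast ensemble would come from the shallow-water solver rather than from random perturbations, and H would map the bed estimate to the free-surface observations.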

Keywords: Optimal control, ensemble Kalman Filter, topography reconstruction, data assimilation, shallow water equations.

1194 Nanocomputing Memory Devices Formed from Carbon Nanotubes and Metallofullerenes

Authors: Richard K. F. Lee, James M. Hill

Abstract:

In this paper, we summarize recent work of the authors on nanocomputing memory devices. We investigate two memory devices, each comprising a charged metallofullerene and carbon nanotubes. The first device involves two open nanotubes of the same radius that are joined by a centrally located nanotube of a smaller radius; a metallofullerene is then enclosed inside the structure. The second device also involves a metallofullerene that is located inside a closed carbon nanotube. Assuming the Lennard-Jones interaction energy and the continuum approximation, for both devices the metallofullerene has two symmetrically placed, equal minimum-energy positions. On one side the metallofullerene represents the zero information state; by applying an external electric field, it can overcome the energy barrier and pass from one end of the tube to the other, where it then represents the one information state.
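The double-well picture can be illustrated with a simplified pairwise Lennard-Jones model, assuming NumPy; two fixed attracting sites stand in for the tube ends, and the parameters are placeholders rather than the continuum (surface-integrated) potentials of the actual device.

```python
import numpy as np

def lj(r, epsilon=0.05, sigma=3.4):
    """12-6 Lennard-Jones pair potential (placeholder eV / angstrom scale parameters)."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Two attracting sites near the tube ends; the particle moves along the tube axis z.
site_left, site_right = -10.0, 10.0
z = np.linspace(-8.0, 8.0, 401)
energy = lj(np.abs(z - site_left)) + lj(np.abs(z - site_right))

# The two symmetric minima are the "0" and "1" states; the barrier sits between them.
left_half, right_half = z < 0, z > 0
z_min_left = z[left_half][np.argmin(energy[left_half])]
z_min_right = z[right_half][np.argmin(energy[right_half])]
barrier = energy[np.argmin(np.abs(z))] - energy.min()
print("minima near z =", round(z_min_left, 2), "and", round(z_min_right, 2),
      "; barrier height ~", round(barrier, 4), "eV")
```

An external field that tilts this profile by more than the barrier height lets the particle switch wells, which is the write operation described above.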

Keywords: Carbon nanotube, continuous approach, energy barrier, Lennard-Jones potential, metallofullerene, nanomemory device.

1193 Prediction of Compressive Strength of Concrete from Early Age Test Result Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response Surface Methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates ‘finding the flats’ on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as the average, and accounts for unknown sources of variation. Dual response plus propagation of error (POE) provides a more useful model of overall response variation. In our case, we implemented this technique to predict the compressive strength of concrete at 28 days of age, since waiting 28 days is quite time consuming while quality control of the process must be ensured. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at the 28th day. The data used for this study were obtained from experimental schemes carried out at the Civil Engineering Department, University of Benghazi. A total of 114 sets of data were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete mix constituents, such as cement, coarse aggregate, fine aggregate and water, were utilized in all mixes. Different mix proportions of the ingredients and different water-cement ratios were used. The proposed mathematical models are capable of predicting the required compressive strength of concrete from early-age test results.

Keywords: Mix proportioning, response surface methodology, compressive strength, optimal design.

1192 A Comparison Study of a Symmetry Solution of Magneto-Elastico-Viscous Fluid along a Semi-Infinite Plate with Homotopy Perturbation Method and 4th Order Runge–Kutta Method

Authors: Mohamed M. Mousa, Aidarkhan Kaltayev

Abstract:

The equations governing the flow of an electrically conducting, incompressible viscous fluid over an infinite flat plate in the presence of a magnetic field are investigated using the homotopy perturbation method (HPM) with Padé approximants (PA) and 4th order Runge–Kutta method (4RKM). Approximate analytical and numerical solutions for the velocity field and heat transfer are obtained and compared with each other, showing excellent agreement. The effects of the magnetic parameter and Prandtl number on velocity field, shear stress, temperature and heat transfer are discussed as well.
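For reference, a classical 4th-order Runge-Kutta step is sketched below in Python (NumPy), applied to a simple placeholder ODE system rather than the magnetohydrodynamic boundary-layer equations of the paper.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Placeholder system: damped oscillator y'' + 0.5 y' + y = 0 written as a first-order system.
def f(t, y):
    return np.array([y[1], -0.5 * y[1] - y[0]])

t, h, y = 0.0, 0.01, np.array([1.0, 0.0])
while t < 10.0:
    y = rk4_step(f, t, y, h)
    t += h
print("y(10) ~", y)
```

In the paper's setting, the same stepper would be applied to the similarity-transformed velocity and temperature equations and the result compared against the HPM-Padé analytical approximation.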

Keywords: Electrically conducting elastico-viscous fluid, symmetry solution, Homotopy perturbation method, Padé approximation, 4th order Runge–Kutta, Maple
