Search results for: optimum data transfer
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9066

7866 Design of a Low-Voltage, Low-Offset Class AB Op-Amp

Authors: B. Gholami, S. Gholami, A. Forouzantabar, Sh. Bazyari

Abstract:

A new design approach for three-stage operational amplifiers (op-amps) is proposed. It makes it possible to implement a symmetrical push-pull class-AB output stage in well-established three-stage amplifiers using a feedforward transconductance stage. Compared with conventional design practice, the proposed approach leads to a significant improvement in the symmetry between the positive and negative op-amp step responses, resulting in similar values of the positive and negative settling times. The new approach proves very useful for fully exploiting the potential of the op-amp in terms of speed performance. Design examples in a commercial 0.35-μm CMOS process prove the effectiveness of the proposed strategy.

Keywords: Low-voltage op amp, design, optimum design

7865 Multidimensional Visualization Tools for Analysis of Expression Data

Authors: Urska Cvek, Marjan Trutschl, Randolph Stone II, Zanobia Syed, John L. Clifford, Anita L. Sabichi

Abstract:

Expression data analysis is based mostly on statistical approaches that are indispensable for the study of biological systems. The large amounts of multidimensional data resulting from high-throughput technologies are not completely served by biostatistical techniques and are usually complemented with visual, knowledge discovery and other computational tools. In many cases in biological systems we can only speculate on the processes that are causing the changes, and it is during visual explorative analysis of the data that a hypothesis is formed. We would like to show the usability of multidimensional visualization tools and promote their use in the life sciences. We survey and demonstrate some of the multidimensional visualization tools used in data exploration, such as parallel coordinates and radviz, and we extend them by combining them with the self-organizing map algorithm. We use a time course data set of transitional cell carcinoma of the bladder in our examples. Analysis of data with these tools has the potential to uncover additional relationships and non-trivial structures.
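
As an illustration of the kind of tool the abstract surveys, the sketch below draws a parallel-coordinates plot of a small, made-up expression table with pandas; the column names, cluster labels and values are hypothetical stand-ins for real microarray data, not anything taken from the paper.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Hypothetical expression matrix: rows = genes, columns = time points,
# plus a "cluster" label (e.g., assigned by a self-organizing map).
df = pd.DataFrame({
    "t0h":  [1.2, 0.3, 2.1, 0.4],
    "t6h":  [1.8, 0.2, 2.5, 0.6],
    "t24h": [2.4, 0.1, 2.9, 0.5],
    "cluster": ["up", "down", "up", "down"],
})

# Each gene becomes one polyline across the time-point axes, colored by its cluster.
parallel_coordinates(df, class_column="cluster", colormap="viridis")
plt.ylabel("log2 expression (toy values)")
plt.show()
```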

Keywords: microarrays, visualization, parallel coordinates, radviz, self-organizing maps.

7864 A Multi-Agent Framework for Data Mining

Authors: Kamal Ali Albashiri, Khaled Ahmed Kadouh

Abstract:

A generic and extendible Multi-Agent Data Mining (MADM) framework, MADMF (the Multi-Agent Data Mining Framework), is described. The central feature of the framework is that it avoids the use of agreed meta-language formats by supporting a framework of wrappers. The advantage offered is that the framework is easily extendible, so that further data agents and mining agents can simply be added to it. A demonstration MADMF framework is currently available. The paper includes details of the MADMF architecture and the wrapper principle incorporated into it. A full description and evaluation of the framework's operation is provided by considering two MADM scenarios.

Keywords: Multi-Agent Data Mining (MADM), Frequent Itemsets, Meta ARM, Association Rule Mining, Classifier generator.

7863 Simulation of PM10 Source Apportionment at an Urban Site in Southern Taiwan by a Gaussian Trajectory Model

Authors: Chien-Lung Chen, Jeng-Lin Tsai, Feng-Chao Chung, Su-Ching Kuo, Kuo-Hsin Tseng, Pei-Hsuan Kuo, Li-Ying Hsieh, Ying I. Tsai

Abstract:

This study applied the Gaussian trajectory transfer-coefficient model (GTx) to simulate particulate matter concentrations and source apportionments at the Nanzih Air Quality Monitoring Station in southern Taiwan from November 2007 to February 2008. The correlation coefficient between the observed and calculated daily PM10 concentrations is 0.5 and the absolute bias of the PM10 concentrations is 24%. The simulated PM10 concentrations matched the observed data well. Although the emission rate of PM10 was dominated by area sources (58%), the source apportionment results indicated that the primary contributors to PM10 at Nanzih Station were point sources (42%), area sources (20%) and the upwind boundary concentration (14%). The main difference in PM10 source apportionment between episode and non-episode days lay in the upwind boundary concentrations, which contributed 20% and 11% of PM10, respectively. Gas-particle conversion of secondary aerosol and long-range transport played crucial roles in the PM10 contribution at the receptor.

Keywords: back trajectory model, particulate matter, source apportionment

7862 The Relevance of Data Warehousing and Data Mining in the Field of Evidence-based Medicine to Support Healthcare Decision Making

Authors: Nevena Stolba, A Min Tjoa

Abstract:

Evidence-based medicine is a new direction in modern healthcare. Its task is to prevent, diagnose and medicate diseases using medical evidence. Medical data about a large patient population is analyzed to perform healthcare management and medical research. In order to obtain the best evidence for a given disease, external clinical expertise as well as internal clinical experience must be available to healthcare practitioners at the right time and in the right manner. External evidence-based knowledge cannot be applied directly to the patient without adjusting it to the patient's health condition. We propose a data warehouse based approach as a suitable solution for the integration of external evidence-based data sources into the existing clinical information system, together with data mining techniques for finding the appropriate therapy for a given patient and a given disease. Through the integration of data warehousing, OLAP and data mining techniques in the healthcare area, an easy-to-use decision support platform, which supports the decision-making process of caregivers and clinical managers, is built. We present three case studies, which show that a clinical data warehouse that facilitates evidence-based medicine is a reliable, powerful and user-friendly platform for strategic decision making, and that it has great relevance for the practice and acceptance of evidence-based medicine.

Keywords: data mining, data warehousing, decision-support systems, evidence-based medicine.

7861 Design and Development of a Prototype Vehicle for Shell Eco-Marathon

Authors: S. S. Dol

Abstract:

Improvements in vehicle efficiency can reduce global fossil fuel consumption. For that reason, Shell Global Corporation introduced the Shell Eco-marathon, in which student teams are required to design, build and test energy-efficient vehicles. This paper focuses on the design process and development of a fuel-efficient vehicle that satisfies the requirements of the competition. In this project, three components are designed and analyzed: the body, chassis and powertrain of the vehicle. An optimum design for each component is produced through simulation analysis and theoretical calculation, with improvements made as the project progresses.

Keywords: Energy efficient vehicle, drag force, chassis, powertrain.

7860 Simultaneous Saccharification and Fermentation (SSF) of Sugarcane Bagasse - Kinetics and Modeling

Authors: E. Sasikumar, T. Viruthagiri

Abstract:

Simultaneous Saccharification and Fermentation (SSF) of sugarcane bagasse by cellulase and Pachysolen tannophilus MTCC *1077 was investigated in the present study. Important process variables for ethanol production from pretreated bagasse were optimized using Response Surface Methodology (RSM) based on central composite design (CCD) experiments. A 2³ five-level CCD with central and axial points was used to develop a statistical model for the optimization of process variables such as incubation temperature (25–45 °C) X1, pH (5.0–7.0) X2 and fermentation time (24–120 h) X3. Data obtained from RSM on ethanol production were subjected to analysis of variance (ANOVA) and analyzed using a second-order polynomial equation, and contour plots were used to study the interactions among the three relevant variables of the fermentation process. The fermentation experiments were carried out in an online-monitored modular fermenter of 2 L capacity. The processing parameter setup giving the maximum ethanol production response was obtained at the optimum values of temperature (32°C), pH (5.6) and fermentation time (110 h). The maximum ethanol concentration (3.36 g/l) was obtained from 50 g/l pretreated sugarcane bagasse at the optimized process conditions in aerobic batch fermentation. Kinetic models such as the Monod model, the Modified Logistic model, the Modified Logistic incorporated Leudeking–Piret model and the Modified Logistic incorporated Modified Leudeking–Piret model were evaluated and their constants predicted.
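
As a sketch of the second-order response-surface fit the abstract describes, the snippet below fits a full quadratic model of ethanol yield in the three factors by least squares and evaluates it at a candidate optimum; the design points and responses are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical design points: temperature (°C), pH, time (h) -> ethanol (g/l)
X = np.array([
    [25, 5.0, 24], [45, 5.0, 24], [25, 7.0, 24], [45, 7.0, 24],
    [25, 5.0, 120], [45, 5.0, 120], [25, 7.0, 120], [45, 7.0, 120],
    [35, 6.0, 72], [35, 6.0, 72], [35, 6.0, 72], [35, 6.0, 72],
], dtype=float)
y = np.array([1.1, 1.4, 0.9, 1.2, 2.3, 2.8, 2.0, 2.4, 3.3, 3.2, 3.4, 3.3])

def quad_terms(x):
    t, ph, h = x
    # intercept, linear, squared and two-factor interaction terms of a second-order model
    return [1, t, ph, h, t*t, ph*ph, h*h, t*ph, t*h, ph*h]

A = np.array([quad_terms(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predicted response at a candidate optimum, e.g. 32 °C, pH 5.6, 110 h
print(np.dot(quad_terms([32, 5.6, 110]), coef))
```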

Keywords: Sugarcane bagasse, ethanol, optimization, Pachysolen tannophilus.

7859 Investigation on Performance of Change Point Algorithm in Time Series Dynamical Regimes and Effect of Data Characteristics

Authors: Farhad Asadi, Mohammad Javad Mollakazemi

Abstract:

In this paper, Bayesian online inference on data series is carried out with a change-point algorithm, which separates the observed time series into independent segments and studies the change and variation of the data regime together with the related statistical characteristics. A variation in the statistical characteristics of time series data often represents a distinct phenomenon in the underlying dynamical system, such as a change in brain state reflected in measured EEG signals or a change in an important regime of many other dynamical systems. A prediction algorithm for locating change points in time series data is simulated. It is verified that the pattern of the assumed data distribution is an important factor for a simpler and smoother fluctuation of the hazard rate parameter, and also for better identification of change-point locations. Finally, the conditions under which the time series distribution affects the factors in this approach are explained and validated with different time series databases from several dynamical systems.
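
As context for the hazard-rate parameter the abstract refers to, here is a minimal sketch of Bayesian online change-point detection in the run-length formulation of Adams and MacKay; the Gaussian observation model, the constant hazard value and the toy signal are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm

def bocpd(x, hazard=1/50, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Run-length posterior for a Gaussian model with unknown mean and variance
    (Normal-Inverse-Gamma prior); the Student-t predictive is approximated by a
    normal to keep the sketch short."""
    T = len(x)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0
    mu = np.array([mu0])
    kappa = np.array([kappa0])
    alpha = np.array([alpha0])
    beta = np.array([beta0])
    for t, xt in enumerate(x):
        pred = norm.pdf(xt, mu, np.sqrt(beta * (kappa + 1) / (alpha * kappa)))
        growth = R[t, :t + 1] * pred * (1 - hazard)   # the current run continues
        cp = np.sum(R[t, :t + 1] * pred * hazard)     # a change point occurs
        R[t + 1, 1:t + 2] = growth
        R[t + 1, 0] = cp
        R[t + 1] /= R[t + 1].sum()
        # Update the sufficient statistics for every possible run length
        mu_new = (kappa * mu + xt) / (kappa + 1)
        beta_new = beta + kappa * (xt - mu) ** 2 / (2 * (kappa + 1))
        mu = np.concatenate(([mu0], mu_new))
        kappa = np.concatenate(([kappa0], kappa + 1))
        alpha = np.concatenate(([alpha0], alpha + 0.5))
        beta = np.concatenate(([beta0], beta_new))
    return R

rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)])
R = bocpd(signal)
print(R[-1].argmax())   # most probable current run length after the last sample
```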

Keywords: Time series, fluctuation in statistical characteristics, optimal learning.

7858 AudioMine: Medical Data Mining in Heterogeneous Audiology Records

Authors: Shaun Cox, Michael Oakes, Stefan Wermter, Maurice Hawthorne

Abstract:

We report on the results of a pilot study in which a data-mining tool was developed for mining audiology records. The records were heterogeneous in that they contained numeric, category and textual data. The tools developed are designed to observe associations between any field in the records and any other field. The techniques employed were the statistical chi-squared test and self-organizing maps, an unsupervised neural learning approach.
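
Since the abstract names the chi-squared test for field-to-field associations, a minimal sketch of that test on a hypothetical 2x2 contingency table follows; the field names and counts are invented, not drawn from the audiology records.

```python
from scipy.stats import chi2_contingency

# Rows: tinnitus reported yes/no; columns: noise exposure yes/no (hypothetical counts)
table = [[45, 30],
         [20, 55]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}, dof={dof}")
if p_value < 0.05:
    print("Association between the two fields is statistically significant.")
```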

Keywords: Audiology, data mining, chi-squared, self-organizing maps

7857 Evaluation of the Inhibitive Effect of Novel Quinoline Schiff Base on Corrosion of Mild Steel in HCl Solution

Authors: Smita Jauhari, Bhupendra Mistry

Abstract:

Schiff base (E)-2-methyl-N-(tetrazolo[1,5-a]quinolin-4-ylmethylene)aniline (QMA) was synthesized, and its inhibitive effect for mild steel in 1N HCl solution was investigated by weight loss measurement and electrochemical tests. From the weight loss measurements and electrochemical tests, it was observed that the inhibition efficiency increases with the increase in the Schiff base concentration and reaches a maximum at the optimum concentration. This is further confirmed by the decrease in corrosion rate. It is found that the system follows Langmuir adsorption isotherm.
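
For reference, the quantities this kind of study relies on can be written out explicitly: the inhibition efficiency obtained from weight-loss corrosion rates and the Langmuir isotherm in its common linear form (standard textbook expressions with conventional symbols, not equations quoted from the paper).

```latex
\[
\mathrm{IE}\,(\%) = \frac{CR_{0} - CR_{\mathrm{inh}}}{CR_{0}} \times 100,
\qquad
\theta = \frac{\mathrm{IE}\,(\%)}{100},
\qquad
\frac{C}{\theta} = \frac{1}{K_{\mathrm{ads}}} + C,
\]
```

where CR_0 and CR_inh are the corrosion rates without and with inhibitor, θ the surface coverage, C the inhibitor concentration and K_ads the adsorption equilibrium constant.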

Keywords: Schiff base, acid corrosion, electrochemical impedance spectroscopy, polarization.

7856 Fuzzy-Type Clustering for Microarray Data

Authors: Seo Young Kim, Tai Myong Choi

Abstract:

The main goal of microarray experiments is to quantify the expression of every object on a slide as precisely as possible, with a further goal of clustering the objects. Recently, many studies have discussed clustering issues involving similar patterns of gene expression. This paper presents an application of fuzzy-type methods for clustering DNA microarray data that can be applied to typical comparisons. Clustering and analyses were performed on microarray and simulated data. The results show that fuzzy-possibility c-means clustering substantially improves the findings obtained by others.
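
As a concrete example of one method in the fuzzy-type family discussed above, the sketch below implements standard fuzzy c-means on toy 2-D data; the fuzzy-possibilistic variant reported in the abstract builds on the same membership update. The data, fuzzifier m and cluster count are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)            # membership matrix, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U

# Two toy groups of "expression profiles"
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)
print(U[:3])   # soft memberships of the first few rows
```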

Keywords: Clustering, microarray data, Fuzzy-type clustering, Validation

7855 Universal Current-Mode OTA-C KHN Biquad

Authors: Dalibor Biolek, Viera Biolková, Zdeněk Kolka

Abstract:

A universal current-mode biquad is described which represents an economical variant of the well-known KHN (Kerwin, Huelsman, Newcomb) voltage-mode filter. The circuit consists of two multiple-output OTAs and two grounded capacitors. Utilizing a simple splitter of the input current and a pair of jumpers, all the basic 2nd-order transfer functions can be implemented. The principle is verified by Spice simulation at the level of a CMOS structure of the OTAs.
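
For context, the basic second-order transfer functions a KHN-type biquad provides simultaneously can be summarized in their standard textbook form, in terms of the pole frequency ω0 and quality factor Q (generic expressions, not the paper's equations).

```latex
\[
D(s) = s^{2} + \frac{\omega_0}{Q}\,s + \omega_0^{2},
\qquad
H_{\mathrm{LP}}(s) = \frac{\omega_0^{2}}{D(s)},
\quad
H_{\mathrm{BP}}(s) = \frac{(\omega_0/Q)\,s}{D(s)},
\quad
H_{\mathrm{HP}}(s) = \frac{s^{2}}{D(s)}
\]
```

Band-stop and all-pass responses follow by summing these outputs, which is what a universal biquad exploits.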

Keywords: Biquad, current mode, OTA.

7854 Identification of a Mechanism System by Using the Modified PSO Method

Authors: Chih-Cheng Kao, Hsin-Hua Chu

Abstract:

This paper proposes an efficient modified particle swarm optimization (MPSO) method to identify a slider-crank mechanism driven by a field-oriented PM synchronous motor. In system identification, we adopt the MPSO method to find the parameters of the slider-crank mechanism. The new algorithm adds a "distance" term to the traditional PSO fitness function to avoid convergence to a local optimum. Comparisons of numerical simulations and experimental results prove that the MPSO identification method for the slider-crank mechanism is feasible.
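
A minimal sketch of particle-swarm parameter identification with a distance-style diversity penalty added to the fitness, in the spirit of the abstract; the exact form of the paper's "distance" term is not given here, so the penalty, the toy two-parameter model and all constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(params, t):
    # Hypothetical response of a system with two unknown parameters.
    a, b = params
    return a * np.sin(b * t)

t = np.linspace(0, 2 * np.pi, 50)
measured = model([1.5, 2.0], t) + rng.normal(0, 0.05, t.size)

def fitness(p, swarm, w_dist=0.01):
    err = np.mean((model(p, t) - measured) ** 2)
    # Penalize particles that crowd together to discourage premature convergence.
    crowding = np.mean(np.exp(-np.linalg.norm(swarm - p, axis=1)))
    return err + w_dist * crowding

n, dim = 20, 2
pos = rng.uniform([0.5, 0.5], [3.0, 3.0], (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([fitness(p, pos) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p, pos) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest)   # should approach the true parameters [1.5, 2.0]
```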

Keywords: Slider-crank mechanism, distance, system identification, modified particle swarm optimization.

7853 Robust Regression and its Application in Financial Data Analysis

Authors: Mansoor Momeni, Mahmoud Dehghan Nayeri, Ali Faal Ghayoumi, Hoda Ghorbani

Abstract:

This research aims to describe the application of robust regression and its advantages over the least squares regression method in analyzing financial data. To do this, the relationship between earnings per share, book value of equity per share and share price (the price model), and between earnings per share, the annual change in earnings per share and stock return (the return model), is discussed using both robust and least squares regressions, and the outcomes are compared. Comparing the results from the robust regression and the least squares regression shows that the former provides a better and more realistic analysis by eliminating or reducing the contribution of outliers and influential data. Therefore, robust regression is recommended for obtaining more precise results in financial data analysis.
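
A minimal sketch of the contrast the abstract draws: ordinary least squares versus a robust fit (here Huber-weighted iteratively reweighted least squares) on toy data containing one gross outlier; the data and tuning constant are illustrative assumptions, and the paper's own estimator may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
y[5] += 25.0                      # a single gross outlier (e.g., a bad data point)

X = np.column_stack([np.ones_like(x), x])

def huber_irls(X, y, c=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]                  # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r) / s
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))            # Huber weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

print("OLS   :", np.linalg.lstsq(X, y, rcond=None)[0])
print("Robust:", huber_irls(X, y))   # should stay close to the true [1.0, 2.0]
```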

Keywords: Financial data analysis, Influential data, Outliers, Robust regression.

7852 Hierarchical Checkpoint Protocol in Data Grids

Authors: Rahma Souli-Jbali, Minyar Sassi Hidri, Rahma Ben Ayed

Abstract:

A grid of computing nodes has emerged as a representative means of connecting distributed computers or resources scattered all over the world for the purpose of computing and distributed storage. Since fault tolerance becomes complex due to the availability of resources in a decentralized grid environment, it can be used in connection with replication in data grids. The objective of our work is to present fault tolerance in data grids with a data-replication-driven model based on clustering. The performance of the protocol is evaluated with the Omnet++ simulator. The computational results show the efficiency of our protocol in terms of recovery time and the number of processes involved in rollbacks.

Keywords: Data grids, fault tolerance, Chandy-Lamport, clustering.

7851 Fuzzy Based Problem-Solution Data Structure as a Data Oriented Model for ABS Controlling

Authors: Ahmad Habibizad Navin, Mehdi Naghian Fesharaki, Mohamad Teshnelab, Ehsan Shahamatnia

Abstract:

The anti-lock braking systems installed on vehicles for safe and effective braking are high-order, nonlinear and time-variant. Using fuzzy logic controllers increases the efficiency of such systems, but imposes a high computational complexity as well. The main concept introduced by this paper is reducing the computational complexity of fuzzy controllers by deploying a problem-solution data structure. Unlike conventional methods that are based on calculations, this approach is based on data-oriented modeling.

Keywords: ABS, Fuzzy controller, PSDS, Time-Memory tradeoff, Data oriented modeling.

7850 Use of Bayesian Network in Information Extraction from Unstructured Data Sources

Authors: Quratulain N. Rajput, Sajjad Haider

Abstract:

This paper applies Bayesian networks to support information extraction from unstructured, ungrammatical, and incoherent data sources for semantic annotation. A tool has been developed that combines ontologies, machine learning, information extraction and probabilistic reasoning techniques to support the extraction process. Data acquisition is performed with the aid of knowledge specified in the form of an ontology. Due to the variable amount of information available on different data sources, it is often the case that the extracted data contain missing values for certain variables of interest. It is desirable in such situations to predict the missing values. The methodology presented in this paper first learns a Bayesian network from the training data and then uses it to predict missing data and to resolve conflicts. Experiments have been conducted to analyze the performance of the presented methodology. The results look promising, as the methodology achieves a high degree of precision and recall for information extraction and reasonably good accuracy for predicting missing values.
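
As a toy illustration of the missing-value step, the sketch below predicts an unextracted boolean attribute from its observed parents using a hand-specified conditional probability table; the variables and probabilities are invented, whereas the paper learns its Bayesian network from training data.

```python
from itertools import product

# P(has_price | brand_known, category_known) -- hypothetical CPT
cpt_child = {
    (True, True): 0.95,
    (True, False): 0.70,
    (False, True): 0.60,
    (False, False): 0.20,
}

def predict_missing(brand_known, category_known):
    """Return the most probable value of the missing boolean attribute and its probability."""
    p_true = cpt_child[(brand_known, category_known)]
    return p_true >= 0.5, p_true

# An extracted record where "has_price" could not be extracted:
record = {"brand_known": True, "category_known": False, "has_price": None}
value, confidence = predict_missing(record["brand_known"], record["category_known"])
print(f"predicted has_price={value} (P={confidence:.2f})")

# Enumerate predictions for all parent configurations
for bk, ck in product([True, False], repeat=2):
    print(bk, ck, predict_missing(bk, ck))
```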

Keywords: Information Extraction, Bayesian Network, ontology, Machine Learning

7849 Data Acquisition from Cell Phone using Logical Approach

Authors: Keonwoo Kim, Dowon Hong, Kyoil Chung, Jae-Cheol Ryou

Abstract:

Cell phone forensics, which acquires and analyzes data in cellular phones, is nowadays used by national investigation organizations and private companies. There are two methods for collecting cellular phone flash memory data. The first is a logical method, which acquires files and directories from the file system of the cell phone flash memory. The second obtains all data from a bit-by-bit copy of the entire physical memory using a low-level access method. In this paper, we describe a forensic tool that acquires cell phone flash memory data using a logical-level approach. With our tool, we can acquire the EFS file system and peek at memory data in an arbitrary region of a Korean CDMA cell phone.

Keywords: Forensics, logical method, acquisition, cell phone, flash memory.

7848 An Optimization Modelling to Evaluate Flights Scheduling at Tourist Airports

Authors: Dimitrios J. Dimitriou

Abstract:

Airports serving a tourist destination are an essential counterpart of the tourist demand supply chain, and their productivity is related to the region's attractiveness and is enhanced by the air transport business. In this paper, an evaluation framework for the scheduled flights between two tourist airports is considered. By adopting a systemic approach, the arrivals from an airport whose connectivity depends heavily on the departures of another major airport are reviewed. The methodological framework, based on inventory control theory, and the numerical example promote the use of the modelling formulation. The results would be useful for comparison with, and application to, other similar cases.

Keywords: Airport connectivity, inventory control, optimization, optimum allocation.

7847 Data Migration Methodology from Relational to NoSQL Databases

Authors: Mohamed Hanine, Abdesadik Bendarag, Omar Boutkhoum

Abstract:

Currently, the field of data migration is very topical. As the number of applications has grown rapidly, the ever-increasing volume of data collected has driven the architectural migration from Relational Database Management Systems (RDBMS) to NoSQL (Not Only SQL) databases. This relatively recent technology has become important in the field of database management. The main aim of this paper is to present a methodology for data migration from an RDBMS to a NoSQL database. To illustrate this methodology, we implement a software prototype using MySQL as the RDBMS and MongoDB as the NoSQL database. Although this is hard engineering work, our results show that the proposed methodology can successfully accomplish the goal of this study.
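
A minimal sketch of the row-to-document step in such a migration, copying a MySQL table into a MongoDB collection; the connection parameters, database and table names, and the straight one-row-to-one-document mapping are assumptions for illustration (the paper's methodology covers more, e.g. how relations are denormalized).

```python
import mysql.connector
from pymongo import MongoClient

# Source: relational table (hypothetical credentials and schema)
mysql_conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="shop"
)
cursor = mysql_conn.cursor(dictionary=True)   # rows come back as dicts
cursor.execute("SELECT id, name, price FROM products")
rows = cursor.fetchall()

# Target: document store
mongo = MongoClient("mongodb://localhost:27017")
collection = mongo["shop"]["products"]

if rows:
    # Reuse the relational primary key as the document _id to keep identity stable.
    docs = [{"_id": r["id"], "name": r["name"], "price": float(r["price"])} for r in rows]
    collection.insert_many(docs)

print(f"migrated {len(rows)} rows")
cursor.close()
mysql_conn.close()
mongo.close()
```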

Keywords: Data Migration, MySQL, RDBMS, NoSQL, MongoDB.

7846 Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map

Authors: Anurag Sharma, Christian W. Omlin

Abstract:

Self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by SOM, but they generally do not take into account the distribution of code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably to boundary detection through traditional algorithms, namely k-means and a hierarchy-based approach, which are normally used to interpret the output of SOM.

Keywords: cluster boundaries, clustering, code vectors, data mining, particle swarm optimization, self-organizing maps, U-matrix.

7845 Data Hiding by Vector Quantization in Color Image

Authors: Yung-Gi Wu

Abstract:

With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so security becomes an important topic in the protection of digital data. Digital watermarking is a method to protect the ownership of digital data, although embedding a watermark inevitably influences image quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, meaning users will not be conscious of the existence of the embedded watermark even though the watermarked image differs only slightly from the original image. Since VQ carries a heavy computational burden, we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks hidden in the image can be gray-level, bi-level or color images; text can also be embedded as a watermark. To test the robustness of the system, we use Photoshop to apply sharpening, cropping and alteration and check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
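
Since the abstract leans on partial distortion search (PDS) to speed up VQ encoding, here is a minimal sketch of that early-rejection idea for nearest-codeword search; the random codebook and block are illustrative, and the mean-approximation pre-check the paper also uses is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, size=(256, 16)).astype(float)   # 256 codewords, 4x4 blocks
block = rng.integers(0, 256, size=16).astype(float)

def pds_search(block, codebook):
    best_idx, best_dist = -1, np.inf
    for i, cw in enumerate(codebook):
        dist = 0.0
        for b, c in zip(block, cw):
            dist += (b - c) ** 2
            if dist >= best_dist:       # partial distortion already too large: reject early
                break
        else:                           # loop finished without early exit: new best codeword
            best_idx, best_dist = i, dist
    return best_idx, best_dist

idx, dist = pds_search(block, codebook)
# Sanity check against the exhaustive full search
full = np.argmin(((codebook - block) ** 2).sum(axis=1))
print(idx, full, idx == full)
```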

Keywords: Data hiding, vector quantization, watermark.

7844 Wet Strength Improvement of Pineapple Leaf Paper for Evaporative Cooling Pad

Authors: T. Khampan, N. Thavarungkul, J. Tiansuwan, S. Kamthai

Abstract:

This research aimed to modify pineapple leaf paper (PALP) for use as a wet medium in evaporative cooling systems by improving its wet mechanical property (tensile strength) without compromising its water absorption property. Polyamideamine-epichlorohydrin resin (PAE) and carboxymethylcellulose (CMC) were used to strengthen the paper, and a PAE to CMC ratio of 80:20 showed the optimum wet and dry tensile index values, which were higher than those of the commercial cooling pad (CCP). Compared with the CCP, PALP itself and all of the PAE/CMC-modified PALP possessed better water absorption. The PAE/CMC-modified PALP has the potential to become a new type of wet media.

Keywords: wet strength, evaporative cooling, pineapple leaves, polyamideamine-epichlorohydrin, carboxymethylcellulose.

7843 Approximate Range-Sum Queries over Data Cubes Using Cosine Transform

Authors: Wen-Chi Hou, Cheng Luo, Zhewei Jiang, Feng Yan

Abstract:

In this research, we propose to use the discrete cosine transform to approximate the cumulative distributions of data cube cells' values. The cosine transform is known to have a good energy compaction property and thus can approximate data distribution functions easily with a small number of coefficients. The derived estimator is accurate and easy to update. We perform experiments to compare its performance with a well-known technique, the (Haar) wavelet. The experimental results show that the cosine transform performs much better than the wavelet in estimation accuracy, speed, space efficiency, and ease of update.
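
A minimal one-dimensional sketch of the idea: build the cumulative distribution of cell values, keep only a few DCT coefficients, and answer range-sum queries from the reconstruction; the data size and number of retained coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
cell_values = rng.poisson(5, size=256).astype(float)   # values of 256 cube cells
cdf = np.cumsum(cell_values)                           # cumulative distribution

k = 16                                                 # keep only 16 DCT coefficients
coeffs = dct(cdf, norm="ortho")
coeffs[k:] = 0.0
cdf_approx = idct(coeffs, norm="ortho")

def range_sum(lo, hi, F):
    """Sum of cells in [lo, hi] computed from a cumulative distribution F."""
    return F[hi] - (F[lo - 1] if lo > 0 else 0.0)

exact = range_sum(50, 180, cdf)
approx = range_sum(50, 180, cdf_approx)
print(exact, round(approx, 1), f"relative error {(approx - exact) / exact:.2%}")
```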

Keywords: DCT, Data Cube

7842 Digital Filters for Hot-Mix Asphalt Complex Modulus Test Data Using Genetic Algorithm Strategies

Authors: Madhav V. Chitturi, Anshu Manik, Kasthurirangan Gopalakrishnan

Abstract:

The dynamic or complex modulus test is considered to be a mechanistically based laboratory test to reliably characterize the strength and load-resistance of Hot-Mix Asphalt (HMA) mixes used in the construction of roads. The most common observation is that the data collected from these tests are often noisy and somewhat non-sinusoidal. This hampers accurate analysis of the data to obtain engineering insight. The goal of the work presented in this paper is to develop and compare automated evolutionary computational techniques to filter test noise in the collection of data for the HMA complex modulus test. The results showed that the Covariance Matrix Adaptation-Evolutionary Strategy (CMA-ES) approach is computationally efficient for filtering data obtained from the HMA complex modulus test.

Keywords: HMA, dynamic modulus, GA, evolutionary computation.

7841 A Contribution to the Application of the Structural Analysis Method in Entrepreneurial Practice

Authors: Kamila Janovská, Šárka Vilamová, Petr Besta, Iveta Vozňáková, Roman Kozel

Abstract:

Quantitative methods of economic decision-making, as the methodological base of so-called operational research, represent an important set of tools for managing complex economic systems, both at the microeconomic level and on the macroeconomic scale. Mathematical models of controlled and controlling processes allow, by means of artificial experiments, obtaining information for optimal or near-optimal managerial decision-making. The quantitative methods of economic decision-making usually include a methodology known as structural analysis - an analysis of interdisciplinary production-consumption relations.
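
Structural (input-output) analysis rests on the technical-coefficient formulation, which can be stated briefly in standard Leontief notation (added here for context rather than quoted from the paper).

```latex
\[
a_{ij} = \frac{z_{ij}}{x_j},
\qquad
x = A x + y \;\Longrightarrow\; x = (I - A)^{-1} y,
\]
```

where z_ij is the flow from sector i to sector j, x_j the total output of sector j, A = (a_ij) the matrix of technical coefficients, x the output vector and y the final demand vector.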

Keywords: economic decision-making, mathematical methods, structural analysis, technical coefficient

7840 A Study on the Heading of Spur Gears: Numerical Analysis and Experiments

Authors: M. Zadshakouyan, E. Abdi Sobbouhi, H. Jafarzadeh

Abstract:

In this study, the precision heading process of spur gears is investigated by means of numerical analysis. The effects of parameters such as tooth number and module on the forming force and material flow are presented. The simulations were performed with the rigid-plastic finite element method using DEFORM 3D software. In order to validate the numerical results, they were compared with those obtained experimentally during heading of a spur gear using lead as a model material. The results showed that the optimum number of gear teeth is between 10 and 20, because the specific pressure is at its minimum in this range.

Keywords: Heading, spur gear, numerical analysis, experiments.

7839 Reservoir Operation by Ant Colony Optimization for Continuous Domains (ACOR) - Case Study: Dez Reservoir

Authors: A. B. Dariane, A. M. Moradi

Abstract:

A direct search approach to determining optimal reservoir operation is proposed using ant colony optimization for continuous domains (ACOR). The model is applied to a single-reservoir system to determine the optimum releases over 42 years of monthly time steps. A disadvantage of ant-colony-based methods, and of ACOR in particular, is the large amount of computer run time they consume. In this study a highly effective procedure for decreasing run time has been developed. The results are compared to those of a GA-based model.

Keywords: Ant colony optimization, continuous, metaheuristics, reservoir, decreasing run time, genetic algorithm.

7838 Equilibrium, Kinetic and Thermodynamic Studies of the Biosorption of Textile Dye (Yellow Bemacid) onto Brahea edulis

Authors: G. Henini, Y. Laidani, F. Souahi, A. Labbaci, S. Hanini

Abstract:

Environmental contamination is a major problem facing society today. Industrial, agricultural, and domestic wastes, due to rapid technological development, are discharged into several receiving bodies. Generally, this discharge is directed to the nearest water sources such as rivers, lakes, and seas. While the rates of development and waste production are not likely to diminish, efforts to control and dispose of wastes are appropriately rising. Wastewaters from textile industries represent a serious problem all over the world. They contain different types of synthetic dyes which are known to be a major source of environmental pollution in terms of both the volume of dye discharged and the effluent composition. From an environmental point of view, the removal of synthetic dyes is of great concern. Among several chemical and physical methods, adsorption is a promising technique due to its ease of use and low cost compared to other applications in the discoloration process, especially if the adsorbent is inexpensive and readily available. The focus of the present study was to assess the potential of Brahea edulis (BE) for the removal of the synthetic dye Yellow Bemacid (YB) from aqueous solutions. The results obtained here may transfer to other dyes with a similar chemical structure. Biosorption studies were carried out under various parameters such as adsorbent mass, pH, contact time, initial dye concentration, and temperature. The biosorption kinetic data of the material (BE) were tested with the pseudo-first-order and the pseudo-second-order kinetic models. Thermodynamic parameters including the Gibbs free energy ΔG, enthalpy ΔH, and entropy ΔS revealed that the adsorption of YB on BE is feasible, spontaneous, and endothermic. The equilibrium data were analyzed using the Langmuir, Freundlich, Elovich, and Temkin isotherm models. The experimental results show that the percentage of biosorption increases with an increase in the biosorbent mass (0.25 g: 12 mg/g; 1.5 g: 47.44 mg/g). The maximum biosorption occurred at around a pH value of 2 for YB. The equilibrium uptake increased with an increase in the initial dye concentration in solution (Co = 120 mg/l; q = 35.97 mg/g). Biosorption kinetic data were properly fitted with the pseudo-second-order kinetic model. The best fit was obtained by the Langmuir model with a high correlation coefficient (R2 > 0.998) and a maximum monolayer adsorption capacity of 35.97 mg/g for YB.
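
The two models the abstract reports as fitting best can be written in their usual linearized forms (standard expressions with conventional symbols, not reproduced from the paper).

```latex
\[
\frac{t}{q_t} = \frac{1}{k_2\,q_e^{2}} + \frac{t}{q_e}
\quad \text{(pseudo-second-order kinetics)},
\qquad
\frac{C_e}{q_e} = \frac{1}{K_L\,q_{\max}} + \frac{C_e}{q_{\max}}
\quad \text{(Langmuir isotherm)},
\]
```

where q_t and q_e are the amounts of dye adsorbed at time t and at equilibrium (mg/g), k_2 the pseudo-second-order rate constant, C_e the equilibrium dye concentration, K_L the Langmuir constant and q_max the monolayer capacity (35.97 mg/g for YB in this study).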

Keywords: Adsorption, Brahea edulis, isotherm, yellow bemacid.

7837 The Feasibility of Augmenting an Augmented Reality Image Card on a Quick Response Code

Authors: Alfred Chen, Shr Yu Lu, Cong Seng Hong, Yur-June Wang

Abstract:

This research attempts to study the feasibility of augmenting an augmented reality (AR) image card on a Quick Response (QR) code. The authors have developed a new visual tag, which contains a QR code and an augmented AR image card. The new visual tag allows reading both the revealed data of the QR code and the instant data from the AR image card. Furthermore, a handheld communicating device is used to read and decode the new visual tag, and the concealed data of the new visual tag can then be revealed and read through its visual display. In general, the QR code is designed to store the corresponding data or, as a key, to access the corresponding data from a server through the internet. The data revealed from the QR code are represented as text. Normally, the AR image card is designed to store the corresponding data in 3-dimensional or animation/video form. By exploiting the QR code's high fault tolerance, the new visual tag can access these two different types of data using a handheld communicating device. The new visual tag has the advantage of carrying much more data than an independent QR code or AR image card. The major findings of this research are: 1) the most efficient coverage area for the AR image card augmented on the QR code is 9% of the total visual tag's area, and 2) the best location for the AR image card augmented on the QR code is the bottom-right corner of the new visual tag.

Keywords: Augmented reality, QR code, Visual tag, Handheld communicating device
