Search results for: Approach Tendency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5175

4665 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on the overall meta-analysis estimates based on IPD-only, AD-only and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE) and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in the accuracy of the estimates. Additionally, combining the IPD and AD moderates the bias of the treatment effect estimates, as the IPD tends to overestimate the treatment effects, while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
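As an illustration of the evaluation measures named in the abstract, the following Python sketch computes the percentage relative bias (PRB), root mean-square error (RMSE) and coverage probability from a set of simulated overall estimates; the array names and the toy simulation output are assumptions, not the authors' data.

```python
import numpy as np

def evaluate_estimates(estimates, std_errors, true_effect, z=1.96):
    """Summarise simulated overall meta-analysis estimates.

    estimates  : overall effect estimates, one per simulated meta-analysis
    std_errors : their standard errors
    true_effect: the true treatment effect used in the simulation
    """
    estimates = np.asarray(estimates, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)

    # Percentage relative bias of the mean estimate
    prb = 100.0 * (estimates.mean() - true_effect) / true_effect
    # Root mean-square error around the true effect
    rmse = np.sqrt(np.mean((estimates - true_effect) ** 2))
    # Coverage: fraction of 95% confidence intervals containing the true effect
    lower = estimates - z * std_errors
    upper = estimates + z * std_errors
    coverage = np.mean((lower <= true_effect) & (true_effect <= upper))
    return prb, rmse, coverage

# Example with made-up simulation output
rng = np.random.default_rng(0)
est = rng.normal(0.5, 0.1, size=1000)   # simulated overall estimates
se = np.full(1000, 0.1)                 # their standard errors
print(evaluate_estimates(est, se, true_effect=0.5))
```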

Keywords: Aggregate data, combined-level data, individual patient data, meta-analysis.

4664 A Two-Stage Expert System for Diagnosis of Leukemia Based on Type-2 Fuzzy Logic

Authors: Ali Akbar Sadat Asl

Abstract:

Diagnosing and making decisions about diseases in medical fields involves inherent uncertainty, which can affect the whole process of treatment. Such decisions are made based on expert knowledge and on the way in which an expert interprets the patient's condition, and the interpretations of different experts may differ. Fuzzy logic can provide mathematical modeling for many concepts, variables, and systems that are unclear and ambiguous, and it can also provide a framework for reasoning, inference, control, and decision making under uncertainty. In systems with high uncertainty and high complexity, fuzzy logic is a suitable modeling method. In this paper, we use type-2 fuzzy logic to model the uncertainty in the diagnosis of leukemia. The proposed system uses an indirect-direct approach and consists of two stages. In the first stage, the inference of the blood test state is determined; here we use an indirect approach in which the rules are extracted automatically by a clustering procedure. In the second stage, the signs of leukemia, the duration of the disease until its progression, and the output of the first stage are combined to obtain the final diagnosis of the system. In this stage, the system uses a direct approach and the final diagnosis is determined by the expert. The obtained results show that the type-2 fuzzy expert system can diagnose leukemia with an average accuracy of about 97%.

Keywords: Expert system, leukemia, medical diagnosis, type-2 fuzzy logic.

4663 A New Fuzzy Mathematical Model in Recycling Collection Networks: A Possibilistic Approach

Authors: B. Vahdani, R. Tavakkoli-Moghaddam, A. Baboli, S. M. Mousavi

Abstract:

Focusing on environmental issues, including the reduction of scrap and consumer residuals, along with benefiting from the economic value during the life cycle of goods/products, leads companies to an important competitive approach. The aim of this paper is to present a new mixed nonlinear facility location-allocation model for recycling collection networks that considers multiple echelons, multiple suppliers, multiple collection centers and multiple facilities in the recycling network. To make an appropriate decision in reality, demands, returns, capacities, costs and distances are regarded as uncertain in our model. For this purpose, a fuzzy mathematical programming-based possibilistic approach is adopted from the recent literature as the solution methodology for the proposed mixed nonlinear programming (MNLP) model. Computational experiments are provided to illustrate the applicability of the designed model in a supply chain environment and to help decision makers facilitate their analysis.

Keywords: Location-allocation model, recycling collection networks, fuzzy mathematical programming.

4662 Effect of Distributed Generators on the Optimal Operation of Distribution Networks

Authors: J. Olamaei, T. Niknam, M. Nayeripour

Abstract:

This paper presents an approach for the daily optimal operation of distribution networks considering Distributed Generators (DGs). Due to the private ownership of DGs, a cost-based compensation method is used to encourage DGs in active and reactive power generation. The objective function is the sum of the electrical energy generated by the DGs and by the substation bus (main bus) over the next day. A genetic algorithm is used to solve the optimal operation problem. The approach is tested on an IEEE 34-bus distribution feeder.
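The abstract does not give the encoding or cost model used, so the sketch below only illustrates the general idea of a genetic algorithm searching a 24-hour DG dispatch schedule against a simplified energy-cost objective; the load profile, prices and operators are illustrative assumptions.

```python
import random

HOURS = 24
DG_COST, GRID_COST = 0.09, 0.12      # $/kWh, illustrative only
DG_MAX = 500.0                        # kW per DG, illustrative
demand = [800 + 300 * (h in range(8, 20)) for h in range(HOURS)]  # toy load profile

def cost(schedule):
    """Daily energy cost: DG output paid at DG_COST, the remainder bought from the substation."""
    total = 0.0
    for h, dg in enumerate(schedule):
        grid = max(demand[h] - dg, 0.0)
        total += DG_COST * dg + GRID_COST * grid
    return total

def mutate(schedule, rate=0.1):
    return [min(DG_MAX, max(0.0, g + random.gauss(0, 50))) if random.random() < rate else g
            for g in schedule]

def crossover(a, b):
    cut = random.randrange(1, HOURS)
    return a[:cut] + b[cut:]

pop = [[random.uniform(0, DG_MAX) for _ in range(HOURS)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=cost)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]
print("best daily cost:", round(cost(min(pop, key=cost)), 2))
```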

Keywords: Distributed Generator, Daily Optimal Operation, Genetic Algorithm.

4661 Detecting Community Structure in Amino Acid Interaction Networks

Authors: Omar Gaci, Stefan Balev, Antoine Dutot

Abstract:

In this paper we introduce the notion of a protein interaction network. This is a graph whose vertices are the protein's amino acids and whose edges are the interactions between them. Using a graph theory approach, we observe that the nodes interact differently according to their structural roles. By performing community structure detection, we confirm this specific behavior, describe the composition of the communities, and finally propose a new approach to fold a protein interaction network.
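A minimal sketch of community structure detection on a toy interaction graph, assuming a modularity-based algorithm from networkx; the paper's actual detection method and residue contact data are not reproduced here.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy "interaction network": vertices are residue indices, edges are spatial contacts.
G = nx.Graph()
contacts = [(1, 2), (2, 3), (1, 3), (3, 4),          # first cluster
            (4, 5), (5, 6), (6, 7), (5, 7), (7, 8)]  # second cluster
G.add_edges_from(contacts)

communities = greedy_modularity_communities(G)
for i, com in enumerate(communities):
    print(f"community {i}: {sorted(com)}")
```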

Keywords: interaction network, protein structure, community structure detection.

4660 Learning Spatio-Temporal Topology of a Multi-Camera Network by Tracking Multiple People

Authors: Yunyoung Nam, Junghun Ryu, Yoo-Joo Choi, We-Duke Cho

Abstract:

This paper presents a novel approach for representing the spatio-temporal topology of a camera network with overlapping and non-overlapping fields of view (FOVs). The topology is determined by tracking moving objects and establishing object correspondence across multiple cameras. To track people successfully in multiple camera views, we used the Merge-Split (MS) approach to handle object occlusion in a single camera and a grid-based approach for extracting accurate object features. In addition, we considered the appearance of people and the transition time between entry and exit zones for tracking objects across the blind regions of multiple cameras with non-overlapping FOVs. The main contribution of this paper is to estimate transition times between various entry and exit zones, and to represent the camera topology as an undirected weighted graph using the transition probabilities.
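The following sketch illustrates how transition statistics between hypothetical entry and exit zones could be turned into an undirected weighted graph; the zone names, observations, and the use of per-exit-zone relative frequencies as transition probabilities are assumptions for illustration only.

```python
import networkx as nx
from collections import defaultdict

# Hypothetical tracking observations: (exit_zone, entry_zone, transit_time_seconds)
observations = [
    ("cam1_exit_A", "cam2_entry_B", 4.2),
    ("cam1_exit_A", "cam2_entry_B", 5.1),
    ("cam1_exit_A", "cam3_entry_C", 9.8),
    ("cam2_exit_B", "cam3_entry_C", 6.0),
]

counts, times, exit_totals = defaultdict(int), defaultdict(list), defaultdict(int)
for exit_z, entry_z, t in observations:
    counts[(exit_z, entry_z)] += 1
    times[(exit_z, entry_z)].append(t)
    exit_totals[exit_z] += 1

G = nx.Graph()
for (exit_z, entry_z), n in counts.items():
    G.add_edge(exit_z, entry_z,
               probability=n / exit_totals[exit_z],             # relative transition frequency
               mean_transit=sum(times[(exit_z, entry_z)]) / n)  # mean transition time (s)

for u, v, data in G.edges(data=True):
    print(u, "<->", v, data)
```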

Keywords: Surveillance, multiple camera, people tracking, topology.

4659 Trajectory-Based Modified Policy Iteration

Authors: R. Sharma, M. Gopal

Abstract:

This paper presents a new problem solving approach that is able to generate optimal policy solution for finite-state stochastic sequential decision-making problems with high data efficiency. The proposed algorithm iteratively builds and improves an approximate Markov Decision Process (MDP) model along with cost-to-go value approximates by generating finite length trajectories through the state-space. The approach creates a synergy between an approximate evolving model and approximate cost-to-go values to produce a sequence of improving policies finally converging to the optimal policy through an intelligent and structured search of the policy space. The approach modifies the policy update step of the policy iteration so as to result in a speedy and stable convergence to the optimal policy. We apply the algorithm to a non-holonomic mobile robot control problem and compare its performance with other Reinforcement Learning (RL) approaches, e.g., a) Q-learning, b) Watkins Q(λ), c) SARSA(λ).
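For orientation, the sketch below shows classical policy iteration on a tiny hand-made MDP; the trajectory-based model building and the modified update step proposed in the paper are not reproduced, and the transition matrices, costs and discount factor are illustrative assumptions.

```python
import numpy as np

# Toy MDP: 3 states, 2 actions; P[a][s, s'] transition probabilities, R[s, a] expected cost.
P = np.array([[[0.8, 0.2, 0.0],
               [0.0, 0.8, 0.2],
               [0.0, 0.0, 1.0]],     # action 0
              [[0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.0, 1.0]]])    # action 1
R = np.array([[2.0, 1.0],
              [2.0, 1.0],
              [0.0, 0.0]])           # cost of taking each action in each state
gamma = 0.95

policy = np.zeros(3, dtype=int)
for _ in range(100):
    # Policy evaluation: solve (I - gamma * P_pi) V = R_pi
    P_pi = np.array([P[policy[s], s] for s in range(3)])
    R_pi = np.array([R[s, policy[s]] for s in range(3)])
    V = np.linalg.solve(np.eye(3) - gamma * P_pi, R_pi)
    # Policy improvement: greedy (cost-minimising) action per state
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    new_policy = Q.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
print("optimal policy:", policy, "cost-to-go:", np.round(V, 3))
```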

Keywords: Markov Decision Process (MDP), Mobile robot, Policy iteration, Simulation.

4658 A New Image Encryption Approach using Combinational Permutation Techniques

Authors: A. Mitra, Y. V. Subba Rao, S. R. M. Prasanna

Abstract:

This paper proposes a new approach for image encryption using a combination of different permutation techniques. The main idea behind the present work is that an image can be viewed as an arrangement of bits, pixels and blocks. The intelligible information present in an image is due to the correlations among the bits, pixels and blocks in a given arrangement. This perceivable information can be reduced by decreasing the correlation among the bits, pixels and blocks using certain permutation techniques. This paper presents an approach for a random combination of the aforementioned permutations for image encryption. From the results, it is observed that the permutation of bits is effective in significantly reducing the correlation, thereby decreasing the perceptual information, whereas the permutations of pixels and blocks are better at producing higher-level security compared to bit permutation. A random combination method employing all three techniques is thus observed to be useful for tactical security applications, where protection is needed only against a casual observer.
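A minimal sketch of the pixel-permutation idea, assuming a key-seeded pseudo-random index generator; it is not the authors' combinational scheme, and bit- and block-level permutations would follow the same pattern on a different view of the data.

```python
import numpy as np

def permute_pixels(image, key):
    """Encrypt by permuting pixel positions with a key-seeded pseudo-random index generator."""
    rng = np.random.default_rng(key)
    flat = image.reshape(-1)
    perm = rng.permutation(flat.size)      # pseudo-random index sequence
    return flat[perm].reshape(image.shape), perm

def unpermute_pixels(cipher, perm):
    flat = np.empty_like(cipher.reshape(-1))
    flat[perm] = cipher.reshape(-1)        # invert the permutation
    return flat.reshape(cipher.shape)

image = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 4x4 "image"
cipher, perm = permute_pixels(image, key=1234)
restored = unpermute_pixels(cipher, perm)
assert np.array_equal(image, restored)
print(cipher)
```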

Keywords: Encryption, Permutation, Good key, Combinational permutation, Pseudo-random index generator.

4657 Development of a Complex Meteorological Support System for UAVs

Authors: Z. Bottyán, F. Wantuch, A. Z. Gyöngyösi, Z. Tuba, K. Hadobács, P. Kardos, R. Kurunczi

Abstract:

The sensitivity of UAVs to atmospheric effects is apparent. Even so, meteorological support for UAV missions is often inadequate or partly missing. In this paper we present a new complex meteorological support system for different types of UAV pilots, specialists and decision makers. The system has two important parts with different forecasting approaches: a statistical one and a dynamical one. The statistical prediction approach is based on a large climatological database and a special analog method, which is able to select similar weather situations from the database and apply them during the forecasting procedure. The dynamical approach uses dedicated WRF model runs twice a day and produces 96-hour, high-resolution weather forecasts for UAV users over Hungary. An easy-to-use web-based system provides important weather information over the Carpathian Basin in Central Europe. The mentioned products can be reached via an internet connection.
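A minimal sketch of the analog idea, assuming a Euclidean nearest-neighbour search over a synthetic climatological database; the feature set, forecast variable and similarity measure are placeholders rather than the system's actual configuration.

```python
import numpy as np

# Hypothetical climatological database: each row is a past weather situation
# described by a few features (e.g. pressure, temperature, wind speed), plus
# the observed visibility a few hours later that we want to forecast.
rng = np.random.default_rng(42)
past_features = rng.normal(size=(5000, 3))
past_outcome = rng.normal(loc=8.0, scale=2.0, size=5000)   # e.g. visibility in km

def analog_forecast(current, k=20):
    """Select the k most similar past situations and average their outcomes."""
    dist = np.linalg.norm(past_features - current, axis=1)   # Euclidean similarity
    analogs = np.argsort(dist)[:k]
    return past_outcome[analogs].mean()

print(round(analog_forecast(np.array([0.1, -0.3, 0.5])), 2))
```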

Keywords: Aviation meteorology, statistical weather prediction, unmanned aerial systems, WRF.

4656 Web Personalization to Build Trust in E-Commerce: A Design Science Approach

Authors: Choon Ling Sia, Yani Shi, Jiaqi Yan, Huaping Chen

Abstract:

With the development of the Internet, E-commerce is growing at an exponential rate, and many online stores have been built to sell goods online. A major factor influencing the successful adoption of E-commerce is consumers' trust. For new or unknown Internet businesses, consumers' lack of trust has been cited as a major barrier to proliferation. As web sites provide the key interface for consumer use of E-commerce, we investigate web site design to build trust in E-commerce from a design science approach. A conceptual model is proposed in this paper to describe the ontology of online transactions and human-computer interaction. Based on this conceptual model, we provide a personalized webpage design approach using a Bayesian network learning method. An experimental evaluation is designed to show the effectiveness of web personalization in improving consumers' trust in new or unknown online stores.

Keywords: Trust, Web site design, Human-Computer Interaction, E-Commerce, Design science, Bayesian network.

4655 Transformation Method CIM to PIM: From Business Processes Models Defined in BPMN to Use Case and Class Models Defined in UML

Authors: Y. Rhazali, Y. Hadi, A. Mouloudi

Abstract:

This paper proposes a method for the automatic transformation of the CIM level to the PIM level following the MDA approach. Our proposal is based on creating a good CIM level through well-defined rules, allowing us to achieve rich models that contain the relevant information needed to facilitate the transformation to the PIM level. We then define an appropriate PIM level through various UML diagrams. Next, we propose a set of rules to move from CIM to PIM. Our method follows the MDA approach by considering the business dimension at the CIM level through the use of BPMN, the OMG standard for business process modeling, and by using UML at the PIM level, as advocated by MDA.

Keywords: Model transformation, MDA, CIM, PIM.

4654 Customers 50+ Behavior in the Financial Market in the Czech Republic

Authors: K. Matušínská, H. Starzyczná, M. Stoklasa

Abstract:

The paper deals with the behaviour of the 50+ segment in the financial market in the Czech Republic. This segment can be regarded as a strong market force and can be a crucial business potential for financial business units. The main objective of this paper is to analyse the behaviour of customers aged 50-60 years in the financial market in the Czech Republic and to propose a suitable marketing approach to satisfy their demands in the areas of product, price, distribution and marketing communication policy. The paper is based on data from one part of a primary marketing research study. It determines the basic problem areas, defines financial services marketing, and states the primary research problem, hypotheses and research methodology. Finally, a suitable marketing approach to the selected sub-segment aged 50-60 years is proposed according to the marketing research findings.

Keywords: Population aging in the Czech Republic, Segment 50-60 years, Financial services marketing, Marketing research, Marketing approach.

4653 Long-Term Structural Behavior of Resilient Materials for Reduction of Floor Impact Sound

Authors: J. Y. Lee, J. Kim, H. J. Chang, J. M. Kim

Abstract:

People’s tendency towards living in apartment houses is increasing in densely populated countries. However, some residents of apartment houses are bothered by noise coming from the units above. In order to reduce noise pollution, communities are increasingly imposing bylaws covering the limitation of floor impact sound, the minimum thickness of floors, and floor soundproofing solutions. This research effort focused on the long-term deflection of resilient materials in the floor sound insulation systems of apartment houses. The experimental program consisted of testing nine floor sound insulation specimens subjected to a sustained load for 45 days. Two main parameters were considered in the experimental investigation: three types of resilient materials and the magnitudes of the loads. The test results indicated that the structural behavior of the floor sound insulation systems under long-term load was quite different from that of the systems under short-term load. The loading period increased the deflection of the floor sound insulation systems, and the rate of increase of the long-term deflection of the systems with ethylene vinyl acetate was smaller than that of the systems with low-density ethylene polystyrene.

Keywords: Resilient materials, floor sound insulation systems, long-time deflection, sustained load, noise pollution.

4652 A Neural Computing-Based Approach for the Early Detection of Hepatocellular Carcinoma

Authors: Marina Gorunescu, Florin Gorunescu, Kenneth Revett

Abstract:

Hepatocellular carcinoma (HCC), also called hepatoma, most commonly appears in patients with chronic viral hepatitis. In patients with a higher suspicion of HCC, such as a small or subtle rise in serum enzyme levels, the best method of diagnosis involves a CT scan of the abdomen, but only at high cost. The aim of this study was to increase the ability of the physician to detect HCC early, using a probabilistic neural network-based approach, in order to save time and hospital resources.
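A probabilistic neural network is essentially a Parzen-window classifier; the sketch below, with made-up two-feature data and an arbitrary kernel width, only illustrates that mechanism and is not the diagnostic model trained in the study.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network: Parzen-window density estimate per class."""
    scores = {}
    for label in np.unique(train_y):
        samples = train_X[train_y == label]
        # Gaussian kernel centred on every training sample of this class
        d2 = np.sum((samples - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get), scores

# Hypothetical two-feature data (e.g. two serum enzyme levels), 0 = healthy, 1 = HCC suspect
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_classify(np.array([2.5, 2.8]), X, y))
```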

Keywords: Early HCC diagnosis, probabilistic neural network.

4651 Wasting Human and Computer Resources

Authors: Mária Csernoch, Piroska Biró

Abstract:

The legends about “user-friendly” and “easy-to-use” birotical tools (computer-related office tools) have been spreading and misleading end-users. This has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement: (1) the lack of a definition of correctly edited and formatted documents; consequently, end-users do not know whether their methods and results are correct or not. They are unaware of their ignorance, and that ignorance does not allow them to realize their lack of knowledge. (2) The end-users' problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings we have developed deep-approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical documents, text-based documents. We have provided a definition of correctly edited text and, based on this definition, adapted the debugging method known from programming. According to the method, before real text editing takes place, already existing texts are thoroughly debugged and the errors are categorized. In this way, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. The advantages of the method are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.

Keywords: Deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources.

4650 Using Visual Technologies to Promote Excellence in Computer Science Education

Authors: Carol B. Collins, M. H. N Tabrizi

Abstract:

The purposes of this paper are to (1) promote excellence in computer science by suggesting a cohesive, innovative approach to fill well-documented deficiencies in current computer science education, (2) justify (using the authors' and others' anecdotal evidence from both the classroom and the real world) why this approach holds great potential to successfully eliminate the deficiencies, and (3) invite other professionals to join the authors in proof-of-concept research. The authors' experiences, though anecdotal, strongly suggest that a new approach involving visual modeling technologies should allow computer science programs to retain a greater percentage of prospective and declared majors as students become more engaged learners, more successful problem-solvers, and better prepared programmers. In addition, the graduates of such computer science programs will make greater contributions to the profession as skilled problem-solvers. Instead of wearily re-memorizing code as they move to the next course, students will have the problem-solving skills to think and work in more sophisticated and creative ways.

Keywords: Algorithms, CASE, UML, Problem-solving.

4649 Resistance to Chloride Penetration of High Strength Self-Compacting Concretes: Pumice and Zeolite Effect

Authors: Kianoosh Samimi, Siham Kamali-Bernard, Ali Akbar Maghsoudi

Abstract:

This paper aims to contribute to the characterization and understanding of the fresh-state properties, compressive strength and chloride penetration tendency of high strength self-compacting concretes (HSSCCs) in which Portland cement type II is partially substituted by 10% and 15% of natural pumice and zeolite. First, five concrete mixtures, including a control mixture without any pozzolan, are prepared and tested in both the fresh and hardened states. Then, the resistance to chloride penetration of all formulations is investigated in the non-steady state and the steady state by measuring the chloride penetration and the diffusion coefficient. In the non-steady state, the correlation of the initial current and the chloride penetration with the diffusion coefficient is studied. Moreover, the relationship between the diffusion coefficient in the non-steady state and the electrical resistivity is determined. The concentration of free chloride ions is also measured in the steady state. Finally, chloride penetration for all formulations is studied under immersion and tidal conditions. The results show that the resistance to chloride penetration of HSSCC under immersion and tidal conditions increases by incorporating pumice and zeolite; however, concrete with zeolite displays better resistance. This paper shows that the HSSCC with 15% pumice and 10% zeolite is suitable in terms of fresh, hardened, and durability characteristics.

Keywords: Chloride penetration, immersion, pumice, HSSCC, tidal, zeolite.

4648 Adaptive Fourier Decomposition Based Signal Instantaneous Frequency Computation Approach

Authors: Liming Zhang

Abstract:

There have been different approaches to computing the analytic instantaneous frequency, with a variety of background reasoning, practical applicability, and restrictions. This paper presents an instantaneous frequency computation approach based on the adaptive Fourier decomposition and α-counting. The adaptive Fourier decomposition is a recently proposed signal decomposition approach. The instantaneous frequency can be computed through the so-called mono-components it decomposes the signal into. Due to the fast energy convergence, the highest frequency of the signal will be discarded by the adaptive Fourier decomposition; in most situations this component represents the noise of the signal. A new instantaneous frequency definition for a large class of so-called simple waves is also proposed in this paper. The class of simple waves contains a wide range of signals for which the concept of instantaneous frequency has a clear physical sense. The α-counting instantaneous frequency can be used to compute the highest frequency of a signal. By combining these two approaches, one can obtain the IFs of the whole signal. An experiment demonstrates the computation procedure with promising results.
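The adaptive Fourier decomposition itself is not reproduced here; as background, the sketch below computes the instantaneous frequency of a single mono-component-like signal from the phase derivative of its analytic signal, with an illustrative chirp as input.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                         # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
# A chirp-like mono-component: frequency sweeps from 50 Hz to 100 Hz
signal = np.cos(2 * np.pi * (50 * t + 25 * t ** 2))

analytic = hilbert(signal)                       # analytic signal
phase = np.unwrap(np.angle(analytic))            # instantaneous phase
inst_freq = np.diff(phase) / (2 * np.pi) * fs    # phase derivative -> Hz

print(inst_freq[100], inst_freq[800])            # ~55 Hz early, ~90 Hz late
```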

Keywords: Adaptive Fourier decomposition, Fourier series, signal processing, instantaneous frequency

4647 High Level Synthesis of Digital Filters Based On Sub-Token Forwarding

Authors: Iyad F. Jafar, Sandra J. Alrawashdeh, Ban K. Alhamayel

Abstract:

High level synthesis (HLS) is a process that generates a register-transfer level design for digital systems from a behavioral description. There are many HLS algorithms and commercial tools. However, most of these algorithms consider a behavioral description of the system when a single token is presented to it. This approach does not exploit extra hardware efficiently, especially in the design of digital filters, where common operations may exist between successive tokens. In this paper, we modify the behavioral description to process multiple tokens in parallel. Unlike full parallel processing, however, this approach does not require full hardware replication; it exploits the presence of common operations between successive tokens. The performance of the proposed approach is better than that of sequential processing and approaches that of full parallel processing as the hardware resources are increased.

Keywords: Digital filters, High level synthesis, Sub-token forwarding

4646 An Application of Extreme Value Theory as a Risk Measurement Approach in Frontier Markets

Authors: Dany Ng Cheong Vee, Preethee Nunkoo Gonpot, Noor-Ul-Hacq Sookia

Abstract:

In this paper, we consider the application of Extreme Value Theory (EVT) as a risk measurement tool. The Value at Risk for a set of indices from six stock exchanges of frontier markets is calculated using the Peaks over Threshold method, and the performance of the model is evaluated index-wise using coverage tests and loss functions. Our results show that “fat-tailedness” of the data alone is not enough to justify the use of EVT as a VaR approach; the structure of the returns dynamics is also a determining factor. The approach works well in markets which have had extremes occurring in the past, making the model capable of coping with extremes to come (Colombo, Tunisia and Zagreb Stock Exchanges). On the other hand, we find that indices with lower past than present volatility fail to adequately deal with future extremes (Mauritius and Kazakhstan). We also conclude that using EVT alone produces quite static VaR figures that do not reflect the actual dynamics of the data.
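A minimal Peaks over Threshold sketch, assuming a synthetic heavy-tailed loss series, a 95th-percentile threshold and a generalized Pareto fit via scipy; the indices, thresholds and backtesting used in the paper are not reproduced.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=5000)        # heavy-tailed daily losses (illustrative)

u = np.quantile(losses, 0.95)                   # threshold: 95th percentile of losses
exceedances = losses[losses > u] - u
xi, _, sigma = genpareto.fit(exceedances, floc=0)   # GPD shape and scale, location fixed at 0

def pot_var(q):
    """Peaks-over-Threshold VaR at confidence level q."""
    n, n_u = losses.size, exceedances.size
    return u + (sigma / xi) * ((n * (1 - q) / n_u) ** (-xi) - 1)

print("99% VaR:", round(pot_var(0.99), 3))
```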

Keywords: Extreme Value theory, Financial Crisis 2008, Frontier Markets, Value at Risk.

4645 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach

Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

As the greenhouse effect has been recognized as a serious environmental problem, interest in carbon dioxide (CO2) emissions, which comprise a major part of greenhouse gas (GHG) emissions, has recently increased. Since the construction industry accounts for a relatively large portion of the world's total CO2 emissions, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. In addition, the performance-based design (PBD) methodology based on nonlinear analysis was developed intensively after the Northridge Earthquake in 1994 to assess and assure the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code-based design approach cannot address inelastic earthquake responses directly or assure building performance exactly. Although CO2 emissions and the PBD approach are recent rising issues in the construction industry and structural engineering, few or no studies have considered these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach in the structural design stage, considering the structural materials. A 4-story, 4-span reinforced concrete building was optimally designed to minimize its CO2 emissions and cost while satisfying a specific seismic performance target (collapse prevention under the maximum considered earthquake) and prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design results showed that minimized CO2 emissions and cost were obtained while the specified seismic performance was satisfied. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.
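The core ranking step of NSGA-II is non-dominated sorting; the sketch below extracts the Pareto front of hypothetical (CO2, cost) design pairs, while crowding distance, constraints and the structural analysis are omitted.

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the non-dominated designs, the core ranking step of NSGA-II."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Hypothetical candidate designs: (CO2 emissions in tonnes, cost in 1000 USD)
designs = [(420.0, 310.0), (450.0, 290.0), (400.0, 350.0), (470.0, 285.0), (430.0, 320.0)]
print(pareto_front(designs))
```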

Keywords: CO2 emissions, performance based design, optimization, sustainable design.

4644 Bottom Up Text Mining through Hierarchical Document Representation

Authors: Y. Djouadi., F. Souam.

Abstract:

Most existing text mining approaches are proposed with the transaction database model in mind. Thus, the mined dataset is structured using just one concept, the "transaction", whereas the whole dataset is modeled using the "set" abstract type. In such cases, the structure of the whole dataset and the relationships among the transactions themselves are not modeled and, consequently, not considered in the mining process. We believe that taking into account the structural properties of hierarchically structured information (e.g. textual documents) in the mining process can lead to better results. For this purpose, a hierarchical association rule mining approach for textual documents is proposed in this paper, and the classical set-oriented mining approach is reconsidered in favor of a Directed Acyclic Graph (DAG) oriented approach. Natural language processing techniques are used in order to obtain the DAG structure. Based on this graph model, a hierarchical bottom-up algorithm is proposed. The main idea is that each node is mined together with its parent node.

Keywords: Graph based association rules mining, Hierarchical document structure, Text mining.

4643 Secret Communications Using Synchronized Sixth-Order Chua's Circuits

Authors: López-Gutiérrez R.M., Rodríguez-Orozco E., Cruz-Hernández C., Inzunza-González E., Posadas-Castillo C., García-Guerrero E.E., Cardoza-Avendaño L.

Abstract:

In this paper, we use the Generalized Hamiltonian systems approach to synchronize a modified sixth-order Chua's circuit, which generates hyperchaotic dynamics. Synchronization is obtained between the master and slave dynamics, with the slave given by an observer. We apply this approach to transmit private information (analog and binary), while the encoding remains potentially secure.

Keywords: Hyperchaos synchronization, sixth-order Chua's circuit, observers, simulation, secure communication.

4642 Using Textual Pre-Processing and Text Mining to Create Semantic Links

Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo

Abstract:

This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and, thus, generate new concepts and new semantic links. Even when using more specific vocabularies within the oil domain, our approach achieved satisfactory results, suggesting that the proposal can be applied to other domains and languages, requiring only minor adjustments.

Keywords: Semantic links, data mining, linked data, SKOS.

4641 Two Approaches to Code Mobility in an Agent-based E-commerce System

Authors: Costin Badica, Maria Ganzha, Marcin Paprzycki

Abstract:

Recently, a model multi-agent e-commerce system based on mobile buyer agents and transfer of strategy modules was proposed. In this paper a different approach to code mobility is introduced, where agent mobility is replaced by local agent creation supplemented by similar code mobility as in the original proposal. UML diagrams of agents involved in the new approach to mobility and the augmented system activity diagram are presented and discussed.

Keywords: Agent system, agent mobility, code mobility, e-commerce, UML formalization.

4640 A Parallel Architecture for the Real Time Correction of Stereoscopic Images

Authors: Zohir Irki, Michel Devy

Abstract:

In this paper, we present an architecture for the implementation of a real-time stereoscopic image correction approach. This architecture is parallel and makes use of several memory blocks in which pre-calculated data relating to the cameras used for image acquisition are stored. The use of reduced images proves to be essential in the proposed approach; the suggested architecture must therefore be able to carry out the real-time reduction of the original images.

Keywords: Image reduction, Real-time correction, Parallel architecture, Parallel treatment.

4639 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to improve the surface roughness of a part produced by a CNC milling machine. It presents a case study in which the surface roughness of milled aluminum must be reduced in order to eliminate defects and to improve the process capability indices Cp and Cpk of a CNC milling process. The Six Sigma methodology, the DMAIC (define, measure, analyze, improve, and control) approach, was applied in this study to improve the process, reduce defects, and ultimately reduce costs. The Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that led to the targeted surface roughness specified by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified were feed rate, depth of cut, spindle speed, and surface roughness. The noise factor was the difference between the old cutting tool and the new cutting tool. The confirmation run with the optimal parameters confirmed that the new parameter settings are correct. The new settings also improved the process capability index. This study shows that the Taguchi-based Six Sigma approach can be efficiently used to phase out defects and improve the process capability index of the CNC milling process.
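The Taguchi analysis of a response such as surface roughness typically uses the smaller-the-better signal-to-noise ratio; the sketch below computes it for a few hypothetical L9 trial replicates (the actual factor levels and measurements of the case study are not reproduced).

```python
import numpy as np

# Hypothetical surface roughness replicates (Ra, in micrometres) for three of the
# nine L9 trials; each row is one factor-level combination.
trials = {
    "run1": [0.82, 0.85, 0.80],
    "run2": [0.61, 0.64, 0.60],
    "run3": [0.95, 0.98, 0.93],
}

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio: SN = -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

for run, values in trials.items():
    print(run, round(sn_smaller_is_better(values), 2), "dB")
```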

Keywords: CNC machining, Six Sigma, Surface roughness, Taguchi methodology.

4638 A New Self-Adaptive EP Approach for ANN Weights Training

Authors: Kristina Davoian, Wolfram-M. Lippe

Abstract:

Evolutionary Programming (EP) is a branch of Evolutionary Algorithms (EA) in which mutation is the main reproduction operator. This paper presents a novel EP approach for Artificial Neural Network (ANN) learning. The proposed strategy consists of two components: a self-adaptive component, which contains phenotype information, and a dynamic component, which is described by the genotype. Self-adaptation is achieved by the addition of a value, called the network weight, which depends on the total number of hidden layers and the average number of neurons in the hidden layers. The dynamic component changes its value depending on the fitness of the chromosome exposed to mutation. Thus, the mutation step size is controlled by two components, encapsulated in the algorithm, which adjust it according to the characteristics of the predefined ANN architecture and the fitness of a particular chromosome. A comparative analysis of the proposed approach and classical EP (Gaussian mutation) showed that a significant acceleration of the evolution process is achieved by using both phenotype and genotype information in the mutation strategy.
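For contrast with the proposed self-adaptive scheme, the sketch below shows classical EP with fixed Gaussian mutation applied to the weight vector of a tiny 1-4-1 network on a toy regression task; the network size, mutation step and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 20).reshape(-1, 1)
T = np.sin(np.pi * X)                         # target function to learn

def unpack(w):
    """Split a 13-element vector into the weights of a 1-4-1 tanh network."""
    W1, b1 = w[:4].reshape(1, 4), w[4:8]
    W2, b2 = w[8:12].reshape(4, 1), w[12]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    H = np.tanh(X @ W1 + b1)
    return float(np.mean((H @ W2 + b2 - T) ** 2))

pop = [rng.normal(0, 1, 13) for _ in range(30)]
for _ in range(300):
    # Classical EP step: every parent produces one Gaussian-mutated offspring
    offspring = [p + rng.normal(0, 0.1, 13) for p in pop]
    pop = sorted(pop + offspring, key=mse)[:30]   # (mu + lambda) survivor selection
print("best MSE:", round(mse(pop[0]), 4))
```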

Keywords: Artificial Neural Networks (ANN), Learning Theory, Evolutionary Programming (EP), Mutation, Self-Adaptation.

4637 A Genetic-Algorithm-Based Approach for Audio Steganography

Authors: Mazdak Zamani, Azizah A. Manaf, Rabiah B. Ahmad, Akram M. Zeki, Shahidan Abdullah

Abstract:

In this paper, we present a novel, principled approach to resolve the remaining problems of the substitution technique of audio steganography. Using the proposed genetic algorithm, message bits are embedded into multiple, vague and higher LSB layers, resulting in increased robustness. The robustness is increased especially against intentional attacks that try to reveal the hidden message, as well as against some unintentional attacks such as noise addition.
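A minimal sketch of plain LSB substitution in 16-bit audio samples, which is the baseline the genetic algorithm improves on by selecting among multiple, higher LSB layers; the frame data and message are illustrative.

```python
import numpy as np

def embed_lsb(samples, message_bits):
    """Hide message bits in the least significant bit of 16-bit audio samples."""
    stego = samples.copy()
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit      # clear the LSB, then set it to the message bit
    return stego

def extract_lsb(samples, n_bits):
    return [int(s & 1) for s in samples[:n_bits]]

rng = np.random.default_rng(3)
audio = rng.integers(-2**15, 2**15, size=64, dtype=np.int16)   # toy audio frame
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(audio, message)
assert extract_lsb(stego, len(message)) == message
```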

Keywords: Artificial Intelligence, Audio Steganography, Data Hiding, Genetic Algorithm, Substitution Techniques.

4636 Integrated Subset Split for Balancing Network Utilization and Quality of Routing

Authors: S. V. Kasmir Raja, P. Herbert Raj

Abstract:

The overlay approach has been widely used by many service providers for Traffic Engineering (TE) in large Internet backbones. In the overlay approach, logical connections are set up between edge nodes to form a full-mesh virtual network on top of the physical topology; IP routing is then run over the virtual network. Traffic engineering objectives are achieved by carefully routing the logical connections over the physical links. Although the overlay approach has been implemented in many operational networks, it has a number of well-known scaling issues. This paper proposes a new approach to achieve traffic engineering without full-mesh overlaying, with the help of an integrated approach and an equal subset split method. Traffic engineering needs to determine the optimal routing of traffic over the existing network infrastructure by efficiently allocating resources in order to optimize traffic performance on an IP network. Even though constraint-based routing [1] in Multi-Protocol Label Switching (MPLS) was developed to address this need, it is not widely tested or debugged, so Internet Service Providers (ISPs) resort to TE methods under Open Shortest Path First (OSPF), the most commonly used intra-domain routing protocol. Determining OSPF link weights for optimal network performance is an NP-hard problem. As it is not practical to solve this problem exactly, we present a subset split method that improves efficiency and performance by minimizing the maximum link utilization in the network via a small number of link weight modifications. The results of this method are compared against the results of the MPLS architecture [9] and other heuristic methods.
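A minimal sketch of the underlying optimization target, assuming shortest-path routing with networkx on a toy topology: it computes the maximum link utilization for a given set of OSPF link weights and greedily tries a few single-weight modifications (equal-cost multipath splitting and the paper's subset split method are not modeled).

```python
import itertools
import networkx as nx

# Toy topology: (u, v, capacity in Mb/s); initial OSPF weight 1 on every link.
links = [("A", "B", 100), ("B", "C", 100), ("A", "C", 100), ("C", "D", 100), ("B", "D", 100)]
demands = {("A", "D"): 80, ("A", "C"): 40, ("B", "D"): 50}   # Mb/s, illustrative

def max_utilization(weights):
    """Route every demand on its weighted shortest path and return the worst link utilization."""
    G = nx.Graph()
    for (u, v, cap) in links:
        G.add_edge(u, v, capacity=cap, weight=weights[(u, v)])
    load = {tuple(sorted((u, v))): 0.0 for u, v, _ in links}
    for (s, t), demand in demands.items():
        path = nx.shortest_path(G, s, t, weight="weight")
        for u, v in zip(path, path[1:]):
            load[tuple(sorted((u, v)))] += demand
    return max(load[tuple(sorted((u, v)))] / cap for u, v, cap in links)

weights = {(u, v): 1 for u, v, _ in links}
best = max_utilization(weights)
# Greedy single-weight modification: try small increases and keep any improvement.
for (u, v, _), w in itertools.product(links, [2, 3, 5]):
    trial = dict(weights)
    trial[(u, v)] = w
    util = max_utilization(trial)
    if util < best:
        best, weights = util, trial
print("max link utilization:", round(best, 2), "weights:", weights)
```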

Keywords: Constraint based routing, Link utilization, Subset split method, Traffic Engineering.
