Search results for: Accumulative Sequence Complexity


299 Seismic Performance of Slopes Subjected to Earthquake Mainshock Aftershock Sequences

Authors: Alisha Khanal, Gokhan Saygili

Abstract:

It is commonly observed that aftershocks follow the mainshock. Aftershocks continue over a period of time with decreasing frequency, and typically there is not sufficient time for repair and retrofit between a mainshock and its aftershocks. Aftershocks are usually smaller in magnitude; however, aftershock ground motion characteristics such as intensity and duration can exceed those of the mainshock due to changes in the earthquake mechanism and location with respect to the site. The seismic performance of slopes is typically evaluated based on the sliding displacement predicted to occur along a critical sliding surface. Various empirical models predict sliding displacement as a function of seismic loading parameters, ground motion parameters, and site parameters, but these models do not include aftershocks. The seismic risks associated with post-mainshock slopes ('damaged slopes') subjected to aftershocks are therefore significant. This paper extends the empirical sliding displacement models to flexible slopes subjected to earthquake mainshock-aftershock sequences (a multi-hazard approach). A dataset was developed using 144 pairs of as-recorded mainshock-aftershock sequences from the Pacific Earthquake Engineering Research Center (PEER) database. The results reveal that the combination of mainshock and aftershock increases the seismic demand on slopes relative to the mainshock alone; thus, seismic risks are underestimated if aftershocks are neglected.
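As an illustration of the kind of calculation involved (a rigid-block Newmark integration over a synthetic acceleration history, not the empirical predictive models developed in the paper), the following Python sketch shows how sliding displacement grows when an aftershock is appended to the mainshock record; the yield acceleration `ky` and the ground motions are invented placeholders.

```python
import numpy as np

def newmark_displacement(acc_g, dt, ky, g=9.81):
    """Rigid-block Newmark integration: displacement (m) accumulated while the block slides."""
    ay = ky * g                      # yield acceleration, m/s^2
    vel, disp = 0.0, 0.0
    for a in acc_g * g:              # ground acceleration history, m/s^2
        if vel > 0.0 or a > ay:      # block is already sliding, or yield level is exceeded
            vel = max(vel + (a - ay) * dt, 0.0)
            disp += vel * dt
    return disp

dt = 0.01
t = np.arange(0.0, 40.0, dt)
# Synthetic mainshock and aftershock pulses (amplitudes in g); purely illustrative.
mainshock  = 0.35 * np.exp(-((t - 8.0) / 4.0) ** 2) * np.sin(2 * np.pi * 1.5 * t)
aftershock = 0.25 * np.exp(-((t - 28.0) / 3.0) ** 2) * np.sin(2 * np.pi * 2.0 * t)

d_main = newmark_displacement(mainshock, dt, ky=0.1)
d_seq  = newmark_displacement(mainshock + aftershock, dt, ky=0.1)
print(f"mainshock only: {d_main * 100:.1f} cm, full sequence: {d_seq * 100:.1f} cm")
```

The full sequence always yields a displacement at least as large as the mainshock alone, which is the qualitative effect the abstract describes.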

Keywords: Seismic slope stability, sliding displacement, mainshock, aftershock, landslide, earthquake.

298 Modeling of Knowledge-Intensive Business Processes

Authors: Eckhard M. Ammann

Abstract:

Knowledge development in companies relies on knowledge-intensive business processes, which are characterized by high complexity in their execution, weak structuring, communication-oriented tasks, high decision autonomy, and often the need for creativity and innovation. A foundation of knowledge development is provided, based on a new conception of knowledge and knowledge dynamics. This conception consists of a three-dimensional model of knowledge with types, kinds and qualities. Built on this knowledge conception, knowledge dynamics is modeled with the help of general knowledge conversions between knowledge assets. Here, knowledge dynamics is understood to cover the acquisition, conversion, transfer, development and usage of knowledge. Through this conception we gain a sound basis for knowledge management and development in an enterprise. Especially the type dimension of knowledge, which categorizes it according to its internality and externality with respect to the human being, is crucial for enterprise knowledge management and development, because knowledge should be made available by converting it to more external types. Built on this conception, a modeling approach for knowledge-intensive business processes is introduced, be they human-driven, e-driven or task-driven processes. As an example of this approach, a model of the creative activity for the renewal planning of a product is given.

Keywords: Conception of knowledge, knowledge dynamics, modeling notation, knowledge-intensive business processes.

297 Distributed Case Based Reasoning for Intelligent Tutoring System: An Agent Based Student Modeling Paradigm

Authors: O. P. Rishi, Rekha Govil, Madhavi Sinha

Abstract:

Online learning with an Intelligent Tutoring System (ITS) is becoming very popular: the system models the student's learning behavior and presents the learning material (content, questions and answers, assignments) to the student accordingly. In today's distributed computing environment, the tutoring system can take advantage of networking to reuse the model built for one student for other students from similar groups. In the present paper we present a methodology in which, using Case Based Reasoning (CBR), the ITS provides student modeling for online learning in a distributed environment with the help of agents. The paper describes the approach, the architecture, and the agent characteristics of such a system. This concept can be deployed to develop an ITS where the tutor can author and the students can learn locally, whereas the ITS models the students' learning globally in a distributed environment. The advantage of such an approach is that both the learning material (domain knowledge) and the student model can be globally distributed, thus enhancing the efficiency of the ITS while reducing the bandwidth requirement and complexity of the system.
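A toy Python sketch of the retrieve step that such a CBR-based student-modeling agent could perform; the profile features, similarity measure and tutoring strategies are purely illustrative and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class StudentCase:
    profile: dict          # observed learning-behavior features
    strategy: str          # tutoring strategy that worked for this case

def similarity(p1: dict, p2: dict) -> float:
    """Fraction of shared features with matching values (simple nominal-attribute measure)."""
    keys = set(p1) & set(p2)
    return sum(p1[k] == p2[k] for k in keys) / max(len(keys), 1)

case_base = [
    StudentCase({"pace": "slow", "style": "visual",  "errors": "conceptual"}, "worked examples first"),
    StudentCase({"pace": "fast", "style": "textual", "errors": "careless"},   "frequent short quizzes"),
    StudentCase({"pace": "slow", "style": "textual", "errors": "conceptual"}, "scaffolded hints"),
]

# Retrieve the most similar stored case for a new learner and reuse its strategy.
new_student = {"pace": "slow", "style": "visual", "errors": "careless"}
best = max(case_base, key=lambda c: similarity(c.profile, new_student))
print("reused strategy:", best.strategy)
```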

Keywords: CBR, ITS, student modeling, distributed system, intelligent agent.

296 Dynamic Metrics for Polymorphism in Object Oriented Systems

Authors: Parvinder Singh Sandhu, Gurdev Singh

Abstract:

Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules. Software metrics are instruments for measuring aspects of a software product. These metrics are used throughout a software project to assist in estimation, quality control, productivity assessment, and project control. Object-oriented software metrics focus on measurements that are applied to classes and related characteristics. These measurements inform the software engineer about the behavior of the software and how changes can be made that will reduce complexity and improve the continuing capability of the software. Object-oriented software metrics can be classified into two types: static and dynamic. Static metrics are concerned with measuring the software through static analysis, whereas dynamic metrics are concerned with measuring the software at run time. Most earlier work focused on static metrics; some work has also been done on the dynamic aspects of software measurement, but this area still demands further research. In this paper we give a set of dynamic metrics specifically for polymorphism in object-oriented systems.
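A minimal Python sketch of how one such dynamic measure could be collected at run time: each polymorphic call records the concrete class that handled it, yielding a count of distinct receiver classes per method. The metric name and mechanism are illustrative assumptions, not the paper's metric suite.

```python
from collections import defaultdict

dispatch_log = defaultdict(set)   # method name -> concrete classes observed at run time

def track_dispatch(method):
    def wrapper(self, *args, **kwargs):
        dispatch_log[method.__name__].add(type(self).__name__)
        return method(self, *args, **kwargs)
    return wrapper

class Shape:
    @track_dispatch
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r): self.r = r
    @track_dispatch
    def area(self): return 3.14159 * self.r ** 2

class Square(Shape):
    def __init__(self, s): self.s = s
    @track_dispatch
    def area(self): return self.s ** 2

for s in [Circle(1.0), Square(2.0), Circle(3.0)]:
    s.area()

# A simple dynamic polymorphism metric: distinct concrete classes answering each call.
for name, classes in dispatch_log.items():
    print(f"{name}: {len(classes)} receiver classes at run time -> {sorted(classes)}")
```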

Keywords: Metrics, Software, Quality, Object oriented system, Polymorphism.

295 Ensembling Adaptively Constructed Polynomial Regression Models

Authors: Gints Jekabsons

Abstract:

The subset selection approach to polynomial regression model building assumes that the chosen fixed full set of predefined basis functions contains a subset sufficient to describe the target relation well. However, in most cases the necessary set of basis functions is not known and needs to be guessed – a potentially non-trivial (and long) trial-and-error process. In our research we consider a potentially more efficient approach – Adaptive Basis Function Construction (ABFC). It lets the model building method itself construct the basis functions necessary for creating a model of arbitrary complexity with adequate predictive performance. However, two issues to some extent plague both subset selection and ABFC methods, especially when working with relatively small data samples: selection bias and selection instability. We try to correct these issues by model post-evaluation using cross-validation and model ensembling. To evaluate the proposed method, we empirically compare it to ABFC methods without ensembling, to a widely used method of subset selection, as well as to some other well-known regression modeling methods, using publicly available data sets.
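A compact Python sketch of the ensembling idea, with a simple exhaustive subset selection standing in for the ABFC procedure described in the paper: one model is selected per cross-validation split and the per-split models are averaged.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=(200, 1))
y = 1.0 + 2.0 * x[:, 0] - 0.5 * x[:, 0] ** 3 + rng.normal(0, 0.2, 200)

def design(x, powers):
    return np.column_stack([x[:, 0] ** p for p in powers])

def fit(x, y, powers):
    coef, *_ = np.linalg.lstsq(design(x, powers), y, rcond=None)
    return coef

def predict(x, powers, coef):
    return design(x, powers) @ coef

candidate_powers = list(range(0, 6))        # basis functions x^0 .. x^5

def select_model(xtr, ytr, xval, yval):
    """Pick the subset of basis functions with the lowest validation error."""
    best = None
    for k in range(1, len(candidate_powers) + 1):
        for powers in combinations(candidate_powers, k):
            coef = fit(xtr, ytr, powers)
            err = np.mean((predict(xval, powers, coef) - yval) ** 2)
            if best is None or err < best[0]:
                best = (err, powers, coef)
    return best[1], best[2]

# One selected model per CV fold; the ensemble averages their predictions.
k_folds = 5
folds = np.array_split(rng.permutation(len(x)), k_folds)
models = []
for i in range(k_folds):
    val = folds[i]
    tr = np.concatenate([folds[j] for j in range(k_folds) if j != i])
    models.append(select_model(x[tr], y[tr], x[val], y[val]))

x_test = np.linspace(-2, 2, 5).reshape(-1, 1)
ensemble_pred = np.mean([predict(x_test, p, c) for p, c in models], axis=0)
print(np.round(ensemble_pred, 3))
```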

Keywords: Basis function construction, heuristic search, model ensembles, polynomial regression.

294 HEXAFLY-INT Project: Design of a High Speed Flight Experiment

Authors: S. Di Benedetto, M. P. Di Donato, A. Rispoli, S. Cardone, J. Riehmer, J. Steelant, L. Vecchione

Abstract:

Thanks to coordinated funding by the European Space Agency (ESA) and the European Commission (EC) within the 7th Framework Programme, the High-Speed Experimental Fly Vehicles – International (HEXAFLY-INT) project is aimed at the flight validation of hypersonic technologies enabling future trans-atmospheric flights. The project, which currently involves partners from Europe, the Russian Federation and Australia operating under ESA/ESTEC coordination, will achieve the goal of designing, manufacturing, assembling and flight testing an unpowered high-speed vehicle in a glider configuration by 2018. The main technical challenges of the project are specifically related to the design of the vehicle gliding configuration and to the complexity of integrating breakthrough technologies with standard aeronautical technologies, e.g. a high-temperature protection system and airframe cold structures. Also, the sonic boom impact, which is one of the environmental challenges of high-speed flight, will be assessed. This paper provides a comprehensive and detailed update on all the project activities carried out to date on both the vehicle and mission design.

Keywords: Design, flight testing, hypersonics, integration.

293 Extended Well-Founded Semantics in Bilattices

Authors: Daniel Stamate

Abstract:

One of the most widely used assumptions in logic programming and deductive databases is the so-called Closed World Assumption (CWA), according to which the atoms that cannot be inferred from the programs are considered to be false (i.e. a pessimistic assumption). One of the most successful semantics of conventional logic programs based on the CWA is the well-founded semantics. However, the CWA is not applicable in all circumstances when information is handled; that is, the well-founded semantics, if conventionally defined, would behave inadequately in various cases. The solution we adopt in this paper is to extend the well-founded semantics so that it can also be based on other assumptions. The basis of (default) negative information in the well-founded semantics is given by the so-called unfounded sets. We extend this concept by considering optimistic, pessimistic, skeptical and paraconsistent assumptions, used to complete missing information from a program. Our semantics, called the extended well-founded semantics, also expresses imperfect information considered to be missing/incomplete, uncertain and/or inconsistent, by using bilattices as multivalued logics. We provide a method of computing the extended well-founded semantics and show that the Kripke-Kleene semantics is captured by considering a skeptical assumption. We also show that the complexity of computing our semantics is polynomial.

Keywords: Logic programs, imperfect information, multivalued logics, bilattices, assumptions.

292 The Application of Active Learning to Develop Creativity in General Education

Authors: Chalermwut Wijit

Abstract:

This research was conducted in order to 1) study the result of applying "Active Learning" in a general education subject to develop creativity and 2) explore problems and obstacles in applying Active Learning in a general education subject to improve creativity among the 1,780 undergraduate students who registered for this subject in the first semester of 2013. The research was implemented by allocating the students into groups of 10-15 students and assigning them to design activities for society under four main conditions: 1) require no financial resources, 2) be practical, 3) allow every student to participate, and 4) be accomplished within 2 weeks. The researcher evaluated creativity before and after the study. Finally, the problems and obstacles encountered while creating the activities were evaluated from the open-ended questions in the questionnaires. The results show that overall average scores of students' abilities increased significantly in terms of creativity, analytical ability and synthesis, the complexity of the working plan, and teamwork. It can be inferred from the outcome that active learning is one of the most efficient methods for developing creativity in general education.

Keywords: Creative Thinking, Active Learning, General Education.

291 Wind Tunnel for Aerodynamic Development Testing

Authors: E. T. L. Cöuras Ford, V. A. C. Vale, J. U. L. Mendes, F. A. Ribeiro

Abstract:

The study of aerodynamics is related to improving the performance of airplanes and automobiles, with the objective of reducing the effect of air friction on structures, providing higher speeds and lower fuel consumption. The application of aerodynamic knowledge is no longer limited to the aeronautical and automobile industries. This research therefore aims at the design and construction of a wind tunnel to perform aerodynamic analysis on car bodies, seeking greater efficiency. For this, a methodology for selecting the type of wind tunnel to be built was developed, taking into account the various existing configurations; an open-circuit tunnel was chosen due to its lower complexity of construction and installation, operational simplicity and low cost. The guidelines for the project were educational: to study the boundary layer and to analyze specimens with different geometries. The pressure variation in the test section was measured with a Pitot tube connected to a gauge. Thus, it was possible to obtain quantitative and qualitative results, which proved to be satisfactory.

Keywords: Wind tunnel, Aerodynamics, Air.

290 All Types of Base Pair Substitutions Induced by γ-Rays in Haploid and Diploid Yeast Cells

Authors: Natalia Koltovaya, Nadezhda Zhuchkina, Ksenia Lyubimova

Abstract:

We study the biological effects induced by ionizing radiation in view of therapeutic exposure and the idea of space flights beyond Earth's magnetosphere. In particular, we examine the differences in base pair substitution induction by ionizing radiation between model haploid and diploid yeast Saccharomyces cerevisiae cells. Such mutations are difficult to study in higher eukaryotic systems. In our research, we have used a collection of six isogenic trp5-strains and 14 isogenic haploid and diploid cyc1-strains that are specific markers of all possible base-pair substitutions. These strains differ from each other only in single base substitutions within codon-50 of the trp5 gene or codon-22 of the cyc1 gene. Different mutation spectra were obtained for the two haploid genetic assays (trp5 and cyc1), and different mutation spectra were obtained for the same cyc1 genetic system in cells of different ploidy — haploid and diploid. The dose dependence was a linear function in haploid cells and exponential in diploid cells. We suggest that the differences between haploid yeast strains reflect the dependence on sequence context, while the differences between haploid and diploid strains reflect different molecular mechanisms of mutation.

Keywords: Base pair substitutions, γ-rays, haploid and diploid cells, yeast Saccharomyces cerevisiae.

289 Effect of Atmospheric Turbulence on Acquisition Time of Ground to Deep Space Optical Communication System

Authors: Hemani Kaushal, V. K. Jain, Subrat Kar

Abstract:

The performance of ground-to-deep-space optical communication systems is degraded by distortion of the beam as it propagates through the turbulent atmosphere. Turbulence causes fluctuations in the intensity of the received signal, which ultimately affect the acquisition time required to acquire and locate the spaceborne target using a narrow laser beam. In this paper, the performance of a free-space optical (FSO) communication system in atmospheric turbulence has been analyzed in terms of acquisition time for coherent and non-coherent modulation schemes. Numerical results presented in graphical and tabular forms show that the acquisition time increases with the turbulence level for both types of schemes. BPSK has the lowest acquisition time among all schemes. Among the non-coherent schemes, M-PPM performs better than the others; as M increases, the acquisition time becomes lower, but at the cost of increased system complexity.

Keywords: Atmospheric Turbulence, Acquisition Time, Binary Phase Shift Keying (BPSK), Free-Space Optical (FSO) Communication System, M-ary Pulse Position Modulation (M-PPM), Coherent/Non-coherent Modulation Schemes.

288 CompleX-Machine: An Automated Testing Tool Using X-Machine Theory

Authors: E. K. A. Ogunshile

Abstract:

This paper is aimed at creating an automated Java X-Machine testing tool for software development. The nature of software development is changing; thus, the type of software testing tools required is also changing. Software is growing increasingly complex and, in part due to the commercial impetus for faster software releases with new features and value, is increasingly in danger of containing faults. These faults can incur huge costs for software development organisations and users; Cambridge Judge Business School's research estimated the cost of software bugs to the global economy at $312 billion. Beyond the cost, faster software development methodologies and increasing expectations on developers to become testers are driving demand for faster, automated, and effective tools to prevent potential faults as early as possible in the software development lifecycle. Using X-Machine theory, this paper explores a new tool to address software complexity, changing expectations on developers, and faster development pressures and methodologies, with a view to reducing the huge cost of fixing software bugs.
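A small Python sketch of the underlying X-machine idea (illustrative, not the CompleX-Machine tool): a finite-state control whose transitions are labelled by processing functions that transform a memory value, so that test sequences can be derived from paths through the state diagram.

```python
# A toy "counter" X-machine: inputs are "inc"/"reset", memory is an integer,
# and each transition maps (current_state, input) -> (processing_function, next_state).
def inc(mem, _inp):   return mem + 1
def reset(mem, _inp): return 0

transitions = {
    ("idle",     "inc"):   (inc,   "counting"),
    ("counting", "inc"):   (inc,   "counting"),
    ("counting", "reset"): (reset, "idle"),
}

def run(inputs, state="idle", mem=0):
    """Drive the X-machine over an input stream; undefined (state, input) pairs are rejected."""
    for inp in inputs:
        key = (state, inp)
        if key not in transitions:
            raise ValueError(f"no transition for {key}")
        fn, state = transitions[key]
        mem = fn(mem, inp)
    return state, mem

# A test case derived from one path through the state diagram:
assert run(["inc", "inc", "reset"]) == ("idle", 0)
print(run(["inc", "inc"]))   # ('counting', 2)
```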

Keywords: Conformance testing, finite state machine, software testing, X-Machine.

287 Evaluation of the Environmental Risk from the Co-Deposition of Waste Rock Material and Fly Ash

Authors: A. Mavrikos, N. Petsas, E. Kaltsi, D. Kaliampakos

Abstract:

The lignite-fired power plants in the Western Macedonia Lignite Center produce more than 8×10⁶ t of fly ash per year. Approximately 90% of this quantity is used for restoration-reclamation of exhausted open-cast lignite mines and slope stabilization of the overburden. The purpose of this work is to evaluate the environmental behavior of the mixture of waste rock and fly ash that is being used in the external deposition site of the South Field lignite mine. For this reason, a borehole was made within the site and 86 samples were taken and subjected to chemical analyses and leaching tests. The results showed very limited leaching of trace elements and heavy metals from this mixture. Moreover, when compared to the limit values set for waste acceptable in inert waste landfills, only a few exceedances were observed, indicating only minor risk for groundwater pollution. However, due to the complexity of both the leaching process and the contaminant pathway, more boreholes and analyses should be made in nearby locations and a systematic groundwater monitoring program should be implemented both downstream and within the external deposition site.

Keywords: Co-deposition, fly ash, leaching tests, lignite, waste rock.

286 Sorting Primitives and Genome Rearrangement in Bioinformatics: A Unified Perspective

Authors: Swapnoneel Roy, Minhazur Rahman, Ashok Kumar Thakur

Abstract:

Bioinformatics and computational biology involve the use of techniques from applied mathematics, informatics, statistics, computer science, artificial intelligence, chemistry, and biochemistry to solve biological problems, usually at the molecular level. Research in computational biology often overlaps with systems biology. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, and the modeling of evolution. Various global rearrangements of permutations, such as reversals and transpositions, have recently become of interest because of their applications in computational molecular biology. A reversal is an operation that reverses the order of a substring of a permutation. A transposition is an operation that swaps two adjacent substrings of a permutation. The problem of determining the smallest number of reversals required to transform a given permutation into the identity permutation is called sorting by reversals. Similar problems can be defined for transpositions and other global rearrangements. In this work we study some genome rearrangement primitives. We show how a genome is modelled by a permutation, introduce some of the existing primitives and the lower and upper bounds on them, and then provide a comparison of the introduced primitives.
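A short Python sketch of one primitive discussed above: a naive sorting-by-reversals procedure that places the value i at position i with one reversal per step, realizing the simple upper bound of at most n-1 reversals (it is not an optimal algorithm).

```python
def naive_sort_by_reversals(perm):
    """Return the sorted permutation and the list of reversals (i, j) applied, at most n-1."""
    p = list(perm)
    ops = []
    for i in range(len(p)):
        j = p.index(i + 1)                      # where the value that belongs at position i sits
        if j != i:
            p[i:j + 1] = reversed(p[i:j + 1])   # one reversal brings it home
            ops.append((i, j))
    return p, ops

sorted_perm, reversals = naive_sort_by_reversals([3, 1, 5, 2, 4])
print(sorted_perm)   # [1, 2, 3, 4, 5]
print(reversals)     # the reversal operations used, e.g. [(0, 1), (1, 3), (2, 3), (3, 4)]
```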

Keywords: Sorting Primitives, Genome Rearrangements, Transpositions, Block Interchanges, Strip Exchanges.

285 DAMQ-Based Approach for Efficiently Using the Buffer Spaces of a NoC Router

Authors: Mohammad Ali Jabraeil Jamali, Ahmad Khademzadeh

Abstract:

In this paper we present high-performance dynamically allocated multi-queue (DAMQ) buffer schemes for fault-tolerant system-on-chip applications that require an interconnection network. Two virtual channels share the same buffer space. Fault-tolerant mechanisms for interconnection networks are becoming a critical design issue for large massively parallel computers. They are also important for high-performance SoCs as system complexity keeps increasing rapidly. On the message switching layer, we make improvements to boost system performance when there are faults involved in the communication between components. In the proposed scheme, when a node or a physical channel is deemed faulty, the previous-hop node terminates the buffer occupancy of messages destined for the failed link. The buffer usage decisions are made at the switching layer without interaction with higher abstraction layers, so buffer space is quickly released to messages destined for other healthy nodes. Therefore, the buffer space is used efficiently in case faults occur at some nodes.
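A simplified Python sketch of the DAMQ principle (data structures and names are illustrative, not the paper's hardware design): two virtual channels draw slots from one shared pool, and slots held by messages bound for a link that has been marked faulty are released immediately.

```python
from collections import deque

class DAMQBuffer:
    """Two virtual channels share one pool of buffer slots."""
    def __init__(self, total_slots):
        self.free = total_slots
        self.queues = {0: deque(), 1: deque()}    # vc -> queued (dest, flit) entries

    def enqueue(self, vc, dest, flit):
        if self.free == 0:
            return False                           # shared pool exhausted
        self.queues[vc].append((dest, flit))
        self.free -= 1
        return True

    def dequeue(self, vc):
        dest, flit = self.queues[vc].popleft()
        self.free += 1
        return dest, flit

    def drop_destination(self, faulty_dest):
        """Release every slot occupied by messages headed for a failed link."""
        for vc, q in self.queues.items():
            kept = deque(e for e in q if e[0] != faulty_dest)
            self.free += len(q) - len(kept)
            self.queues[vc] = kept

buf = DAMQBuffer(total_slots=4)
buf.enqueue(0, dest="east", flit="A")
buf.enqueue(0, dest="east", flit="B")
buf.enqueue(1, dest="north", flit="C")
buf.drop_destination("east")                       # east link reported faulty
print(buf.free)                                    # 3 slots free again for healthy traffic
```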

Keywords: DAMQ, NoC, fault tolerant, odd-even routing algorithm, buffer space.

283 Anomaly Detection Using a Neuro-Fuzzy System

Authors: Fatemeh Amiri, Caro Lucas, Nasser Yazdani

Abstract:

As network-based technologies become omnipresent, demands to secure networks and systems against threats increase. One of the effective ways to achieve higher security is through the use of intrusion detection systems (IDS), which are software tools that detect anomalous behavior in a computer or network. In this paper, an IDS has been developed using an improved machine-learning-based algorithm, the Locally Linear Neuro-Fuzzy Model (LLNF), for classification, although this model was originally used for system identification. A key technical challenge in IDS and LLNF learning is the curse of high dimensionality. Therefore a feature selection phase is proposed which is applicable to any IDS. By investigating the use of three feature selection algorithms in this model, it is shown that adding a feature selection phase reduces the computational complexity of our model. Feature selection algorithms require a feature goodness measure; the use of both a linear measure (the linear correlation coefficient) and a non-linear measure (mutual information) is investigated.
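A brief numpy-only sketch of the two feature-goodness measures mentioned, applied to a synthetic two-feature example: the absolute linear correlation coefficient and a histogram-based mutual information estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                          # 0 = normal, 1 = anomalous
f_informative = y + rng.normal(0, 0.5, n)          # feature correlated with the label
f_noise = rng.normal(0, 1.0, n)                    # irrelevant feature
X = np.column_stack([f_informative, f_noise])

def linear_corr(feature, label):
    return abs(np.corrcoef(feature, label)[0, 1])

def mutual_information(feature, label, bins=10):
    joint, _, _ = np.histogram2d(feature, label, bins=(bins, 2))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

for i, name in enumerate(["informative", "noise"]):
    print(f"{name:12s}  |corr| = {linear_corr(X[:, i], y):.3f}   MI = {mutual_information(X[:, i], y):.3f}")
```

The informative feature scores high on both measures, while the irrelevant one scores near zero, which is the basis on which a feature selection phase would discard it.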

Keywords: Anomaly detection, feature selection, Locally Linear Neuro-Fuzzy (LLNF), Mutual Information (MI), linear correlation coefficient.

283 NEAR: Visualizing Information Relations in Multimedia Repository A•VI•RE

Authors: C. Z. Qian, V. Y. Chen, R. F. Woodbury

Abstract:

This paper describes the NEAR (Navigating Exhibitions, Annotations and Resources) panel, a novel interactive visualization technique designed to help people navigate and interpret groups of resources, exhibitions and annotations by revealing hidden relations such as similarities and references. NEAR is implemented on A•VI•RE, an extended online information repository. A•VI•RE supports a semi-structured collection of exhibitions containing various resources and annotations. Users are encouraged to contribute, share, annotate and interpret resources in the system by building their own exhibitions and annotations. However, it is hard to navigate smoothly and efficiently in A•VI•RE because of its high capacity and complexity. We present a visual panel that implements new navigation and communication approaches that support discovery of implied relations. By quickly scanning and interacting with NEAR, users can see not only implied relations but also potential connections among different data elements. NEAR was tested by several users in the A•VI•RE system and shown to be a supportive navigation tool. In the paper, we further analyze the design, report the evaluation and consider its usage in other applications.

Keywords: Measure similarity, trace reference, inherent relation, information visualization, online multimedia repository.

282 Dynamic Capitalization and Visualization Strategy in Collaborative Knowledge Management System for EI Process

Authors: Bolanle F. Oladejo, Victor T. Odumuyiwa, Amos A. David

Abstract:

Knowledge is attributed to humans, whose problem-solving behavior is subjective and complex. In today's knowledge economy, the need to manage knowledge produced by a community of actors cannot be overemphasized. This is due to the fact that actors possess some level of tacit knowledge which is generally difficult to articulate. Problem-solving requires searching and sharing of knowledge among a group of actors in a particular context. Knowledge expressed within the context of a problem resolution must be capitalized for future reuse. In this paper, an approach is proposed that permits dynamic capitalization of relevant and reliable actors' knowledge in solving decision problems following the Economic Intelligence process. A knowledge annotation method and temporal attributes are used for handling the complexity of the communication among actors and for contextualizing expressed knowledge. A prototype is built to demonstrate the functionalities of a collaborative knowledge management system based on this approach. It is tested with sample cases, and the results show that dynamic capitalization leads to knowledge validation, hence increasing the reliability of captured knowledge for reuse. The system can be adapted to various domains.

Keywords: Actors' communication, knowledge annotation, recursive knowledge capitalization, visualization.

281 Specialized Reduced Models of Dynamic Flows in 2-Stroke Engines

Authors: S. Cagin, X. Fischer, E. Delacourt, N. Bourabaa, C. Morin, D. Coutellier, B. Carré, S. Loumé

Abstract:

The complexity of scavenging by ports and its impact on engine efficiency create the need to understand and model it as realistically as possible. However, there are few empirical scavenging models, and these are highly specialized; in a design optimization process they appear very restricted and their field of use is limited. This paper presents a comparison of two methods to establish and reduce a model of the scavenging process in 2-stroke diesel engines. To address the lack of scavenging models, a CFD model has been developed and is used as the reference case. However, its large size requires a reduction. Two techniques have been tested, depending on their fields of application: the NTF method and neural networks. Both appear highly appropriate, drastically reducing the model's size (over 90% reduction) with a low relative error rate (under 10%). Furthermore, each method produces a reduced model which can be used in a distinct specialized field of application: the distribution of a quantity (mass fraction, for example) in the cylinder at each time step (pseudo-dynamic model) or the qualification of scavenging at the end of the process (pseudo-static model).

Keywords: Diesel engine, Design optimization, Model reduction, Neural network, NTF algorithm, Scavenging.

280 Optimal Path Planning under A Priori Information in Stochastic, Time-Varying Networks

Authors: Siliang Wang, Minghui Wang, Jun Hu

Abstract:

A novel path planning approach is presented to solve the optimal path problem in stochastic, time-varying networks under a priori traffic information. Most existing studies make use of dynamic programming to find the optimal path; however, those methods have been proved unable to obtain the globally optimal value, and designing efficient algorithms remains another challenge. This paper employs a decision-theoretic framework for defining the optimal path: for a given source S and destination D in an urban transit network, we seek an S-D path of lowest expected travel time where the link travel times are discrete random variables. To overcome the deficiencies of dynamic programming, such as the curse of dimensionality and violation of the principle of optimality, an integer programming model is built to realize the assignment of discrete travel time variables to arcs. Simultaneously, pruning techniques are applied to reduce the computational complexity of the algorithm. The final experiments show the feasibility of the novel approach.
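For comparison, a simplified baseline in Python (not the paper's integer-programming formulation): each link travel time is a discrete random variable, and a Dijkstra search over expected link times returns a lowest-expected-travel-time S-D path.

```python
import heapq

# graph[node] = list of (neighbor, [(travel_time, probability), ...])
graph = {
    "S": [("A", [(4, 0.5), (8, 0.5)]), ("B", [(5, 1.0)])],
    "A": [("D", [(3, 0.7), (9, 0.3)])],
    "B": [("D", [(6, 0.5), (2, 0.5)])],
    "D": [],
}

def expected(dist):
    return sum(t * p for t, p in dist)

def lowest_expected_time_path(graph, src, dst):
    best, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            break
        if cost > best.get(node, float("inf")):
            continue
        for nxt, dist in graph[node]:
            c = cost + expected(dist)
            if c < best.get(nxt, float("inf")):
                best[nxt], prev[nxt] = c, node
                heapq.heappush(heap, (c, nxt))
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1], best[dst]

print(lowest_expected_time_path(graph, "S", "D"))   # (['S', 'B', 'D'], 9.0)
```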

Keywords: pruning method, stochastic, time-varying networks, optimal path planning.

279 Corporate Governance Practices and Audit Quality: An Empirical Study of the Listed Companies in Egypt

Authors: Mohamed Moustafa Soliman, Mohamed Abd Elsalam

Abstract:

Recent international financial scandals have led to a number of investigations into the effectiveness of corporate governance practices and audit quality. Although evidence on corporate governance practices and audit quality exists from developed economies, very few studies have been conducted in Egypt, where corporate governance is just evolving. Therefore, this study provides evidence on the effectiveness of corporate governance practices and audit quality from a developing country. The data for analysis are gathered from the top 50 most active companies in the Egyptian Stock Exchange, covering the three-year period 2007-2009. Logistic regression was used in investigating the questions raised in the study. Findings from the study show that board independence, CEO duality, and audit committees have a significant relationship with audit quality. The results also indicate that institutional investor ownership and managerial ownership have no significant relationship with audit quality. Evidence also exists that company size, complexity, and business leverage are important factors in audit quality for companies quoted on the Egyptian Stock Exchange.

Keywords: Corporate governance, Boards of directors, corporate ownership, Audit Committees, Audit quality, and Egypt.

278 An Algorithm of Finite Capacity Material Requirement Planning System for Multi-stage Assembly Flow Shop

Authors: T. Wuttipornpun, U. Wangrakdiskul, W. Songserm

Abstract:

This paper aims to develop an algorithm for a finite capacity material requirement planning (FCMRP) system for a multi-stage assembly flow shop. The developed FCMRP system has two main stages. The first stage allocates operations to the first- and second-priority work centers and also determines the sequence of the operations on each work center. The second stage determines the optimal start time of each operation by using a linear programming model. Real data from a factory are used to analyze and evaluate the effectiveness of the proposed FCMRP system and also to guarantee a practical solution for the user. There are five performance measures, namely, the total tardiness, the number of tardy orders, the total earliness, the number of early orders, and the average flow time. The proposed FCMRP system offers an adjustable solution, which is a compromise among the conflicting performance measures. The user can adjust the weight of each performance measure to obtain the desired performance. The result shows that the combination of FCMRP NP3 and EDD outperforms other combinations in terms of the overall performance index. The calculation time for the proposed FCMRP system is about 10 minutes, which is practical for the planners of the factory.
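A toy Python sketch of the second stage's idea, assuming SciPy is available: once two operations have been sequenced, a small linear program picks start times that respect the precedence constraint while minimizing tardiness against the due date. Processing times, release time and due date are invented.

```python
from scipy.optimize import linprog

p1, p2 = 3.0, 4.0         # processing times of two sequenced operations
release, due = 1.0, 8.0   # earliest start of op1 and due date of the order

# Variables: x = [s1, s2, T] where s1, s2 are start times and T is the tardiness.
c = [0.001, 0.001, 1.0]                    # mainly minimize tardiness, mildly prefer early starts
A_ub = [[ 1.0, -1.0,  0.0],                # s1 + p1 <= s2   (op2 follows op1)
        [ 0.0,  1.0, -1.0]]                # s2 + p2 - due <= T
b_ub = [-p1, due - p2]
bounds = [(release, None), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
s1, s2, tardiness = res.x
print(f"start op1 = {s1:.1f}, start op2 = {s2:.1f}, tardiness = {tardiness:.1f}")
```

In a full system this LP would cover all operations of all orders, with the sequencing decisions of the first stage fixing which precedence constraints appear.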

Keywords: Material requirement planning, Finite capacity, Linear programming, Permutation, Application in industry.

277 Faster FPGA Routing Solution using DNA Computing

Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon

Abstract:

There are many classical algorithms for finding routings in FPGAs, but using DNA computing the routes can be found efficiently and quickly. The run-time complexity of DNA algorithms is much lower than that of other classical algorithms used for solving routing in FPGAs. Research in DNA computing is still at an early stage. The high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool. Many research accomplishments have shown that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper we propose a two-tier approach for the FPGA routing solution. First, the geometric FPGA detailed routing task is solved by transforming it into a Boolean satisfiability equation with the property that any assignment of input variables that satisfies the equation specifies a valid routing; a satisfying assignment for a particular route results in a valid routing, and the absence of a satisfying assignment implies that the layout is un-routable. In the second step, a DNA search algorithm is applied to this Boolean equation to solve routing alternatives utilizing the properties of DNA computation. The simulated results are satisfactory and give an indication of the applicability of DNA computing for solving the FPGA routing problem.
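A tiny Python sketch of the first tier only, with exhaustive enumeration standing in for the DNA search step: two nets must each choose one of two routing channels of capacity one, the constraint is expressed as a Boolean formula, and every satisfying assignment is a valid routing.

```python
from itertools import product

# Variables: a = "net 1 uses channel X" (otherwise channel Y), b = "net 2 uses channel X".
# Routing constraint: each channel has capacity for exactly one of the two nets.
def routable(a, b):
    return not (a and b) and not ((not a) and (not b))

solutions = [(a, b) for a, b in product([False, True], repeat=2) if routable(a, b)]
if solutions:
    for a, b in solutions:
        print(f"net1 -> {'X' if a else 'Y'}, net2 -> {'X' if b else 'Y'}")
else:
    print("layout is un-routable")
```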

Keywords: FPGA, Routing, DNA Computing.

276 Multi-Modal Visualization of Working Instructions for Assembly Operations

Authors: Josef Wolfartsberger, Michael Heiml, Georg Schwarz, Sabrina Egger

Abstract:

Growing individualization and higher numbers of variants in industrial assembly products raise the complexity of manufacturing processes. Technical assistance systems considering both procedural and human factors allow for an increase in product quality and a decrease in required learning times by supporting workers with precise working instructions. Due to varying needs of workers, the presentation of working instructions leads to several challenges. This paper presents an approach for a multi-modal visualization application to support assembly work of complex parts. Our approach is integrated within an interconnected assistance system network and supports the presentation of cloud-streamed textual instructions, images, videos, 3D animations and audio files along with multi-modal user interaction, customizable UI, multi-platform support (e.g. tablet-PC, TV screen, smartphone or Augmented Reality devices), automated text translation and speech synthesis. The worker benefits from more accessible and up-to-date instructions presented in an easy-to-read way.

Keywords: Assembly, assistive technologies, augmented reality, manufacturing, visualization.

275 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home

Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We assume an IoT-based smart-home environment where the on-off status of each of the electrical appliances, including the room lights, can be recognized in real time by monitoring and analyzing the smart meter data. At any moment in such an environment, we can recognize what the household or the user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). We learn as our candidate models a few classifiers such as naïve Bayes, decision tree, and logistic regression that can map the appliance-status data into Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations such as cooking, eating, dish washing, and so on is generated with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transition and the resulting appliance status are determined stochastically. To compare the performances of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. The result of our empirical study reveals that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.
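A compact scikit-learn sketch of the comparison, with a synthetic rule replacing the smart-meter behavior simulation: binary appliance-status vectors are labelled Yes/No for vacuum cleaning and the three candidate classifiers are scored by cross-validation.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
# Features: on/off status of [stove, TV, dishwasher, bedroom light, living-room light]
X = rng.integers(0, 2, size=(n, 5))
# Synthetic rule: cleaning is allowed when the stove and TV are both off.
y = ((X[:, 0] == 0) & (X[:, 1] == 0)).astype(int)
# Add label noise so the task is not trivially separable.
flip = rng.random(n) < 0.1
y = np.where(flip, 1 - y, y)

for name, clf in [("naive Bayes", BernoulliNB()),
                  ("decision tree", DecisionTreeClassifier(max_depth=4)),
                  ("logistic regression", LogisticRegression())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:20s} accuracy = {acc:.3f}")
```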

Keywords: Situation-awareness, Smart home, IoT, Machine learning, Classifier.

274 Performance Comparison of Neural Architectures for On-Line Speed Estimation in Sensorless IM Drives

Authors: K. Sedhuraman, S. Himavathi, A. Muthuramalingam

Abstract:

The performance of a sensorless controlled induction motor drive depends on the accuracy of the estimated speed. Conventional estimation techniques, being mathematically complex, require more execution time, resulting in poor dynamic response. The nonlinear mapping capability and powerful learning algorithms of neural networks provide a promising alternative for on-line speed estimation. The on-line speed estimator requires the NN model to be accurate, simple in design, structurally compact and computationally inexpensive to ensure faster execution and effective control in real-time implementation. This in turn depends to a large extent on the type of neural architecture. This paper investigates three types of neural architectures for on-line speed estimation, and their performance is compared in terms of accuracy, structural compactness, computational complexity and execution time. The suitable neural architecture for on-line speed estimation is identified and the promising results obtained are presented.

Keywords: Sensorless IM drives, rotor speed estimators, artificial neural network, feed-forward architecture, single neuron cascaded architecture.

273 Quality Service Standard of Food and Beverage Service Staff in Hotel

Authors: Thanasit Suksutdhi

Abstract:

This survey research aims to study the standard of service quality of food and beverage service staff in the hotel business by studying the service standards of three sample hotels: Siam Kempinski Hotel Bangkok, Four Seasons Resort Chiang Mai, and Banyan Tree Phuket. In order to find the international service standard of food and beverage service, a triangulated design combining quantitative, qualitative, and survey methods was employed. In this research, questionnaires and in-depth interviews were used for obtaining information on the sequences and methods of service. There were three parts of modified questionnaires to measure service quality and guest satisfaction, covering service facilities, attentiveness, responsibility, reliability, and circumspection. This study used simple random sampling to derive subjects; the return rate of the questionnaires was 70%, or 280 responses. Data were analyzed with SPSS to find the arithmetic mean, SD, and percentage, with comparisons by t-test and one-way ANOVA. The results revealed that the service quality of the three hotels was at the international level, which could create high satisfaction among international customers. Recommendations from the research were to maintain the areas of good service quality and to improve some dimensions of service quality, such as reliability. Training in service standards, product knowledge, and new technology for employees should be provided. Furthermore, in order to develop the service quality of the industry, training collaboration between hotel organizations and educational institutions in food and beverage service should be considered.

Keywords: Service standard, food and beverage department, sequence of service, service method.

272 A Differential Calculus Based Image Steganography with Crossover

Authors: Srilekha Mukherjee, Subha Ash, Goutam Sanyal

Abstract:

Information security plays a major role in uplifting the standard of secured communications via global media. In this paper, we suggest a technique of encryption followed by insertion before transmission. We have implemented two different concepts to carry out the above-specified tasks. We use a two-point crossover technique from the genetic algorithm to facilitate the encryption process. For each of the uniquely identified rows of pixels, different mathematical methodologies are applied under several checked conditions in order to identify all the parent pixels on which we perform the crossover operation. This is done by selecting two crossover points within the pixels, thereby producing the newly encrypted child pixels and hence the encrypted cover image. In the next phase, the first- and second-order derivative operators are evaluated to increase the security and robustness. The last phase reapplies the crossover procedure to form the final stego-image. The complexity of this system as a whole is huge, thereby dissuading third-party interference. Also, the embedding capacity is very high; therefore, a larger amount of secret image information can be hidden. The imperceptible appearance of the obtained stego-image clearly proves the proficiency of this approach.
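A minimal numpy sketch of the two-point crossover step alone (the parent-selection rules and the derivative stages of the scheme are omitted): two parent pixel rows exchange the segment between two crossover points, and reapplying the same crossover restores the originals.

```python
import numpy as np

def two_point_crossover(parent_a, parent_b, p1, p2):
    """Swap the pixel segment [p1, p2) between two parent rows."""
    child_a, child_b = parent_a.copy(), parent_b.copy()
    child_a[p1:p2], child_b[p1:p2] = parent_b[p1:p2].copy(), parent_a[p1:p2].copy()
    return child_a, child_b

rng = np.random.default_rng(42)
row1 = rng.integers(0, 256, size=12, dtype=np.uint8)   # two rows of an 8-bit grayscale cover image
row2 = rng.integers(0, 256, size=12, dtype=np.uint8)
p1, p2 = sorted(rng.integers(0, 12, size=2))

c1, c2 = two_point_crossover(row1, row2, p1, p2)
print("crossover points:", p1, p2)
print(c1)
print(c2)
# Reapplying the same crossover restores the parents, so the scrambling is reversible.
assert np.array_equal(two_point_crossover(c1, c2, p1, p2)[0], row1)
```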

Keywords: Steganography, Crossover, Differential Calculus, Peak Signal to Noise Ratio, Cross-correlation Coefficient.

271 Interoperability in Component Based Software Development

Authors: M. Madiajagan, B. Vijayakumar

Abstract:

Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communications, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns – specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on an assembly of components available on a local area network or on the net. These components must be localized and identified in terms of available services and communication protocols before any request. The first part of the article introduces the base concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction, and the last section shows how different models can communicate among themselves.

Keywords: Interoperability, component packaging, communication technology, heterogeneous platform, component interface, middleware.

270 Secure Hashing Algorithm and Advance Encryption Algorithm in Cloud Computing

Authors: Jaimin Patel

Abstract:

Cloud computing is one of the most significant and important movements in computing technology. It provides flexibility to users, cost effectiveness, location independence, easy maintenance, multitenancy, drastic performance improvements, and increased productivity. On the other hand, there are also major issues, such as security. Because a cloud is a shared server environment, security is a major concern; it is important to protect users' private data, especially in e-commerce and social networks. In this paper, encryption algorithms such as the Advanced Encryption Standard, their vulnerabilities, risk of attacks, optimal time and complexity management, and a comparison with other algorithms based on software implementation are presented. Encryption techniques to improve the performance of AES algorithms and to reduce risk are given. Secure Hash Algorithms, their vulnerabilities, software implementations, risk of attacks and comparison with other hashing algorithms, as well as the advantages and disadvantages of hashing versus encryption, are also given.
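A short sketch of the hashing side using only the Python standard library: a SHA-256 digest via hashlib, plus the usual birthday-bound approximation showing why a truncated digest invites collision (birthday) attacks while a full 256-bit digest does not. The AES comparison would require a third-party cryptography library and is not reproduced here.

```python
import hashlib
import math

data = b"customer record: alice, order #1001"
print("SHA-256:", hashlib.sha256(data).hexdigest())

def birthday_collision_probability(n_bits, n_hashes):
    """Approximate probability that at least two of n_hashes values collide in an n_bits-wide hash."""
    space = 2.0 ** n_bits
    return 1.0 - math.exp(-n_hashes * (n_hashes - 1) / (2.0 * space))

# A 32-bit digest collides with near certainty after a million hashes;
# a full 256-bit digest keeps the probability negligible.
for bits in (32, 64, 256):
    p = birthday_collision_probability(bits, 1e6)
    print(f"{bits:3d}-bit digest, 1e6 hashes -> collision probability = {p:.3e}")
```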

Keywords: Cloud computing, encryption algorithm, secure hashing algorithm, brute force attack, birthday attack, plaintext attack, man-in-the-middle attack.
