Search results for: Balanced Incomplete Block Designs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1072


952 An Analysis of Uncoupled Designs in Chicken Egg

Authors: Pratap Sriram Sundar, Chandan Chowdhury, Sagar Kamarthi

Abstract:

Nature has perfected her designs over 3.5 billion years of evolution. Research fields such as biomimicry, biomimetics, bionics, bio-inspired computing, and nature-inspired design have explored nature-made artifacts and systems to understand nature’s mechanisms and intelligence. Learning from nature, researchers have generated sustainable designs and innovations in a variety of fields such as energy, architecture, agriculture, transportation, communication, and medicine. Axiomatic design offers a method to judge whether a design is good. This paper analyzes design aspects of one of nature’s amazing objects: the chicken egg. The functional requirements (FRs) of the object’s components are tabulated and mapped onto the nature-chosen design parameters (DPs). The ‘independence axiom’ of the axiomatic design methodology is applied to analyze couplings and to evaluate whether the egg’s design is good (i.e., uncoupled) or bad (i.e., coupled). The analysis revealed that the egg’s design is a good, i.e., uncoupled, design. This approach can be applied to any of nature’s artifacts to judge whether their design is good or bad. The methodology is valuable for biomimicry studies and can also be very useful for teaching design in biology and bio-inspired innovation.
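
As a quick illustration of the independence axiom mentioned above, the following minimal sketch classifies a design by the structure of its FR-to-DP design matrix: diagonal means uncoupled, triangular means decoupled, anything else coupled. The matrices are generic examples, not the paper's FR/DP mapping for the egg.

    import numpy as np

    def classify_design(A, tol=1e-9):
        """Classify an FR x DP design matrix per axiomatic design's independence axiom."""
        A = np.asarray(A, dtype=float)
        off_diag = A - np.diag(np.diag(A))
        if np.all(np.abs(off_diag) < tol):
            return "uncoupled"        # each FR depends on exactly one DP
        if np.all(np.abs(np.triu(A, 1)) < tol) or np.all(np.abs(np.tril(A, -1)) < tol):
            return "decoupled"        # triangular: FRs can be satisfied in a fixed order
        return "coupled"

    # Generic 3-FR / 3-DP examples (illustrative values only).
    print(classify_design([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # uncoupled
    print(classify_design([[1, 0, 0], [1, 1, 0], [0, 1, 1]]))   # decoupled
    print(classify_design([[1, 1, 0], [1, 1, 0], [0, 1, 1]]))   # coupled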

Keywords: Uncoupled design, axiomatic design, nature design, design evaluation.

951 Reverse Twin Block with Expansion Screw for Treatment of Skeletal Class III Malocclusion in Growing Patient: Case Report

Authors: Alfrina Marwan, Erna Sulistyawati

Abstract:

Class III malocclusion shows both skeletal and dentoalveolar components. Skeletal Class III malocclusion can have variants in different regions, maxillary or mandibular. Skeletal Class III malocclusion should be treated during the growth period to prevent its severity in adulthood. Orthopedic treatment of skeletal Class III malocclusion in a growing patient can be carried out using a reverse twin block with an expansion screw to modify the growth pattern. The objective of this case report was to describe the functional correction of skeletal Class III malocclusion using a reverse twin block with an expansion screw in a growing patient. A patient with a concave profile came with a chief complaint of aesthetic problems. The cephalometric analysis showed that the patient had skeletal Class III malocclusion (ANB -5°, SNA 75°, Wits appraisal -3 mm) with anterior crossbite and deep bite (overjet -3 mm, overbite 6 mm). In this case report, the patient was treated with a reverse twin block appliance with an expansion screw. After three months of treatment, the skeletal problem had been corrected (ANB -1°), and overjet, overbite, and aesthetics were improved. A reverse twin block appliance with an expansion screw can be used as orthopedic treatment for skeletal Class III malocclusion in a growing patient and can improve aesthetics with great satisfaction, which was the main complaint of this patient.

Keywords: Growing patient, maxillary retrognathism, reverse twin blocks, skeletal Class III malocclusion.

950 A 5-V to 30-V Current-Mode Boost Converter with Integrated Current Sensor and Power-on Protection

Authors: Jun Yu, Yat-Hei Lam, Boris Grinberg, Kevin Chai Tshun Chuan

Abstract:

This paper presents a 5-V to 30-V current-mode boost converter for powering the drive circuit of a micro-electro-mechanical sensor. The designs of a transconductance amplifier and an integrated current sensing circuit are presented. In addition, essential building blocks for power-on protection, such as a soft-start and clamp block and a supply and clock ready block, are discussed in detail. The chip is fabricated in a 0.18-μm CMOS process. Measurement results show that the soft-start and clamp block can effectively limit the inrush current during startup and protect the boost converter from startup failure.

Keywords: Boost Converter, Current Sensing, Power-on protection, Step-up Converter, Soft-start.

949 Viability of Rice Husk Ash Concrete Brick/Block from Green Electricity in Bangladesh

Authors: Mohammad A. N. M. Shafiqul Karim

Abstract:

As a developing country, Bangladesh has to face numerous challenges. Self-sufficiency in electricity, contributing to climate change mitigation by reducing carbon emissions, and bringing the backward population of society into the mainstream are especially challenging. Therefore, it is essential to ensure recycled use of local products to the maximum level in every sector. Some private organizations have already worked alongside the government to bring the backward population into the mainstream by developing their financial capacities. Rice husk is the largest single category of the total energy supply in Bangladesh. As part of this strategy, rice husk can play a great role as a promising renewable energy source: it is readily available, has considerable environmental benefits, and can produce electricity while ensuring multiple uses of its byproducts in construction technology. For the first time in Bangladesh, an experimental multidimensional project based on rice husk electricity and Rice Husk Ash (RHA) concrete brick/block under Green Eco-Tech Limited has already been started. Project analysis, opportunity, sustainability, the high monitoring component, limitations and, finally, evaluated data reflecting the viability of establishing more projects using rice husk are discussed in this paper. The use of RHA, the by-product of generating green electricity from rice husk, for making RHA concrete bricks/blocks in the Bangladeshi context is also discussed.

Keywords: Project analysis, rice husk, rice husk ash concrete brick/block, compressive strength of rice husk ash concrete brick/block.

948 Adaptive Motion Estimator Based on Variable Block Size Scheme

Authors: S. Dhahri, A. Zitouni, H. Chaouch, R. Tourki

Abstract:

This paper presents an adaptive motion estimator that can be dynamically reconfigured with the best algorithm depending on the nature of the video, which may vary during the runtime of an application. The 4-Step Search (4SS) and the Gradient Search (GS) algorithms are integrated in the estimator to handle rapid and slow video sequences, respectively. The Full Search Block Matching (FSBM) algorithm has also been integrated for video sequences that are not real-time oriented. In order to efficiently reduce the computational cost while achieving better visual quality at low power cost, the proposed motion estimator is based on a Variable Block Size (VBS) scheme that uses only the 16x16, 16x8, 8x16 and 8x8 modes. Experimental results show that the adaptive motion estimator gives better results in terms of Peak Signal to Noise Ratio (PSNR), computational cost, occupied FPGA area, and dissipated power relative to the most popular variable block size schemes presented in the literature.

Keywords: H264, Configurable Motion Estimator, Variable Block Size, PSNR, Dissipated power.

947 A New Application of Stochastic Transformation

Authors: Nilar Win Kyaw

Abstract:

In cryptography, confusion and diffusion are very important for achieving confidentiality and privacy of messages in block ciphers and stream ciphers. There are two types of networks that provide the confusion and diffusion properties of a message in block ciphers: the Substitution-Permutation network (S-P network) and the Feistel network. NLFS (Non-Linear Feedback Stream cipher) is a fast and secure stream cipher for software applications. NLFS has two modes: a basic (synchronous) mode and a self-synchronous mode. Real random numbers are non-deterministic. The R-box (random box) is based on dynamic properties and performs a stochastic transformation of data, which can be used effectively to protect information from intentional destructive impacts. In this paper, a new implementation of stochastic transformation is proposed.

Keywords: S-P network, Feistel network, R-block, stochastic transformation

946 Performance Analysis of Quantum Cascaded Lasers

Authors: M. B. El_Mashade, I. I. Mahamoud, M. S. El_Tokhy

Abstract:

Improving the performance of the QCL through block diagram as well as mathematical models is the main scope of this paper. In order to enhance the performance of the underlying device, the mathematical model parameters are used in a reliable manner in such a way that the optimum behavior is achieved. These parameters play the central role in specifying the optical characteristics of the considered laser source. Moreover, it is important to have a large amount of radiated power, where increasing the amount of radiated power represents the main hopping process that can be predicted from the behavior of quantum laser devices. It was found that there is good agreement between the values calculated from our mathematical model and those obtained with VisSim and experimental results. These results demonstrate the strength of implementing both mathematical and block diagram models.

Keywords: Quantum Cascaded Lasers (QCLs), Modeling, Block Diagram Programming, Intersubband transitions

945 A New Method to Estimate the Low Income Proportion: Monte Carlo Simulations

Authors: Encarnación Álvarez, Rosa M. García-Fernández, Juan F. Muñoz

Abstract:

Estimation of a proportion has many applications in economics and social studies. A common application is the estimation of the low income proportion, which gives the proportion of people classified as poor within a population. In this paper, we present this poverty indicator and propose the use of the logistic regression estimator for the problem of estimating the low income proportion. Various sampling designs are presented. Using a real data set obtained from the European Survey on Income and Living Conditions, Monte Carlo simulation studies are carried out to analyze the empirical performance of the logistic regression estimator under the various sampling designs considered in this paper. Results derived from these Monte Carlo simulation studies indicate that the logistic regression estimator can be more accurate than the customary estimator under the sampling designs considered. The stratified sampling design can also provide more accurate results.
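
For readers unfamiliar with the setup, the minimal sketch below shows what a Monte Carlo study of a low income proportion estimator looks like. It uses hypothetical synthetic incomes (not the EU-SILC data used in the paper) and only the customary sample-proportion estimator under simple random sampling; the authors' logistic regression estimator is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical synthetic income population (stand-in for survey data).
    N = 100_000
    income = rng.lognormal(mean=9.5, sigma=0.7, size=N)
    threshold = 0.6 * np.median(income)        # common "at-risk-of-poverty" line
    true_p = np.mean(income < threshold)       # true low income proportion

    def srs_estimate(n):
        """Customary estimator (sample proportion) under simple random sampling."""
        sample = rng.choice(income, size=n, replace=False)
        return np.mean(sample < threshold)

    # Monte Carlo study: repeat sampling and report empirical accuracy.
    R, n = 1000, 500
    estimates = np.array([srs_estimate(n) for _ in range(R)])
    rmse = np.sqrt(np.mean((estimates - true_p) ** 2))
    print(f"true p = {true_p:.4f}")
    print(f"mean estimate = {estimates.mean():.4f}, RMSE = {rmse:.4f}")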

Keywords:

944 New VLSI Architecture for Motion Estimation Algorithm

Authors: V. S. K. Reddy, S. Sengupta, Y. M. Latha

Abstract:

This paper presents an efficient VLSI architecture design to achieve real-time video processing using the Full-Search Block Matching (FSBM) algorithm. The design employs a parallel bank architecture with minimum latency, maximum throughput, and full hardware utilization. We use nine parallel processors in our architecture, each controlled by a state machine. The state machine control implementation makes the design very simple and cost effective. The design is implemented in VHDL, and the programming techniques we incorporated make the design completely programmable in the sense that the search ranges and the block sizes can be varied to suit any given requirements. The design can operate at frequencies up to 36 MHz, and it can handle QCIF and CIF video resolutions at 1.46 MHz and 5.86 MHz, respectively.
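
As a reference for the algorithm being accelerated, here is a minimal software sketch of full-search block matching: every candidate displacement within the search range is evaluated with the sum of absolute differences (SAD) and the best one is kept. The block size and search range below are illustrative, not values from the paper.

    import numpy as np

    def fsbm(ref, cur, bx, by, block=16, search=7):
        """Full-search block matching: exhaustively test every displacement
        in [-search, +search] and return the motion vector with minimum SAD."""
        h, w = ref.shape
        cur_blk = cur[by:by+block, bx:bx+block].astype(np.int32)
        best_mv, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + block > h or x + block > w:
                    continue  # candidate block falls outside the reference frame
                sad = np.abs(ref[y:y+block, x:x+block].astype(np.int32) - cur_blk).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dx, dy)
        return best_mv, best_sad

    # Tiny usage example with a synthetic shifted frame.
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, 3), axis=(0, 1))   # current frame is a shifted copy
    print(fsbm(ref, cur, bx=16, by=16))             # recovers (-3, -2) with SAD 0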

Keywords: Video Coding, Motion Estimation, Full-Search, Block-Matching, VLSI Architecture.

943 A Preliminary Study for Design of Automatic Block Reallocation Algorithm with Genetic Algorithm Method in the Land Consolidation Projects

Authors: Tayfun Çay, Yaşar İnceyol, Abdurrahman Özbeyaz

Abstract:

Land reallocation is one of the most important steps in land consolidation projects. Many different models have been proposed for land reallocation in the literature, such as fuzzy logic, block-priority-based land reallocation, and spatial decision support systems. A model comprising four parts is considered for automatic block reallocation with the genetic algorithm method in land consolidation projects. These stages are: preparing data tables for a project area, determining the conditions and constraints of land reallocation, designing the command steps and logical flow chart of the reallocation algorithm, and finally writing the program code of the genetic algorithm. In this study, we designed the first three of these four steps.

Keywords: Genetic algorithm, land consolidation, landholding, land reallocation.

942 An Optimized Multi-block Method for Turbulent Flows

Authors: M. Goodarzi, P. Lashgari

Abstract:

In many turbulent flows, a major part of the flow field involves no complicated turbulent behavior. In this research work, in order to reduce the required memory and CPU time, the flow field was decomposed into several blocks, each block with its own turbulence model. A two-dimensional backward-facing step was considered here. Four combinations of the Prandtl mixing length and standard k-ε models were implemented as well. Computer memory and CPU time consumption, in addition to numerical convergence and accuracy of the obtained results, were mainly investigated. Observations showed that a suitable combination of turbulence models in different blocks led to results with the same accuracy as using the high order turbulence model for all of the blocks, in addition to reductions in memory and CPU time consumption.

Keywords: Computer memory, CPU time, Multi-block method, Turbulence modeling.

941 Two Lessons Learnt in Defining Intersections and Interfaces in Numerical Modeling with Plaxis

Authors: Mahdi Sadeghian, Somaye Sadeghian, Reza Dinarvand

Abstract:

This paper discusses two issues encountered in using PLAXIS. Both issues were observed during the application of PLAXIS to estimate excavation-induced displacement. Column Soil Mixing (CSM) was applied to stabilise the excavation. It was understood that the estimated excavation-induced deformation at the top of the CSM blocks highly depends on the material type defining the pavement material adjacent to the CSM blocks. Cohesive material for the pavement results in an unrealistic connection between the pavement and the CSM even when an interface element is defined. To find the most realistic approach, the interface was defined in three different manners: (1) no interface elements were applied, (2) a non-cohesive soil layer was defined between the pavement and the CSM block to represent the friction between these materials, and (3) the built-in interface elements in PLAXIS were used to define the boundary between the pavement and the CSM block. The results showed that option 2 gives the more realistic results. The second issue was in the modelling of the contact line between the CSM block and an inclined layer underneath. The analysis results showed that the excavation-induced deformation highly depends on how the PLAXIS user defines the contact area. It was understood that if the contact area is defined as a point at which the CSM block intersects the layer underneath, the estimated lateral displacement of the CSM block is unrealistically lower than in the model in which the contact area is defined as a line.

Keywords: PLAXIS, FEM, CSM, excavation-induced deformation.

940 Applications of Rough Set Decompositions in Information Retrieval

Authors: Chen Wu, Xiaohua Hu

Abstract:

This paper proposes rough set models with three different levels of knowledge granules in an incomplete information system under a tolerance relation defined by the similarity between objects according to their attribute values. By introducing a dominance relation on the universe of discourse to decompose similarity classes into three subclasses (a little-better subclass, a little-worse subclass, and a vague subclass), it dismantles the lower and upper approximations into three components. By using these components, retrieving information to find naturally hierarchical expansions to queries and constructing answers to elaborative queries can be made effective. The paper illustrates the approach by applying the rough set models in the design of an information retrieval system that accesses documents expanded at different granularities. The proposed method enhances the application of rough set models through the flexibility of expansions and elaborative queries in information retrieval.
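
As background, the sketch below shows one common way a tolerance relation is defined for an incomplete information system: two objects are tolerant when they agree on every attribute for which both values are known, with missing values (written None here) matching anything. The toy table is hypothetical, and the paper's three-level decomposition by a dominance relation is not reproduced.

    # Toy incomplete information system: rows are objects, None marks a missing value.
    table = {
        "o1": {"a1": 1, "a2": None, "a3": 2},
        "o2": {"a1": 1, "a2": 0,    "a3": 2},
        "o3": {"a1": 3, "a2": 0,    "a3": None},
    }
    attrs = ["a1", "a2", "a3"]

    def tolerant(x, y):
        """x and y are tolerant if they agree wherever both values are known."""
        return all(
            table[x][a] is None or table[y][a] is None or table[x][a] == table[y][a]
            for a in attrs
        )

    # Tolerance class of each object (all objects tolerant to it, including itself).
    for obj in table:
        cls = [other for other in table if tolerant(obj, other)]
        print(obj, "->", cls)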

Keywords: Incomplete information system, Rough set model, tolerance relation, dominance relation, approximation, decomposition, elaborative query.

939 A Visualized Framework for Representing Uncertain and Incomplete Temporal Knowledge

Authors: Yue Wang, Jixin Ma, Brian Knight

Abstract:

This paper presents a visualized computer-aided case tool for non-experts, called Visual Time, for representing and reasoning about incomplete and uncertain temporal information. It is both expressive and versatile, allowing logical conjunctions and disjunctions of both absolute and relative temporal relations, such as “Before”, “Meets”, “Overlaps”, “Starts”, “During”, and “Finishes”. In terms of a visualized framework, Visual Time provides a user-friendly environment for describing scenarios with rich temporal structure in natural language, which can be formatted as structured temporal phrases and modeled in terms of Temporal Relationship Diagrams (TRD). A TRD can be automatically and visually transformed into a corresponding Time Graph, supported by an automatic consistency checker that derives a verdict to confirm whether a given scenario is temporally consistent or inconsistent.

Keywords: Time Visualization, Uncertainty, Incompleteness, Consistency Checking.

938 Balancing of Quad Tree using Point Pattern Analysis

Authors: Amitava Chakraborty, Sudip Kumar De, Ranjan Dasgupta

Abstract:

The point quad tree is considered one of the most common data organizations for dealing with spatial data and can be used to increase the efficiency of searching for point features. As the efficiency of the searching technique depends on the height of the tree, arbitrary insertion of the point features may make the tree unbalanced and lead to longer search times. This paper attempts to design an algorithm to build a nearly balanced quad tree. A point pattern analysis technique has been applied for this purpose, which shows a significant enhancement of the performance, and the results are also included in the paper for the sake of completeness.
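
For context, here is a minimal sketch of a point quad tree: each node stores one point and splits the plane into four quadrants, so the order in which points are inserted determines the height of the tree. The balancing strategy based on point pattern analysis proposed in the paper is not reproduced; the median-first insertion order below is only an illustration of how ordering affects balance.

    class Node:
        """A point quad tree node: one point and four quadrant children (NE, NW, SW, SE)."""
        def __init__(self, x, y):
            self.x, self.y = x, y
            self.children = [None, None, None, None]

    def quadrant(node, x, y):
        # 0: NE, 1: NW, 2: SW, 3: SE relative to the node's point
        if x >= node.x:
            return 0 if y >= node.y else 3
        return 1 if y >= node.y else 2

    def insert(root, x, y):
        if root is None:
            return Node(x, y)
        q = quadrant(root, x, y)
        root.children[q] = insert(root.children[q], x, y)
        return root

    def height(node):
        if node is None:
            return 0
        return 1 + max(height(c) for c in node.children)

    def median_order(points):
        # Recursively pick the median point first so each subtree gets roughly half the points.
        if not points:
            return []
        points = sorted(points)
        mid = len(points) // 2
        return [points[mid]] + median_order(points[:mid]) + median_order(points[mid + 1:])

    pts = [(i, i) for i in range(15)]
    skewed = None
    for p in pts:
        skewed = insert(skewed, *p)
    print("height with sorted insertion:", height(skewed))        # 15, fully degenerate

    balanced = None
    for p in median_order(pts):
        balanced = insert(balanced, *p)
    print("height with median-first insertion:", height(balanced))  # 4, nearly balanced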

Keywords: Algorithm, Height balanced tree, Point pattern analysis, Point quad tree.

937 A Complexity-Based Approach in Image Compression using Neural Networks

Authors: Hadi Veisi, Mansour Jamzad

Abstract:

In this paper, we present an adaptive method for image compression that is based on the complexity level of the image. The basic compressor/de-compressor structure of this method is a multilayer perceptron artificial neural network. In the adaptive approach, different back-propagation artificial neural networks are used as compressor and de-compressor; this is done by dividing the image into blocks, computing the complexity of each block, and then selecting one network for each block according to its complexity value. Three complexity measures, called Entropy, Activity, and Pattern-based, are used to determine the level of complexity in image blocks, and their ability in complexity estimation is evaluated and compared. In training and evaluation, each image block is assigned to a network based on its complexity value. Best-SNR is another alternative for selecting the compressor network for image blocks in the evaluation phase, which chooses the trained network that yields the best SNR in compressing the input image block. In our evaluations, the best results are obtained when overlapping of the blocks is allowed and the networks in the compressor are chosen based on Best-SNR. In this case, the results demonstrate the superiority of this method compared with previous similar works and the JPEG standard coding.
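
As an illustration of the block-complexity idea, the sketch below divides an image into 8x8 blocks and scores each with a histogram entropy measure; blocks would then be routed to different compressor networks by thresholding this score. The block size and thresholds are assumptions for the example, and the Activity and Pattern-based measures from the paper are not reproduced.

    import numpy as np

    def block_entropy(block):
        """Shannon entropy of the gray-level histogram of one 8-bit image block."""
        hist = np.bincount(block.ravel(), minlength=256).astype(float)
        p = hist[hist > 0] / hist.sum()
        return float(-(p * np.log2(p)).sum())

    def route_blocks(image, block=8, thresholds=(2.0, 4.0)):
        """Assign each block to a 'low', 'medium' or 'high' complexity compressor."""
        h, w = image.shape
        assignment = {}
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                e = block_entropy(image[by:by+block, bx:bx+block])
                level = "low" if e < thresholds[0] else "medium" if e < thresholds[1] else "high"
                assignment[(by, bx)] = (e, level)
        return assignment

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
    for (by, bx), (e, level) in list(route_blocks(img).items())[:4]:
        print(f"block at ({by},{bx}): entropy={e:.2f} -> {level} compressor")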

Keywords: Adaptive image compression, Image complexity, Multi-layer perceptron neural network, JPEG Standard, PSNR.

936 Balanced and Unbalanced Voltage Sag Mitigation Using DSTATCOM with Linear and Nonlinear Loads

Authors: H. Nasiraghdam, A. Jalilian

Abstract:

The DSTATCOM is one of the devices used for voltage sag mitigation in power systems. In this paper, a new control method for balanced and unbalanced voltage sag mitigation using a DSTATCOM is proposed. The control system has two loops in order to regulate the compensator current and the load voltage. Delayed signal cancellation has been used for sequence separation. The compensator should protect sensitive loads against different types of voltage sag. The performance of the proposed method is investigated under different types of voltage sags for linear and nonlinear loads. Simulation results show appropriate operation of the proposed control system.

Keywords: Custom power, power quality, voltage sag mitigation, current vector control.

935 Effective Class of Discrete Programming Problems

Authors: Kaziyev G. Z., Nabiyeva G. S., Kalizhanova A.U.

Abstract:

We present herein a concise view of discrete programming models and methods, and we conduct an analysis of these models and methods. On the basis of discrete programming models, a new class of problems has been elaborated and offered, i.e., block-symmetry models and methods for the statement and solution of applied tasks.

Keywords: Discrete programming, block-symmetry, analysis methods, information systems development.

934 Performance Management of Tangible Assets within the Balanced Scorecard and Interactive Business Decision Tools

Authors: Raymond K. Jonkers

Abstract:

The present study investigated approaches and techniques to enhance strategic management governance and decision making within the framework of a performance-based balanced scorecard. The review of best practices from strategic, program, process, and systems engineering management provided for a holistic approach toward effective outcome-based capability management. One technique, based on factorial experimental design methods, was used to develop an empirical model. This model predicts the degree of capability effectiveness and is dependent on controlled system input variables and their weightings. These variables represent business performance measures, captured within a strategic balanced scorecard. The weighting of these measures enhances the ability to quantify causal relationships within balanced scorecard strategy maps. The focus in this study was on the performance of tangible assets within the scorecard rather than the traditional approach of assessing the performance of intangible assets such as knowledge and technology. Tangible assets are represented in this study as physical systems, which may be thought of as being aboard a ship or within a production facility. The measures assigned to these systems include project funding for upgrades against demand, system certifications achieved against those required, preventive maintenance to corrective maintenance ratios, and material support personnel capacity against that required for supporting the respective systems. The resultant scorecard is viewed as complementary to the traditional balanced scorecard for program and performance management. The benefits of these scorecards are realized through the quantified state of operational capabilities or outcomes. These capabilities are also weighted in terms of priority for each distinct system measure and are aggregated and visualized in terms of the overall state of capabilities achieved. This study proposes the use of interactive controls within the scorecard as a technique to enhance the development of alternative solutions in decision making. These interactive controls include those for assigning capability priorities and for adjusting system performance measures, thus providing for what-if scenarios and options in strategic decision-making. In this holistic approach to capability management, several cross-functional processes were highlighted as relevant amongst the different management disciplines. In terms of assessing an organization’s ability to adopt this approach, consideration was given to the P3M3 management maturity model.
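
A minimal sketch of the weighting-and-aggregation idea described above, using hypothetical measure values and weights rather than the authors' empirical model: each tangible-asset measure is normalized, weighted by priority, and rolled up into an overall capability score, and the values can be adjusted interactively for what-if analysis.

    # Hypothetical scorecard measures for one system (values and weights are illustrative).
    measures = {
        "upgrade_funding_vs_demand":       {"value": 0.70, "weight": 0.30},
        "certifications_achieved_vs_reqd": {"value": 0.90, "weight": 0.25},
        "preventive_to_corrective_ratio":  {"value": 0.55, "weight": 0.25},
        "support_capacity_vs_required":    {"value": 0.80, "weight": 0.20},
    }

    def capability_score(measures):
        """Weighted aggregate of normalized measures (weights assumed to sum to 1)."""
        return sum(m["value"] * m["weight"] for m in measures.values())

    print(f"baseline capability: {capability_score(measures):.2f}")

    # What-if: raise preventive maintenance performance and observe the aggregate effect.
    measures["preventive_to_corrective_ratio"]["value"] = 0.75
    print(f"after improvement:   {capability_score(measures):.2f}")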

Keywords: Outcome based management, performance management, lifecycle costs, balanced scorecard.

933 Fibers Presence Effects on Air Flow of Attenuator of Spun-Bond Production System

Authors: Nasser Ghassembaglou, Abdullah Bolek, Oktay Yilmaz, Ertan Oznergiz, Hikmet Kocabas, Safak Yilmaz

Abstract:

Different designs of attenuator systems have been studied in this research. New analyses have been carried out on existing designs, considering the effect of fibers on the air flow; it was found that, in the presence of fibers, an air flow arises which agglomerates the fibers, which is a negative effect. Therefore, some new designs have been devised and CFD analysis has been performed on them. Afterwards, one of these designs was selected as the most effective and optimal, and it is presented in this paper.

Keywords: Attenuator, CFD, nanofiber, spun-bond.

932 An Improved Variable Tolerance RSM with a Proportion Threshold

Authors: Chen Wu, Youquan Xu, Dandan Li, Ronghua Yang, Lijuan Wang

Abstract:

In rough set models, the tolerance relation, similarity relation, and limited tolerance relation solve problems in different situations for incomplete information systems in which there exists the phenomenon of missing values. If two objects have the same few known attributes and more unknown attributes, these relations cannot distinguish them well. In order to solve this problem, we previously presented two improved limited and variable precision rough set models, one symmetric and the other non-symmetric. Both use a more stringent condition to separate two small-probability equivalent objects into different classes, and both need further study in detail. In the present paper, we newly form object classes from a different perspective compared with the first suggested model, and we overcome the disadvantage of non-symmetry of the second suggested model. We discuss the relationships between and among several models and also carry out rule generation. The results obtained by applying the second model are more accurate and reasonable.

Keywords: Incomplete information system, rough set, symmetry, variable precision.

931 Learning Block Memories with Metric Networks

Authors: Mario Gonzalez, David Dominguez, Francisco B. Rodriguez

Abstract:

An attractor neural network on a small-world topology is studied. A learning pattern is presented to the network, then a stimulus carrying local information is applied to the neurons, and the retrieval of a block-like structure is investigated. Synaptic noise decreases the memory capability. The change of stability from local to global attractors is shown to depend on the long-range character of the network connectivity.

Keywords: Hebbian learning, image recognition, small world, spatial information.

930 Modelling Sudoku Puzzles as Block-world Problems

Authors: Cecilia Nugraheni, Luciana Abednego

Abstract:

Sudoku is a kind of logic puzzle. Each puzzle consists of a board, a 9×9 grid of cells divided into nine 3×3 subblocks, and a set of numbers from 1 to 9. The aim of the puzzle is to fill in every cell of the board with a number from 1 to 9 such that every row, every column, and every subblock contains each number exactly once. Sudoku puzzles belong to the class of combinatorial (NP-complete) problems. They can be solved by a variety of techniques/algorithms such as genetic algorithms, heuristics, integer programming, and so on. In this paper, we propose a new approach for solving Sudoku, which is to model the puzzles as block-world problems. In block-world problems, there are a number of boxes on a table in a particular order or arrangement, and the objective is to change this arrangement into a targeted arrangement with the help of two types of robots. In this paper, we present three models for Sudoku. We model Sudoku as parameterized multi-agent systems; a parameterized multi-agent system is a multi-agent system consisting of several uniform/similar agents, where the number of agents is stated as the parameter of the system. We use the Temporal Logic of Actions (TLA) for formalizing our models.
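
The row/column/subblock rule stated above is easy to see in code; the sketch below only checks whether a completed 9×9 grid satisfies it, and is not the block-world or TLA modelling proposed in the paper.

    def is_valid_solution(grid):
        """grid: 9x9 list of lists with values 1..9. Returns True if every row,
        column and 3x3 subblock contains each number exactly once."""
        target = set(range(1, 10))
        rows = all(set(row) == target for row in grid)
        cols = all(set(grid[r][c] for r in range(9)) == target for c in range(9))
        blocks = all(
            set(grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)) == target
            for br in range(0, 9, 3) for bc in range(0, 9, 3)
        )
        return rows and cols and blocks

    # A valid grid built from a standard cyclic-shift pattern, then a deliberately broken copy.
    base = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    good = [[base[(r * 3 + r // 3 + c) % 9] for c in range(9)] for r in range(9)]
    bad = [row[:] for row in good]
    bad[0][0], bad[0][1] = bad[0][1], bad[0][0]   # swapping two cells breaks two columns
    print(is_valid_solution(good), is_valid_solution(bad))  # True False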

Keywords: Sudoku puzzle, block world problem, parameterized multi agent systems modelling, Temporal Logic of Actions.

929 A Conservative Multi-block Algorithm for Two-dimensional Numerical Model

Authors: Yaoxin Zhang, Yafei Jia, Sam S.Y. Wang

Abstract:

A multi-block algorithm and its implementation in the two-dimensional finite element numerical model CCHE2D are presented. In addition to the conventional Lagrangian Interpolation Method (LIM), a novel interpolation method, called the Consistent Interpolation Method (CIM), is proposed for more accurate information transfer across the interfaces. The consistent interpolation solves the governing equations over auxiliary elements constructed around the interpolation nodes using the same numerical scheme used for the internal computational nodes. With the CIM, momentum conservation can be maintained as well as mass conservation. An imbalance correction scheme is used to enforce the conservation laws (mass and momentum) across the interfaces. Comparisons of the LIM and the CIM are made using several flow simulation examples. It is shown that the proposed CIM is physically more accurate and produces satisfactory results efficiently.

Keywords: Multi-block algorithm, conservation, interpolation, numerical model, flow simulation.

928 Heterogeneous Attribute Reduction in Noisy System based on a Generalized Neighborhood Rough Sets Model

Authors: Siyuan Jing, Kun She

Abstract:

Neighborhood Rough Sets (NRS) have been proven to be an efficient tool for heterogeneous attribute reduction. However, most research has focused on dealing with complete and noiseless data. In fact, most information systems are noisy, namely, filled with incomplete data and inconsistent data. In this paper, we introduce a generalized neighborhood rough sets model, called VPTNRS, to deal with the problem of heterogeneous attribute reduction in noisy systems. We generalize the classical NRS model with a tolerance neighborhood relation and probabilistic theory. Furthermore, we use the neighborhood dependency to evaluate the significance of a subset of heterogeneous attributes and construct a forward greedy algorithm for attribute reduction based on it. Experimental results show that the model is efficient in dealing with noisy data.
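
A generic sketch of the forward greedy selection loop mentioned above, in which attributes are added one at a time, always picking the one that most increases the dependency of the decision on the selected subset, until no attribute improves it. The dependency function here is a toy placeholder; the paper's VPTNRS neighborhood dependency is not reproduced.

    def greedy_reduct(attributes, dependency):
        """Forward greedy attribute reduction driven by a dependency measure.
        `dependency(subset)` must return a number in [0, 1]; higher is better."""
        selected, best = [], dependency([])
        while True:
            gains = {a: dependency(selected + [a]) for a in attributes if a not in selected}
            if not gains:
                break
            a_star = max(gains, key=gains.get)
            if gains[a_star] <= best:          # stop when no attribute adds significance
                break
            selected.append(a_star)
            best = gains[a_star]
        return selected

    # Toy dependency: pretend attributes 'a' and 'c' together explain the decision fully.
    toy = lambda s: (0.6 if "a" in s else 0.0) + (0.4 if "c" in s else 0.0)
    print(greedy_reduct(["a", "b", "c", "d"], toy))   # ['a', 'c']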

Keywords: attribute reduction, incomplete data, inconsistent data, tolerance neighborhood relation, rough sets

927 Investigation of Increasing the Heat Transfer from Flat Surfaces Using Boundary Layer Excitation

Authors: M. H. Ghaffari

Abstract:

The present study is concerned with the effect of exciting the boundary layer on the increase in heat transfer from flat surfaces. As any increase in heat transfer between a fluid inside a surface and another one outside of it can increase the efficiency of some equipment, in the present work we have tried to increase the wall's heat transfer coefficient by exciting the fluid boundary layer. Through a collision between the flow and a block placed in the fluid path, the flow pattern and the boundary layer stability change. The flow path inside the channel is simulated as a 2- and 3-dimensional channel with the Gambit software. Studying the results obtained from this simulation of the flow path inside the channel with a block, using the Fluent software, it is determined that the shape and dimensions of the exciter are very important for exciting the boundary layer, such that any increase in the block dimensions perpendicular to the flow, and any reduction of its dimensions along the flow direction, can increase the average heat transfer coefficient from the flat surface while increasing the flow pressure loss. Using 2- and 3-dimensional analyses of exciting the flow in the flow path inside a channel with a cylindrical block, simultaneously with the external flow, we came to the conclusion that the heat flux transferred from the surface increases considerably compared with the condition without excitation. The k-ε turbulence model is used.

Keywords: Cooling, Heat transfer, Turbulence, Exciting boundary layer.

926 Real Time Object Tracking in H.264/ AVC Using Polar Vector Median and Block Coding Modes

Authors: T. Kusuma, K. Ashwini

Abstract:

This paper presents a real-time video surveillance system which is capable of tracking multiple real-time objects using the Polar Vector Median (PVM) and Block Coding Modes (BCM) with Global Motion Compensation (GMC). This strategy works in the compressed domain and utilizes the motion vectors and BCM from the compressed bit stream to perform real-time object tracking. We propose to do this on the basis of the neighboring Motion Vectors (MVs) using a method called PVM. Since global motion adds to the object's native motion, for accurate tracking it is important to remove global motion from the MV field prior to further processing. The proposed method is tested on a number of standard sequences and the results show its advantages over some current modern methods.

Keywords: Block coding mode, global motion compensation, object tracking, polar vector median, video surveillance.

925 A Hybrid Method for Determination of Effective Poles Using Clustering Dominant Pole Algorithm

Authors: Anuj Abraham, N. Pappa, Daniel Honc, Rahul Sharma

Abstract:

In this paper, an analysis of some model order reduction techniques is presented. A new hybrid algorithm for model order reduction of linear time invariant systems is compared with the conventional techniques, namely Balanced Truncation, Hankel Norm reduction and the Dominant Pole Algorithm (DPA). The proposed hybrid algorithm, known as the Clustering Dominant Pole Algorithm (CDPA), is able to compute the full set of dominant poles and their cluster centers efficiently. The dominant poles of a transfer function are specific eigenvalues of the state space matrix of the corresponding dynamical system. The effectiveness of this novel technique is shown through simulation results.
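
For background on the term "dominant pole", the sketch below ranks the eigenvalues of a small state-space model by a commonly used dominance measure, the magnitude of the pole's residue divided by the magnitude of its real part. The example matrices are arbitrary, and the DPA/CDPA iteration itself is not reproduced.

    import numpy as np

    # Arbitrary small SISO state-space model (A, B, C); values are illustrative only.
    A = np.array([[-1.0,  5.0,   0.0],
                  [-5.0, -1.0,   0.0],
                  [ 0.0,  0.0, -20.0]])
    B = np.array([[1.0], [0.0], [1.0]])
    C = np.array([[1.0, 1.0, 1.0]])

    lam, V = np.linalg.eig(A)          # poles and right eigenvectors
    W = np.linalg.inv(V).conj().T      # columns give (scaled) left eigenvectors

    dominance = []
    for i in range(len(lam)):
        v = V[:, i:i+1]
        w = W[:, i:i+1]
        residue = (C @ v) * (w.conj().T @ B) / (w.conj().T @ v)   # residue of pole lam[i]
        dominance.append((abs(residue.item()) / abs(lam[i].real), lam[i]))

    for score, pole in sorted(dominance, key=lambda t: t[0], reverse=True):
        print("pole", np.round(pole, 3), "dominance", round(score, 3))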

Keywords: Balanced truncation, Clustering, Dominant pole, Hankel norm, Model reduction.

924 Signing the First Packet in Amortization Scheme for Multicast Stream Authentication

Authors: Mohammed Shatnawi, Qusai Abuein, Susumu Shibusawa

Abstract:

Signature amortization schemes have been introduced for authenticating multicast streams, in which a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each block being a number of packets; the signature packet in these schemes is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments. The main effective factor of amortization schemes is their hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks; other studies show that signing the last packet reduces the sender's delay. To our knowledge, there are no studies that show which is better, signing the first or the last packet, in terms of authentication probability and resistance to packet loss. In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF), appends the hash values of specific packets to other packets, then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of the receiver's delay. We discuss and evaluate the performance of our proposed scheme against those that sign the last packet of the block.
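
A minimal sketch of the general sign-the-first-packet amortization idea described above (a simple single chain, not the specific MCF construction): each payload carries the hash of the next packet, and only the first packet of the block is signed, so verifying that one signature lets the receiver authenticate the rest as they arrive. An HMAC stands in for a real public-key signature to keep the sketch self-contained.

    import hashlib, hmac

    SECRET = b"demo-key"   # stand-in for a real signing key pair

    def sign(data: bytes) -> bytes:
        # HMAC used here only as a placeholder for a public-key signature.
        return hmac.new(SECRET, data, hashlib.sha256).digest()

    def build_block(payloads):
        """Chain hashes backwards: packet i carries hash(packet i+1); the first packet is signed."""
        packets, next_hash = [], b""
        for payload in reversed(payloads):
            body = payload + next_hash
            packets.append(body)
            next_hash = hashlib.sha256(body).digest()
        packets.reverse()
        signature = sign(packets[0])          # single signature amortized over the block
        return packets, signature

    def verify_block(packets, signature):
        if not hmac.compare_digest(sign(packets[0]), signature):
            return False
        for i in range(len(packets) - 1):
            expected = packets[i][-32:]       # trailing 32 bytes hold the next packet's hash
            if hashlib.sha256(packets[i + 1]).digest() != expected:
                return False
        return True

    pkts, sig = build_block([b"p1", b"p2", b"p3", b"p4"])
    print(verify_block(pkts, sig))            # True
    pkts[2] = b"tampered" + pkts[2][-32:]
    print(verify_block(pkts, sig))            # False: the chain detects the modification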

Keywords: multicast stream authentication, hash chain construction, signature amortization, authentication probability.

923 The Application of an Experimental Design for the Defect Reduction of Electrodeposition Painting on Stainless Steel Washers

Authors: Chansiri Singhtaun, Nattaporn Prasartthong

Abstract:

The purpose of this research is to reduce the amount of incomplete coating of stainless steel washers in the electrodeposition painting process by using an experimental design technique. Surface preparation was found to be a major factor affecting painted surface quality. The influence of the pretreatment and painting process parameters, namely cleaning time, chemical concentration and shape of hanger, was studied. A 2³ factorial design with two replications was performed. The analysis of variance for the designed experiment showed the great influence of cleaning time and shape of hanger. From this study, the optimized cleaning time was determined, and a newly designed electrically conductive hanger was proved to be superior to the original one. The experimental verification results showed that the amount of incomplete coating defects decreased from 4% to 1.02% and the operation cost decreased by 10.5%.
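
To illustrate the 2³ factorial analysis mentioned above, the sketch below builds the eight-run design for the three factors (cleaning time, chemical concentration, hanger shape), duplicates it for two replications, and estimates the main effects from hypothetical defect-rate responses. The numbers are made up for illustration and are not the study's data.

    from itertools import product

    factors = ["cleaning_time", "chemical_conc", "hanger_shape"]

    # Eight runs of the 2^3 design, coded -1 / +1, each run replicated twice.
    runs = [dict(zip(factors, levels)) for levels in product([-1, 1], repeat=3)]
    design = [run for run in runs for _ in range(2)]

    # Hypothetical % of incomplete coating for each of the 16 observations.
    response = [4.1, 3.9, 2.2, 2.4, 3.8, 4.0, 2.0, 2.1,
                3.0, 3.2, 1.3, 1.1, 2.9, 3.1, 1.0, 1.2]

    def main_effect(factor):
        """Average response at the high level minus average at the low level."""
        hi = [y for run, y in zip(design, response) if run[factor] == 1]
        lo = [y for run, y in zip(design, response) if run[factor] == -1]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    for f in factors:
        print(f"{f}: effect = {main_effect(f):+.2f}")   # cleaning time and hanger shape dominate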

Keywords: Defect reduction, design of experiments, electrodeposition painting, stainless steel.
