Search results for: moment computation.
264 Column Size for R.C. Frames with High Drift
Authors: Sunil S. Mayengbam, S. Choudhury
Abstract:
A method to predict the column size for displacement-based design of reinforced concrete frame buildings with a high target inter-story drift is reported here. The column depth is derived from an empirical relation as a function of the given beam section, the target inter-story drift, the building plan features, and common displacement-based design parameters. To meet the high drift requirement, a minimum column-beam moment capacity ratio is maintained during capacity design. The method is used to design four-, eight- and twelve-story frame buildings with displacement-based design for a three percent target inter-story drift. Nonlinear time history analyses of the designed buildings are performed under five artificial ground motions; the columns remain elastic enough to avoid a column sway mechanism, showing that the predicted column sizes can be used with no or only minor changes.
Keywords: Column size, point of contraflexure, displacement-based design, capacity design.
263 Underlying Cognitive Complexity Measure Computation with Combinatorial Rules
Authors: Benjapol Auprasert, Yachai Limpiyakorn
Abstract:
Measuring the complexity of software has been an intractable problem in software engineering. Complexity measures can be used to predict critical information about the testability, reliability, and maintainability of software systems from automatic analysis of the source code. During the past few years, many complexity measures have been invented based on the emerging Cognitive Informatics discipline. These software complexity measures, including cognitive functional size, follow the approach of totaling the cognitive weights of basic control structures such as loops and branches. This paper shows that the current calculation method can generate different results that are nevertheless algebraically equivalent. However, analysis of the combinatorial meaning of this calculation method reveals a significant flaw of the measure, which also explains why it does not satisfy Weyuker's properties. Based on the findings, improvement directions, such as measure fusion and a cumulative variable counting scheme, are suggested to enhance the effectiveness of cognitive complexity measures.
Keywords: Cognitive Complexity Measure, Cognitive Weight of Basic Control Structure, Counting Rules, Cumulative Variable Counting Scheme.
262 Power System Voltage Control using LP and Artificial Neural Network
Authors: A. Sina, A. Aeenmehr, H. Mohamadian
Abstract:
Optimization and control of reactive power distribution in power systems leads to better operation of the reactive power resources. Reactive power control considerably reduces power losses and effective loads and improves the power factor of the power system. Another important reason for reactive power control is improving the voltage profile of the power system. In this paper, voltage and reactive power control using neural network techniques has been applied to the 33-bus system of the Tehran Electric Company. In the suggested ANN, the voltages of the PQ buses are taken as the inputs, while the generator voltages, transformer taps and shunt compensators are taken as the outputs. Results of this technique have been compared with linear programming, with minimization of the transmission line power losses as the objective function of the linear programming technique. The comparison shows that the ANN technique improves the precision and reduces the computation time. The ANN technique also has a simple structure and makes it possible to use operator experience.
Keywords: Voltage control, linear programming, artificial neural network, power systems.
261 Efficient System for Speech Recognition using General Regression Neural Network
Authors: Abderrahmane Amrouche, Jean Michel Rouvaen
Abstract:
In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses neural networks based on nonparametric density estimation, namely the general regression neural network (GRNN). The performance of the proposed model is compared to similar recognition systems based on the Multilayer Perceptron (MLP), the Recurrent Neural Network (RNN) and the well-known discrete Hidden Markov Model (HMM-VQ), which we have also implemented. Experimental results obtained with Arabic digits show that the use of nonparametric density estimation with an appropriate smoothing factor (spread) improves the generalization power of the neural network. The word error rate (WER) is reduced significantly over the baseline HMM method. GRNN computation is a successful alternative to the other neural networks and the DHMM.
Keywords: Speech Recognition, General Regression Neural Network, Hidden Markov Model, Recurrent Neural Network, Arabic Digits.
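As a rough illustration of the role of the smoothing factor (spread) mentioned in this abstract, the following minimal sketch implements a GRNN-style classifier as a Gaussian-weighted average of training targets. The feature vectors and labels are toy placeholders, not the Arabic-digit data of the paper.

```python
import numpy as np

def grnn_predict(x_train, y_train, x, spread):
    """GRNN estimate: Gaussian-weighted average of training targets (Nadaraya-Watson form)."""
    d2 = np.sum((x_train - x) ** 2, axis=1)        # squared distances to every stored pattern
    w = np.exp(-d2 / (2.0 * spread ** 2))          # pattern-layer activations
    return w @ y_train / np.sum(w)                 # summation and division layers

# Toy data: 2-D "feature vectors" for two classes, one-hot targets (placeholders only).
x_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y_train = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
probe = np.array([0.85, 0.80])
scores = grnn_predict(x_train, y_train, probe, spread=0.3)
print("predicted class:", int(np.argmax(scores)), "scores:", scores.round(3))
```

A small spread makes the network memorize the stored patterns, while a large spread over-smooths them; the abstract's point is that an appropriate value improves generalization.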
260 A New Design Partially Blind Signature Scheme Based on Two Hard Mathematical Problems
Authors: Nedal Tahat
Abstract:
Many existing partially blind signature schemes are based on a single hard problem such as factoring, discrete logarithm, residuosity, or elliptic curve discrete logarithm. Sooner or later these systems will become broken and vulnerable if the factoring or discrete logarithm problem is cracked. This paper proposes a secure partially blind signature scheme based on both the factoring (FAC) problem and the elliptic curve discrete logarithm (ECDL) problem. As the proposed scheme rests on the factoring and ECDL hard problems, it has a solid structure and will leave an intruder bemused, because it is very unlikely that the two hard problems can be solved simultaneously. In order to assess the security level of the proposed scheme, a performance analysis has been conducted. Results show that the proposed scheme satisfies the partial blindness, randomization, unlinkability and unforgeability properties. Apart from this, we have also investigated the computation cost of the proposed scheme. The new scheme is robust, and it is difficult for malevolent attacks to break it.
Keywords: Cryptography, Partially Blind Signature, Factoring, Elliptic Curve Discrete Logarithms.
259 The Study of Magnetic and Transport Properties in Normal State Eu1.85+yCe0.15-yCu1-yFeyO4+α-δ
Authors: Risdiana, D. Suhendar, S. Pratiwi, W. A. Somantri, T. Saragi
Abstract:
The effect of partial substitution of the magnetic impurity Fe for Cu on the magnetic and transport properties of the electron-doped superconducting cuprate Eu1.85+yCe0.15-yCu1-yFeyO4+α-δ (ECCFO) with y = 0, 0.010, 0.020, and 0.050 has been studied, in order to investigate the mechanism of the magnetic and transport properties of ECCFO in the normal state. Magnetic properties are investigated by DC magnetic-susceptibility measurements carried out at low temperatures down to 2 K using a standard SQUID magnetometer in a magnetic field of 5 Oe on field cooling. Transport properties, addressed in terms of electron mobility, are extracted from the radius of electron localization calculated from the temperature dependence of resistivity. For y = 0, the temperature dependence of the DC magnetic susceptibility (χ) indicates a change of magnetic behavior from paramagnetic to diamagnetic below 15 K. Above 15 K, all samples show paramagnetic behavior, with the magnetic moment per unit volume increasing with increasing y. Electron mobility decreases with increasing y.
Keywords: DC magnetic susceptibility, electron mobility, Eu1.85+yCe0.15-yCu1-yFeyO4+α-δ, normal state.
258 Posture Recognition using Combined Statistical and Geometrical Feature Vectors based on SVM
Authors: Omer Rashid, Ayoub Al-Hamadi, Axel Panning, Bernd Michaelis
Abstract:
It is hard to perceive the interaction process with machines when visual information is not available. In this paper, we address this issue to provide interaction through visual techniques. Posture recognition is performed for American Sign Language to recognize static alphabets and numbers. 3D information is exploited to obtain segmentation of hands and face using a normal Gaussian distribution and depth information. Features for posture recognition are computed using statistical and geometrical properties which are translation, rotation and scale invariant. Hu moments as statistical features, and circularity and rectangularity as geometrical features, are incorporated to build the feature vectors. These feature vectors are used to train an SVM classifier that recognizes static alphabets and numbers. For the alphabets, curvature analysis is carried out to reduce misclassifications. The experimental results show that the proposed system recognizes posture symbols with recognition rates of 98.65% and 98.6% for ASL alphabets and numbers, respectively.
Keywords: Feature Extraction, Posture Recognition, Pattern Recognition, Application.
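The feature construction can be sketched as follows with OpenCV; the contour below is a hypothetical stand-in for a segmented hand, and the exact composition of the paper's feature vector is an assumption, not quoted from it.

```python
import cv2
import numpy as np

def posture_features(contour):
    """Combine Hu moments (statistical) with circularity and rectangularity (geometrical)."""
    m = cv2.moments(contour)
    hu = cv2.HuMoments(m).flatten()                      # 7 invariant statistical features
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    circularity = 4.0 * np.pi * area / (perimeter ** 2 + 1e-12)
    (_, (w, h), _) = cv2.minAreaRect(contour)
    rectangularity = area / (w * h + 1e-12)
    return np.hstack([hu, circularity, rectangularity])  # combined feature vector for the SVM

# Hypothetical contour: a 10x10 square standing in for a segmented hand region.
square = np.array([[[0, 0]], [[0, 10]], [[10, 10]], [[10, 0]]], dtype=np.int32)
print(posture_features(square).round(4))
```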
257 Traction Behavior of Linear Piezo-Viscous Lubricants in Rough Elastohydrodynamic Lubrication Contacts
Authors: Punit Kumar, Niraj Kumar
Abstract:
The traction behavior of lubricants with a linear pressure-viscosity response in EHL line contacts is investigated numerically for smooth as well as rough surfaces. The analysis involves the simultaneous solution of the Reynolds, elasticity and energy equations along with the computation of lubricant properties and surface temperatures. The temperature-modified Doolittle-Tait equations are used to calculate viscosity and density as functions of fluid pressure and temperature, while the Carreau model is used to describe the lubricant rheology. The surface roughness is assumed to be sinusoidal and is present on the nearly stationary surface in a near-pure-sliding EHL conjunction. The linear P-V oil is found to yield much lower traction coefficients and slightly thicker EHL films than the synthetic oil for a given set of dimensionless speed and load parameters. Besides, the increase in traction coefficient attributed to surface roughness is much lower in the former case. The present analysis emphasizes the importance of employing a realistic pressure-viscosity response for accurate prediction of EHL traction.
Keywords: EHL, linear pressure-viscosity, surface roughness, traction, water/glycol.
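For readers unfamiliar with the rheological model named above, a commonly used form of the Carreau shear-thinning law is sketched below; the symbols are generic, since the parameter values for the studied oils are not given in the abstract.

```latex
% Generalized Carreau viscosity model (schematic form; parameters are not from the paper)
\eta(\dot{\gamma}) = \eta_0 \left[ 1 + (\lambda \dot{\gamma})^{2} \right]^{\frac{n-1}{2}}
```

Here η0 is the low-shear viscosity, λ a characteristic relaxation time, γ̇ the shear rate and n the power-law index.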
256 Design of an Efficient Retimed CIC Compensation Filter
Authors: Vishal Awasthi, Krishna Raj
Abstract:
Unwanted side effects due to spectral aliasing and spectral imaging during signal processing are the major concern in sampling rate alteration. A multirate, multistage implementation of a digital filter can bring about a large computational saving over a single-rate filter designed for sample rate conversion. This implementation can be improved further through high-level architectural transformation at the circuit level. Reallocating registers and relocating flip-flops across logic gates through retiming is a prominent sequential transformation technique that optimizes hardware circuits to achieve a faster clock speed without affecting functionality. In this paper, we propose an efficient compensated cascaded integrator-comb (CIC) decimation filter structure, with a retimed FIR filter as compensator using the cutset retiming technique, and analyze the consequences of filter order variation. Compared with the non-retimed CIC compensation filter, the proposed structure achieves an improvement in the passband droop of 14% to 39%, reductions in computation time of 38.04%, 25.78%, 12.21%, 6.69% and 4.44%, and reductions in path delay of 62.27%, 72%, 86.63%, 91.56% and 94.42% for filters of order 3, 6, 8, 12 and 24, respectively.
Keywords: Multirate Filtering, CIC decimation filter, Compensation theory, Retiming, Retiming algorithm, Filter order, Synchronous dataflow graph.
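The passband droop that the compensation FIR corrects can be illustrated with the standard CIC magnitude response; the parameters below (N stages, decimation factor R, differential delay M) are illustrative assumptions, not the filter orders studied in the paper.

```python
import numpy as np

N, R, M = 3, 8, 1                              # stages, decimation factor, differential delay (illustrative)

def cic_gain(f_out):
    """DC-normalised CIC magnitude |sin(w*R*M/2)/sin(w/2)|^N at frequency f_out of the output rate."""
    w = 2.0 * np.pi * f_out / R                # same frequency referred to the input sample rate
    return (np.abs(np.sin(w * R * M / 2.0) / np.sin(w / 2.0)) ** N) / float(R * M) ** N

for f in (0.05, 0.125, 0.25):                  # fractions of the output sample rate
    g = cic_gain(f)
    print(f"f = {f:5.3f} f_out: gain = {g:.4f} ({20*np.log10(g):6.2f} dB), ideal compensator gain = {1/g:.4f}")
```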
255 Statistical and Land Planning Study of Tourist Arrivals in Greece during 2005-2016
Authors: Dimitra Alexiou
Abstract:
During the last 10 years, in spite of the economic crisis, the number of tourists arriving in Greece has increased, particularly during the tourist season from April to October. In this paper, the number of annual tourist arrivals is studied to explore preferences with regard to the month of travel, the selected destinations, as well as the amount of money spent. The collected data are processed with statistical methods, yielding numerical and graphical results. From the computation of statistical parameters and the forecasting with exponential smoothing, useful conclusions are drawn that can be used by the Greek tourism authorities, as well as by tourist organizations, for planning purposes in the coming years. The results of this paper and the computed forecast can also be used for decision making by private tourist enterprises investing in Greece. With regard to the statistical methods, simple exponential smoothing of the time series data is employed; the search for the best forecast for 2017 and 2018 provides the value of the smoothing coefficient. All statistical computations and graphics are carried out in Microsoft Excel.
Keywords: Tourism, statistical methods, exponential smoothing, land spatial planning, economy, Microsoft Excel.
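A minimal sketch of the forecasting step described above, assuming a grid search for the smoothing coefficient; the arrival figures are hypothetical placeholders, not the Greek data.

```python
def ses_forecast(series, alpha):
    """One-step-ahead simple exponential smoothing; forecast[t] predicts series[t]."""
    forecast = [series[0]]                              # initialise with the first observation
    for t in range(1, len(series)):
        forecast.append(alpha * series[t - 1] + (1 - alpha) * forecast[t - 1])
    return forecast

def best_alpha(series, step=0.01):
    """Grid search for the smoothing coefficient minimising the sum of squared errors."""
    candidates = [round(i * step, 4) for i in range(1, int(1 / step) + 1)]
    sse = lambda a: sum((x - f) ** 2 for x, f in zip(series, ses_forecast(series, a)))
    return min(candidates, key=sse)

arrivals = [16.4, 15.0, 17.9, 20.1, 22.0, 23.6, 24.8, 26.0, 27.2, 28.1, 30.2, 24.8]  # hypothetical, millions
alpha = best_alpha(arrivals)
history = ses_forecast(arrivals, alpha)
next_period = alpha * arrivals[-1] + (1 - alpha) * history[-1]
print(f"best alpha = {alpha}, forecast for the next period = {next_period:.1f} million arrivals")
```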
254 Study the Effect of Roughness on the Higher Order Moment to Extract Information about the Turbulent Flow Structure in an Open Channel Flow
Authors: Md Abdullah Al Faruque, Ram Balachandar
Abstract:
The present study was carried out to understand the extent of the effect of roughness and Reynolds number in open channel flow (OCF). To this end, four different bed surface conditions, consisting of a smooth bed, distributed roughness, continuous roughness and a natural sand bed, and two different Reynolds numbers for each bed surface were adopted in this study. Particular attention was given to mean velocity, turbulence intensity, Reynolds shear stress, correlation, higher-order moments and quadrant analysis. Further, the extent of the influence of roughness and Reynolds number in the depth-wise direction was also studied. Increased Reynolds shear stress is noticed near rough beds because the arrays of discrete roughness elements and the flow over these elements generate a series of wakes, which contribute to the generation of significantly higher Reynolds shear stress.
Keywords: Bed roughness, ejection, sweep, open channel flow, Reynolds Shear Stress, turbulent boundary layer, velocity triple product.
253 A Proxy Multi-Signature Scheme with Anonymous Vetoable Delegation
Authors: Pei-yih Ting, Dream-Ming Huang, Xiao-Wei Huang
Abstract:
Frequently a group of people jointly decide and authorize a specific person as a representative in some business/political occasions, e.g., the board of a company authorizes the chief executive officer to close a multi-billion acquisition deal. In this paper, an integrated proxy multi-signature scheme that allows anonymously vetoable delegation is proposed. This protocol integrates mechanisms of private veto, distributed proxy key generation, secure transmission of the proxy key, and an existentially unforgeable proxy multi-signature scheme. First, a provably secure Guillou-Quisquater proxy signature scheme is presented; then the "zero-sharing" protocol is extended over a composite-modulus multiplicative group; and finally the above two are combined to realize the GQ proxy multi-signature with anonymously vetoable delegation. As a proxy signature scheme, this protocol protects both the original signers and the proxy signer. The modular design allows a simplified implementation with lower communication overheads and better computation performance than a general secure multi-party protocol.
Keywords: GQ proxy signature, proxy multi-signature, zero-sharing protocol, secure multi-party protocol, private veto protocol.
252 Variation of Spot Price and Profits of Andhra Pradesh State Grid in Deregulated Environment
Authors: Chava Sunil Kumar, P.S. Subrahmanyan, J. Amarnath
Abstract:
In this paper, the variation of the spot price and the total profits of the generating companies through wholesale electricity trading are discussed with and without the Central Generating Stations (CGS) share, and seasonal variations are also considered. It demonstrates how proper analysis of generators' efficiencies and capabilities, types of generators owned, fuel costs, transmission losses and settling price variation, using the solutions of Optimal Power Flow (OPF), can allow companies to maximize overall revenue, and illustrates how OPF solutions can be used to maximize companies' revenue under different scenarios. The work is also extended to the computation of Available Transfer Capability (ATC), which is very important to transmission system security and market forecasting. From these results it is observed how crucial it is for companies to plan their daily operations, which is certainly useful in an online environment of a deregulated power system. The above tasks are demonstrated on the 124-bus real-life Indian utility power system of the Andhra Pradesh State Grid, and the results are presented and analyzed.
Keywords: OPF, ATC, Electricity Market, Bid, Spot Price.
251 Investigation of Electromagnetic Force in 3P5W Busbar System under Peak Short-Circuit Current
Authors: Farhana Mohamad Yusop, Syafrudin Masri, Dahaman Ishak, Mohamad Kamarol
Abstract:
Electromagnetic forces on a three-phase five-wire (3P5W) busbar system are investigated under three-phase short-circuit current. The busbar conductors, placed in a compact galvanized steel enclosure, are rectangular in shape. Transient analysis in Opera-2D is carried out to develop the model of the three-phase short-circuit current in the system. The simulation result is compared with the calculation result obtained by applying the Biot-Savart law and the Laplace equation. In this analytical approach, the moment of peak short-circuit current is taken into account. The effect of the geometrical arrangement of the conductors and the presence of the steel enclosure are considered by the theory of images. The results show that the electromagnetic force due to the transient short-circuit obtained from simulation agrees with the calculation.
Keywords: Busbar, electromagnetic force, short-circuit current, transient analysis.
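As a back-of-envelope companion to the analytical approach described above, the force per metre between two long parallel conductors carrying the peak short-circuit current follows from the Biot-Savart field; the current and spacing below are illustrative assumptions, not the 3P5W busbar values.

```python
import math

mu0 = 4.0 * math.pi * 1e-7                 # permeability of free space [H/m]
i_peak = 50e3                              # assumed peak short-circuit current [A]
spacing = 0.10                             # assumed conductor centre-to-centre spacing [m]

# F/l = mu0 * I1 * I2 / (2 * pi * d) for two long parallel conductors carrying equal currents
force_per_metre = mu0 * i_peak ** 2 / (2.0 * math.pi * spacing)
print(f"Force per unit length between the two conductors: {force_per_metre:.0f} N/m")
```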
250 Extended Well-Founded Semantics in Bilattices
Authors: Daniel Stamate
Abstract:
One of the most used assumptions in logic programming and deductive databases is the so-called Closed World Assumption (CWA), according to which the atoms that cannot be inferred from the programs are considered to be false (i.e. a pessimistic assumption). One of the most successful semantics of conventional logic programs based on the CWA is the well-founded semantics. However, the CWA is not applicable in all circumstances when information is handled; that is, the well-founded semantics, if conventionally defined, would behave inadequately in different cases. The solution we adopt in this paper is to extend the well-founded semantics so that it can also be based on other assumptions. The basis of (default) negative information in the well-founded semantics is given by the so-called unfounded sets. We extend this concept by considering optimistic, pessimistic, skeptical and paraconsistent assumptions, used to complete missing information in a program. Our semantics, called the extended well-founded semantics, also expresses imperfect information, considered to be missing/incomplete, uncertain and/or inconsistent, by using bilattices as multivalued logics. We provide a method of computing the extended well-founded semantics and show that the Kripke-Kleene semantics is captured by considering a skeptical assumption. We also show that the complexity of the computation of our semantics is polynomial time.
Keywords: Logic programs, imperfect information, multivalued logics, bilattices, assumptions.
249 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer allows finding the optimal network parameters that fit the mapping data. Moreover, it permits a decrease in the training time during the computation process, which avoids the use of computers with high memory usage.
Keywords: Neural network computing, information processing, input-output mapping, training time, computers with high memory.
248 Developing a Multiagent Based Decision Support System for Realtime Multi-Risk Disaster Management
Authors: D. Moser, D. Pinto, A. Cipriano
Abstract:
A Disaster Management System (DMS) is very important for countries with multiple disasters, such as Chile. In the world (and also in Chile) different disasters (earthquakes, tsunamis, volcanic eruptions, fires or other natural or man-made disasters) happen and affect the population. It is also possible that two or more disasters occur at the same time, which means that a multi-risk situation must be mastered. To handle such a situation, a Decision Support System (DSS) based on multiagents is a suitable architecture. The best-known DMSs are concerned with only a single disaster (sometimes the combination of earthquake and tsunami) and often with a particular disaster. Nevertheless, a DSS helps achieve a better real-time response. The proposal of our work is to analyze the existing systems in the literature and expand them to multi-risk disasters in order to construct a well-organized system. The work shown here is an approach to a multi-risk system, which needs an architecture and well-defined aims. At this moment our study is a kind of case study to analyze the path we have to follow to create the proposed system in the future.
Keywords: Decision Support System, Disaster Management System, Multi-Risk, Multiagent System.
247 Low Complexity Multi Mode Interleaver Core for WiMAX with Support for Convolutional Interleaving
Authors: Rizwan Asghar, Dake Liu
Abstract:
A hardware-efficient, multi-mode, re-configurable architecture of an interleaver/de-interleaver for multiple standards, like DVB, WiMAX and WLAN, is presented. Interleavers consume a large part of the silicon area when implemented with conventional methods, as they use memories to store permutation patterns. In addition, different types of interleavers in different standards cannot share hardware due to different construction methodologies. The novelty of the work presented in this paper is threefold: 1) mapping of the vital types of interleavers, including the convolutional interleaver, onto a single architecture with the flexibility to change the interleaver size; 2) reduction of the hardware complexity for channel interleaving in WiMAX by using a 2-D realization of the interleaver functions; and 3) reduction of silicon cost overheads by avoiding the use of small memories. The proposed architecture consumes 0.18 mm2 of silicon area for a 0.12 μm process and can operate at a frequency of 140 MHz. The reduced complexity helps minimize memory utilization and, at the same time, provides strong support for on-the-fly computation of permutation patterns.
Keywords: Hardware interleaver implementation, WiMAX, DVB, block interleaver, convolutional interleaver, hardware multiplexing.
246 Collision Detection Algorithm Based on Data Parallelism
Authors: Zhen Peng, Baifeng Wu
Abstract:
Modern computing technology enters the era of parallel computing with the trend of sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it is able to gather more and more computing ability by increasing the number of processor cores without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications face the challenge of increasingly large amounts of data. Data-parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unparalleled with respect to traditional algorithms.
Keywords: Data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability.
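The decomposition idea can be sketched as a vectorised (SIMD-style) overlap test between the simple objects of two complex objects; the axis-aligned boxes and random data below are assumptions used only for illustration, not BIM geometry.

```python
import numpy as np

def boxes_overlap(a, b):
    """a: (n,6), b: (m,6) boxes as [xmin,ymin,zmin,xmax,ymax,zmax]; returns an (n,m) boolean matrix."""
    a_min, a_max = a[:, None, :3], a[:, None, 3:]
    b_min, b_max = b[None, :, :3], b[None, :, 3:]
    # Two boxes overlap iff they overlap along every axis; evaluated for all pairs at once.
    return np.all((a_min <= b_max) & (b_min <= a_max), axis=-1)

rng = np.random.default_rng(1)
mins = rng.random((100, 3)); obj1 = np.hstack([mins, mins + 0.1])   # simple objects of complex object 1
mins = rng.random((80, 3));  obj2 = np.hstack([mins, mins + 0.1])   # simple objects of complex object 2
hits = boxes_overlap(obj1, obj2)
print("colliding simple-object pairs:", int(hits.sum()))
```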
245 Method of Moments for Analysis of Multiple Crack Interaction in an Isotropic Elastic Solid
Authors: Weifeng Wang, Xianwei Zeng, Jianping Ding
Abstract:
The problem of N interacting cracks in an isotropic elastic solid is decomposed into one subproblem of a homogeneous solid without cracks and N subproblems, each containing a single crack subjected to unknown tractions on its two crack faces. The unknown tractions, namely the pseudo-tractions on each crack, are expanded into polynomials with unknown coefficients, which have to be determined by the consistency condition, i.e. by the equivalence of the original multiple-crack interaction problem and the superposition of the N+1 subproblems. In this paper, Kachanov's approach of average tractions is extended into a method of moments to impose the consistency condition approximately. Hence Kachanov's method can be viewed as the zero-order method of moments. Numerical results for the stress intensity factors are presented for the interaction of two collinear cracks, three collinear cracks, two parallel cracks, and three parallel cracks. As the order of the moments increases, the accuracy of the method of moments improves.
Keywords: Crack interaction, stress intensity factor, multiple cracks, method of moments.
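Schematically, and with notation of our own rather than quoted from the paper, the moment-matching of the consistency condition can be written as:

```latex
% Pseudo-traction expansion and weighted (moment) form of the consistency condition (schematic)
p_i(x) \approx \sum_{k=0}^{K} a_{ik}\, x^{k}, \qquad
\int_{\Gamma_i} x^{m} \Big[\, p_i(x) - p_i^{0}(x) - \sum_{j \ne i} T_{ij}\{p_j\}(x) \Big]\, dx = 0,
\quad m = 0, 1, \dots, K
```

Here p_i^0 denotes the traction induced on crack i by the remote load and T_ij{p_j} the traction induced on crack i by the pseudo-traction of crack j; retaining only the m = 0 moment corresponds to Kachanov's average-traction method.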
244 Spectral Entropy Employment in Speech Enhancement based on Wavelet Packet
Authors: Talbi Mourad, Salhi Lotfi, Chérif Adnen
Abstract:
In this work, we are interested in developing a speech denoising tool using the discrete wavelet packet transform (DWPT). This speech denoising tool will be employed for recognition, coding and synthesis applications. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the non-stationary noise level, we employ the spectral entropy. A comparison of our proposed technique with classical denoising methods based on thresholding and spectral subtraction is made in order to evaluate our approach. The experimental implementation uses speech signals corrupted by two sorts of noise, white noise and Volvo noise. The results of listening tests show that our proposed technique is better than spectral subtraction. The results of the SNR computation show the superiority of our technique compared to the classical thresholding method using the modified hard thresholding function based on the µ-law algorithm.
Keywords: Enhancement, spectral subtraction, SNR, discrete wavelet packet transform, spectral entropy histogram.
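A minimal sketch of the spectral entropy used above for non-stationary noise estimation: the normalised power spectrum of a frame is treated as a probability distribution, giving high entropy for broadband noise and low entropy for tonal or voiced frames. The frames below are synthetic placeholders, not the corrupted speech signals of the paper.

```python
import numpy as np

def spectral_entropy(frame):
    """Normalised spectral entropy in [0, 1] of one signal frame."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    p = spectrum / (np.sum(spectrum) + 1e-12)            # normalise the power spectrum to a distribution
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(len(spectrum))

rng = np.random.default_rng(0)
noise = rng.standard_normal(512)                          # broadband noise -> high entropy
tone = np.sin(2 * np.pi * 0.1 * np.arange(512))           # narrowband tone -> low entropy
print("noise frame:", round(spectral_entropy(noise), 3), " tonal frame:", round(spectral_entropy(tone), 3))
```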
243 Optimal Combination for Modal Pushover Analysis by Using Genetic Algorithm
Authors: K. Shakeri, M. Mohebbi
Abstract:
In order to consider the effects of the higher modes in pushover analysis, several multi-modal pushover procedures have been presented in recent years. In these methods the responses of the considered modes are combined by the square-root-of-sum-of-squares (SRSS) rule, although the application of elastic modal combination rules in the inelastic phase is no longer valid. In this research the feasibility of defining an efficient alternative combination method is investigated. Two steel moment-frame buildings, denoted SAC-9 and SAC-20, under ten earthquake records are considered. The nonlinear responses of the structures are estimated by the directed algebraic combination of the weighted responses of the separate modes. The weight of each mode is defined so that the resulting combined response has minimum error with respect to the nonlinear time history analysis. A genetic algorithm (GA) is used to minimize the error and optimize the weight factors. The optimal factors obtained for each mode in the different cases are compared with each other to find unique, appropriate weight factors for each mode in all cases.
Keywords: Genetic Algorithm, Modal Pushover, Optimal weight.
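In schematic notation (ours, not the paper's), the weighted combination and the error minimised by the GA can be written as:

```latex
% Weighted combination of modal pushover responses and the GA objective (schematic)
\hat{R}(w) = \sum_{i=1}^{n} w_i \, R_i , \qquad
w^{*} = \arg\min_{w} \left\| R_{\mathrm{NLTHA}} - \hat{R}(w) \right\|
```

Here R_i is the pushover response of mode i and R_NLTHA the benchmark nonlinear time history response.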
242 Adaptive Gait Pattern Generation of Biped Robot based on Human's Gait Pattern Analysis
Authors: Seungsuk Ha, Youngjoon Han, Hernsoo Hahn
Abstract:
This paper proposes a method of adaptively generating a gait pattern for a biped robot. The gait synthesis is based on analysis of the human gait pattern, and the proposed method can easily be applied to generate a natural and stable gait pattern for any biped robot. To analyze the human gait pattern, sequential images of the human gait on the sagittal plane are acquired, from which the gait control values are extracted. The gait pattern of the biped robot on the sagittal plane is adaptively generated by a genetic algorithm using the human gait control values. However, gait trajectories of the biped robot on the sagittal plane are not enough to construct the complete gait pattern, because the biped robot moves in 3-dimensional space. Therefore, the gait pattern on the frontal plane, generated from the Zero Moment Point (ZMP), is added to the one acquired on the sagittal plane. Consequently, a natural and stable walking pattern for the biped robot is obtained.
Keywords: Biped robot, gait pattern, genetic algorithm.
241 Formation of Chemical Compound Layer at the Interface of Initial Substances A and B with Dominance of Diffusion of the A Atoms
Authors: Pavlo Selyshchev, Samuel Akintunde
Abstract:
A theoretical approach is developed to consider the formation of a chemical compound layer at the interface between initial substances A and B due to interfacial interaction and diffusion. We consider the situation when the speed of the interfacial interaction is large enough and the diffusion of A-atoms through the AB-layer is much greater than the diffusion of B-atoms. Atoms from the A-layer diffuse toward the B-atoms and form AB-atoms on the surface of the B-layer; B-atoms are assumed to be immobile. The growth kinetics of the AB-layer is described by two differential equations with non-linear coupling, producing a good fit to the experimental data. It is shown that the growth of the thickness of the AB-layer is determined by the dependence of the chemical reaction rate on the reactant concentrations. In special cases the thickness of the AB-layer can grow linearly or parabolically, depending on which of the processes (interaction or diffusion) controls the growth. The thickness of the AB-layer as a function of time is obtained, and the moment of time (transition point) at which the linear growth changes to parabolic is found.
Keywords: Phase formation, Binary systems, Interfacial Reaction, Diffusion, Compound layers, Growth kinetics.
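The two limiting growth regimes mentioned above follow the familiar reaction-controlled versus diffusion-controlled pattern; the relations below are a schematic illustration, not the paper's coupled equations.

```latex
% Limiting cases for the AB-layer thickness X(t) (schematic)
\text{reaction control:}\;\; \frac{dX}{dt} \simeq k \;\Rightarrow\; X \propto t ,
\qquad
\text{diffusion control:}\;\; \frac{dX}{dt} \simeq \frac{D}{X} \;\Rightarrow\; X \propto \sqrt{t}
```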
240 The Change in Management Accounting from an Institutional and Contingency Perspective: A Case Study for a Romanian Company
Authors: Gabriel Jinga, Madalina Dumitru
Abstract:
The objective of this paper is to present the process of change in management accounting in Romania, a former communist country in Eastern Europe. In order to explain this process, we used contingency and institutional theories. We focused on the following directions: the presentation of the scientific context and motivation of this research, and the case study. We presented the state of the art in the process of change in management accounting from the international and national perspectives. We also described the evolution of management accounting in Romania in the context of economic and political changes. An important moment was the fall of communism in 1989, which represents a starting point for a new economic environment and for new management accounting. Accordingly, we developed a case study which presents this evolution. The conclusion of our research is that the changes in the management accounting system of the company analysed occurred at the same time as the institutionalisation of certain elements (e.g. degree of competition, training and competencies in management accounting). The management accounting system was shaped by the contingencies specific to this company (e.g. environment, industry, strategy).
Keywords: Management accounting, change, Romania, contingency and institutional theory.
239 A Semi-Fragile Signature based Scheme for Ownership Identification and Color Image Authentication
Authors: M. Hamad Hassan, S.A.M. Gilani
Abstract:
In this paper, a novel scheme is proposed for ownership identification and authentication using color images, deploying cryptography and digital watermarking as the underlying technologies. The former is used to compute the content-based hash, and the latter to embed the watermark. The host image that will claim to be the rightful owner is first transformed from the RGB to the YST color space, exclusively designed for watermarking-based applications. Geometrically, YS ⊥ T, and the T channel corresponds to the chrominance component of the color image, which is therefore suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks; the block size is important for enhanced localization, security and low computation. Each block, along with the ownership information, is then processed by SHA160, a one-way hash function, to compute the content-based hash, which is always unique and resistant against the birthday attack, instead of using MD5, which may raise the collision condition, i.e. H(m) = H(m'). The watermark payload varies from block to block and is computed by the variance factor α. The quality of the watermarked images is quite high, both subjectively and objectively. Our scheme is blind, computationally fast, and exactly locates the tampered region.
Keywords: Hash Collision, LSB, MD5, PSNR, SHA160.
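Assuming that SHA160 refers to the 160-bit SHA-1 function, the per-block content hash described above can be sketched as follows; the T-channel data and owner string are placeholders, and the embedding of the hash bits into each block is only indicated, not implemented.

```python
import hashlib
import numpy as np

def block_hashes(t_channel, owner_id):
    """Hash each 4x4 block of the T channel together with the ownership information (SHA-1, 160 bits)."""
    h, w = t_channel.shape
    out = {}
    for r in range(0, h - h % 4, 4):
        for c in range(0, w - w % 4, 4):
            block = t_channel[r:r + 4, c:c + 4]
            digest = hashlib.sha1(block.tobytes() + owner_id.encode()).digest()
            out[(r, c)] = digest               # 20 bytes = 160 content-based bits per block
    return out

t = (np.random.rand(16, 16) * 255).astype(np.uint8)   # placeholder T-channel data
hashes = block_hashes(t, "OwnerX")                     # hypothetical ownership string
print(len(hashes), "blocks hashed; first digest:", hashes[(0, 0)].hex()[:16], "...")
```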
238 Optimal Path Planning under Priori Information in Stochastic, Time-varying Networks
Authors: Siliang Wang, Minghui Wang, Jun Hu
Abstract:
A novel path planning approach is presented to find optimal paths in stochastic, time-varying networks given a priori traffic information. Most existing studies make use of dynamic programming to find the optimal path; however, those methods have been shown to be unable to obtain the globally optimal value, and how to design efficient algorithms is another challenge. This paper employs a decision-theoretic framework for defining the optimal path: for a given source S and destination D in an urban transit network, we seek an S-D path of lowest expected travel time, where the link travel times are discrete random variables. To overcome the deficiencies of dynamic programming methods, such as the curse of dimensionality and violation of the principle of optimality, an integer programming model is built to realize the assignment of discrete travel time variables to arcs. Simultaneously, pruning techniques are applied to reduce the computational complexity of the algorithm. The final experiments show the feasibility of the novel approach.
Keywords: Pruning method, stochastic, time-varying networks, optimal path planning.
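The decision criterion above (lowest expected travel time over discrete random link times) can be illustrated with a toy network; the links, values and probabilities below are hypothetical and the enumeration of paths stands in for the integer programming model of the paper.

```python
# Each link's travel time is a discrete random variable: (values, probabilities).
links = {
    ("S", "A"): ([4, 6], [0.7, 0.3]),
    ("A", "D"): ([5, 9], [0.5, 0.5]),
    ("S", "B"): ([3, 8], [0.6, 0.4]),
    ("B", "D"): ([6, 7], [0.8, 0.2]),
}

def expected(link):
    """Expected travel time of one link."""
    values, probs = links[link]
    return sum(v * p for v, p in zip(values, probs))

paths = [[("S", "A"), ("A", "D")], [("S", "B"), ("B", "D")]]       # candidate S-D paths
best = min(paths, key=lambda p: sum(expected(l) for l in p))
print("best S-D path:", best, "expected travel time:", sum(expected(l) for l in best))
```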
237 Data Hiding by Vector Quantization in Color Image
Authors: Yung-Gi Wu
Abstract:
With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so the security issue becomes an important topic in the protection of digital data. The digital watermark is a method to protect the ownership of digital data. Embedding the watermark certainly influences the quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, which means that users will not be conscious of the existence of the embedded watermark, even though the watermarked image differs only slightly from the original image. Meanwhile, VQ carries a heavy computational burden, so we adopt a fast VQ encoding scheme based on partial distortion searching (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks we hide in the image can be gray-level, bi-level or color images; texts can also be regarded as watermarks to embed. In order to test the robustness of the system, we use Photoshop to perform sharpening, cropping and altering, to check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
Keywords: Data hiding, vector quantization, watermark.
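The partial distortion search (PDS) speed-up mentioned above abandons a codeword as soon as its accumulated distortion exceeds the best distortion found so far; the sketch below uses a random codebook and block as placeholders, not the paper's actual codebook.

```python
import numpy as np

def pds_nearest(block, codebook):
    """Return (index, distortion) of the nearest codeword using partial distortion search."""
    best_idx, best_dist = 0, float("inf")
    for i, codeword in enumerate(codebook):
        dist = 0.0
        for b, c in zip(block, codeword):
            dist += (b - c) ** 2
            if dist >= best_dist:          # early exit: partial sum already worse than the best so far
                break
        else:                              # ran through all components without exceeding best_dist
            best_idx, best_dist = i, dist
    return best_idx, best_dist

rng = np.random.default_rng(0)
codebook = rng.random((256, 16))           # 256 codewords for 4x4 (16-dimensional) blocks
block = rng.random(16)
idx, dist = pds_nearest(block, codebook)
print("nearest codeword index:", idx, " distortion:", round(dist, 4))
```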
236 Faster FPGA Routing Solution using DNA Computing
Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon
Abstract:
There are many classical algorithms for finding a routing in an FPGA, but using DNA computing we can solve the routes efficiently and fast. The run-time complexity of DNA algorithms is much less than that of the classical algorithms used for solving FPGA routing. Research in DNA computing is at an early stage; the high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool. It has been proved by many research accomplishments that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper we propose a two-tier approach to the FPGA routing solution. First, the geometric FPGA detailed routing task is solved by transforming it into a Boolean satisfiability equation with the property that any assignment of input variables that satisfies the equation specifies a valid routing: a satisfying assignment for a particular route results in a valid routing, and the absence of a satisfying assignment implies that the layout is unroutable. In the second step, a DNA search algorithm is applied to this Boolean equation to find routing alternatives, utilizing the properties of DNA computation. The simulated results are satisfactory and indicate the applicability of DNA computing to solving the FPGA routing problem.
Keywords: FPGA, Routing, DNA Computing.
235 Modeling and Simulation of Two-Phase Interleaved Boost Converter Using Open-Source Software Scilab/Xcos
Authors: Yin Yin Phyo, Tun Lin Naing
Abstract:
This paper investigates the simulation of a two-phase interleaved boost converter (IBC) with the free and open-source software Scilab/Xcos. By using the interleaved method, current stress on components, component size, input current ripple and output voltage ripple can be reduced. The required mathematical model is obtained from the equivalent circuits of its four different modes of operation for simulation. The equivalent circuits are considered in continuous conduction mode (CCM). The average values of the system variables are derived from the state-space equations to find the equilibrium point. Scilab is becoming more and more popular among students, engineers and scientists because it is open-source software and free of charge; it offers great convenience through its powerful computation and simulation functions. The waveforms of the output voltage, input current and inductor currents are obtained with Scilab/Xcos.
Keywords: Two-phase boost converter, continuous conduction mode, free and open-source, interleaved method, dynamic simulation.
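A minimal sketch (not the authors' Xcos model) of the averaged equilibrium point in CCM for an ideal, lossless two-phase interleaved boost converter, in which the input current divides between the two phases; the component values are placeholders.

```python
Vin, D, R = 24.0, 0.5, 10.0              # input voltage [V], duty cycle, load resistance [ohm] (assumed)
Vo = Vin / (1.0 - D)                     # averaged output voltage of an ideal boost stage
Io = Vo / R                              # load current
I_in = Io / (1.0 - D)                    # averaged input current from the power balance Vin*Iin = Vo*Io
I_phase = I_in / 2.0                     # interleaving splits the input current between the two phases
print(f"Vo = {Vo:.1f} V, I_in = {I_in:.2f} A, per-phase inductor current = {I_phase:.2f} A")
```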