Search results for: moment computation.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 741


261 Low Cost Microcontroller Based ECG Machine

Authors: Muhibul H. Bhuyan, Md. T. Hasan, Hasan Iskander

Abstract:

An electrocardiograph (ECG) machine is an important piece of equipment for diagnosing heart problems. Besides, ECG signals are used to detect many other features of the human body and behavior. However, such machines are neither cheap nor simple enough in operation to be used in countries like Bangladesh, where most people are very low income earners. Therefore, in this paper, we have tried to implement a simple and portable ECG machine. Since the Arduino Uno microcontroller is very cheap, we have used it in our system to minimize the cost. Our designed system is powered by a two-voltage-level DC power supply. It provides wireless connectivity so that the ECG data can be viewed either on a smartphone running the Android operating system or on a PC/laptop running the Windows operating system. To display the data, a graphical user interface has been designed. An Android application has also been built using the IDE for Android 2.3 and API level 10. Since it requires no USB host API, almost 98% of the Android smartphones available in the country will be able to use it. We have computed the heart rate from the ECG measured by our designed machine and by an ECG machine of a reputed diagnostic center in Dhaka city for the same people at the same time on the same day. We then calculated the percentage error between the readings of the two machines and computed its average. From this computation, we have found that the average percentage error is within an acceptable limit.
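As a minimal sketch of the error computation described above, the snippet below compares hypothetical heart-rate readings from two machines and averages the per-subject percentage errors; all numbers are made up for illustration.

```python
# Illustrative sketch (hypothetical readings): average percentage error between
# heart rates from the designed ECG machine and a reference machine.

designed = [72, 88, 65, 101, 79]   # bpm from the low-cost machine (example values)
reference = [70, 90, 66, 99, 80]   # bpm from the diagnostic-center machine

errors = [abs(d - r) / r * 100.0 for d, r in zip(designed, reference)]
average_error = sum(errors) / len(errors)
print(f"per-subject error (%): {[round(e, 2) for e in errors]}")
print(f"average percentage error: {average_error:.2f}%")
```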

Keywords: Low cost ECG machine, heart diseases, remote monitoring, Arduino microcontroller.

260 Integration GIS–SCADA Power Systems to Enclosure Air Dispersion Model

Authors: Ibrahim Shaker, Amr El Hossany, Moustafa Osman, Mohamed El Raey

Abstract:

This paper explores an integration model between a GIS–SCADA system and an enclosure quantification model to assess the impact of a failure-safe event. There are real demands to identify spatial objects and improve control system performance. The employed methodology predicts electro-mechanical operations and the corresponding time to environmental incident variations. Open processing, as an object systems technology, is presented for integrating the enclosure database with minimal memory size and computation time via connectivity drivers such as ODBC/JDBC during the main stages of the GIS–SCADA connection. The function of the Geographic Information System is to manage power distribution in response to developing issues. In other words, GIS–SCADA systems integration will require numerical process objects to enable system model calibration and estimation, determination of past events for analysis, and prediction of emergency situations for response training.

Keywords: Air dispersion model, integration power system, SCADA systems, GIS system, environmental management.

259 Study the Effect of Roughness on the Higher Order Moment to Extract Information about the Turbulent Flow Structure in an Open Channel Flow

Authors: Md Abdullah Al Faruque, Ram Balachandar

Abstract:

The present study was carried out to understand the extent of the effect of roughness and Reynolds number in open channel flow (OCF). To this end, four different bed surface conditions, consisting of smooth, distributed roughness, continuous roughness, and natural sand bed, and two different Reynolds numbers for each bed surface were adopted in this study. Particular attention was given to mean velocity, turbulence intensity, Reynolds shear stress, correlation, higher order moments, and quadrant analysis. Further, the extent of the influence of roughness and Reynolds number in the depth-wise direction was also studied. Increased Reynolds shear stress is noticed near rough beds because the arrays of discrete roughness elements, and the flow over these elements, generate a series of wakes that contribute to the generation of significantly higher Reynolds shear stress.
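For readers unfamiliar with the quantities listed above, the sketch below computes the Reynolds shear stress, a higher-order moment (skewness), a velocity triple product, and the ejection/sweep quadrant fractions from synthetic velocity samples; the data are random and purely illustrative.

```python
import numpy as np

# Sketch (synthetic data): higher-order moments and quadrant analysis of
# velocity fluctuations u', v' at a single measurement point.
rng = np.random.default_rng(0)
u = rng.normal(0.5, 0.08, 10_000)          # streamwise velocity samples (m/s), synthetic
v = rng.normal(0.0, 0.03, 10_000)          # wall-normal velocity samples (m/s), synthetic

up, vp = u - u.mean(), v - v.mean()        # fluctuations about the mean
reynolds_shear = -(up * vp).mean()         # -u'v' (per unit density)
skew_u = (up**3).mean() / up.std()**3      # third-order moment (skewness)
triple_uuv = (up * up * vp).mean()         # one velocity triple product, u'u'v'

# Quadrant analysis: Q2 = ejection (u'<0, v'>0), Q4 = sweep (u'>0, v'<0)
q2_fraction = np.mean((up < 0) & (vp > 0))
q4_fraction = np.mean((up > 0) & (vp < 0))
print(reynolds_shear, skew_u, triple_uuv, q2_fraction, q4_fraction)
```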

Keywords: Bed roughness, ejection, sweep, open channel flow, Reynolds Shear Stress, turbulent boundary layer, velocity triple product.

258 Investigation of Electromagnetic Force in 3P5W Busbar System under Peak Short-Circuit Current

Authors: Farhana Mohamad Yusop, Syafrudin Masri, Dahaman Ishak, Mohamad Kamarol

Abstract:

The electromagnetic forces on a three-phase five-wire (3P5W) busbar system are investigated under three-phase short-circuit current. The busbar conductors, placed in a compact galvanized steel enclosure, are of rectangular shape. A transient analysis in Opera-2D is carried out to develop the model of the three-phase short-circuit current in the system. The result of the simulation is compared with the calculation result, which is obtained by applying Biot–Savart's law and the Laplace equation. Under this analytical approach, the moment of peak short-circuit current is taken into account. The effects of the geometrical arrangement of the conductors and the presence of the steel enclosure are considered by the theory of images. The results show that the electromagnetic force due to the transient short circuit obtained from the simulation agrees with the calculation.
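As a rough point of reference for the analytical side, the sketch below evaluates the classical Biot–Savart/Ampère expression for the force per unit length between two long parallel filamentary conductors; the current and spacing values are assumptions, and the rectangular cross-section and enclosure images of the actual study are not modelled.

```python
import math

MU_0 = 4e-7 * math.pi  # permeability of free space (H/m)

def force_per_metre(i1_amps, i2_amps, spacing_m):
    """Force per unit length between two long parallel filamentary conductors
    (Biot-Savart / Ampere law), a simplification of the real busbar geometry."""
    return MU_0 * i1_amps * i2_amps / (2.0 * math.pi * spacing_m)

# Example: peak short-circuit current of 50 kA in adjacent phases spaced 0.1 m apart.
print(f"{force_per_metre(50e3, 50e3, 0.1):.1f} N/m")
```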

Keywords: Busbar, electromagnetic force, short-circuit current, transient analysis.

257 Developing a Multiagent Based Decision Support System for Realtime Multi-Risk Disaster Management

Authors: D. Moser, D. Pinto, A. Cipriano

Abstract:

A Disaster Management System (DMS) is very important for countries with multiple disasters, such as Chile. In the world (and also in Chile), different disasters (earthquakes, tsunamis, volcanic eruptions, fires or other natural or man-made disasters) happen and affect the population. It is also possible that two or more disasters occur at the same time, which means that a multi-risk situation must be mastered. To handle such a situation, a Decision Support System (DSS) based on multiagents is a suitable architecture. The best known DMSs are concerned with only a single disaster (sometimes the combination of earthquake and tsunami) and often with a particular disaster. Nevertheless, a DSS helps achieve a better real-time response. The proposal of our work is to analyze the existing systems in the literature and expand them to multi-risk disasters in order to construct a well-organized system. The work shown here is an approach to a multi-risk system, which needs an architecture and well-defined aims. At the moment, our study is a case study to analyze the path we have to follow to create the proposed system in the future.

Keywords: Decision Support System, Disaster Management System, Multi-Risk, Multiagent System.

256 Underlying Cognitive Complexity Measure Computation with Combinatorial Rules

Authors: Benjapol Auprasert, Yachai Limpiyakorn

Abstract:

Measuring the complexity of software has been an insoluble problem in software engineering. Complexity measures can be used to predict critical information about the testability, reliability, and maintainability of software systems from automatic analysis of the source code. During the past few years, many complexity measures have been invented based on the emerging Cognitive Informatics discipline. These software complexity measures, including cognitive functional size, rely on the total cognitive weights of basic control structures such as loops and branches. This paper shows that the current calculation method can generate different results that are algebraically equivalent. However, analysis of the combinatorial meanings of this calculation method reveals a significant flaw in the measure, which also explains why it does not satisfy Weyuker's properties. Based on the findings, improvement directions, such as measure fusion and a cumulative variable counting scheme, are suggested to enhance the effectiveness of cognitive complexity measures.

Keywords: Cognitive Complexity Measure, Cognitive Weight of Basic Control Structure, Counting Rules, Cumulative Variable Counting Scheme.

255 Power System Voltage Control using LP and Artificial Neural Network

Authors: A. Sina, A. Aeenmehr, H. Mohamadian

Abstract:

Optimization and control of the reactive power distribution in power systems leads to better operation of the reactive power resources. Reactive power control considerably reduces power losses and effective loads and improves the power factor of the power system. Another important reason for reactive power control is improving the voltage profile of the power system. In this paper, voltage and reactive power control using neural network techniques has been applied to the 33-bus system of the Tehran Electric Company. In the suggested ANN, the voltages of the PQ buses have been considered as the input of the ANN, while the generator voltages, transformer taps and shunt compensators have been considered as the output of the ANN. The results of this technique have been compared with linear programming, in which minimization of the transmission line power losses is the objective function. The comparison of the results of the ANN technique with the LP shows that the ANN technique improves the precision and reduces the computation time. The ANN technique also has a simple structure, which makes it possible to exploit operator experience.

Keywords: voltage control, linear programming, artificial neural network, power systems

254 Efficient System for Speech Recognition using General Regression Neural Network

Authors: Abderrahmane Amrouche, Jean Michel Rouvaen

Abstract:

In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses neural networks based on nonparametric density estimation, namely the general regression neural network (GRNN). The relative performance of the proposed model is compared to similar recognition systems based on the Multilayer Perceptron (MLP), the Recurrent Neural Network (RNN) and the well-known discrete Hidden Markov Model (HMM-VQ) that we have also implemented. Experimental results obtained with Arabic digits have shown that the use of nonparametric density estimation with an appropriate smoothing factor (spread) improves the generalization power of the neural network. The word error rate (WER) is reduced significantly over the baseline HMM method. GRNN computation is a successful alternative to the other neural networks and the DHMM.
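The GRNN named above is essentially Nadaraya-Watson kernel regression with a Gaussian kernel whose width is the smoothing factor (spread); the sketch below shows that computation on a toy two-class example with made-up feature vectors.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, spread=0.5):
    """Minimal GRNN (Nadaraya-Watson kernel regression): the output is a
    weighted average of training targets, weighted by Gaussian kernels."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)        # squared distances
    w = np.exp(-d2 / (2.0 * spread ** 2))                # pattern-layer activations
    return (w @ y_train) / (w.sum() + 1e-12)             # summation / division layers

# Toy example: 3 feature vectors labelled with one-hot targets for 2 classes.
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.85, 0.9]])
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])       # one-hot class targets
print(grnn_predict(X, Y, np.array([0.8, 0.85]), spread=0.3))  # favours class 2
```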

Keywords: Speech Recognition, General Regression Neural Network, Hidden Markov Model, Recurrent Neural Network, Arabic Digits.

253 A New Design Partially Blind Signature Scheme Based on Two Hard Mathematical Problems

Authors: Nedal Tahat

Abstract:

Recently, many partially blind signature schemes have been based on a single hard problem, such as the factoring, discrete logarithm, residuosity or elliptic curve discrete logarithm problems. However, sooner or later these systems will become broken and vulnerable if the factoring or discrete logarithm problems are cracked. This paper proposes a secure partially blind signature scheme based on the factoring (FAC) problem and the elliptic curve discrete logarithm (ECDL) problem. As the proposed scheme rests on the factoring and ECDLP hard problems, it has a solid structure and will leave the intruder bemused, because it is very unlikely that the two hard problems can be solved simultaneously. In order to assess the security level of the proposed scheme, a performance analysis has been conducted. The results have proved that the proposed scheme effectively satisfies the partial blindness, randomization, unlinkability and unforgeability properties. Apart from this, we have also investigated the computational cost of the proposed scheme. The new scheme is robust, and it is difficult for malevolent attacks to break it.

Keywords: Cryptography, Partially Blind Signature, Factoring, Elliptic Curve Discrete Logarithms.

252 Method of Moments for Analysis of Multiple Crack Interaction in an Isotropic Elastic Solid

Authors: Weifeng Wang, Xianwei Zeng, Jianping Ding

Abstract:

The problem of N interacting cracks in an isotropic elastic solid is decomposed into a subproblem of a homogeneous solid without a crack and N subproblems, each having a single crack subjected to unknown tractions on the two crack faces. The unknown tractions, namely the pseudo-tractions on each crack, are expanded into polynomials with unknown coefficients, which have to be determined by the consistency condition, i.e. by the equivalence of the original multiple crack interaction problem and the superposition of the N+1 subproblems. In this paper, Kachanov's approach of average tractions is extended into the method of moments to approximately impose the consistency condition. Hence Kachanov's method can be viewed as the zero-order method of moments. Numerical results of the stress intensity factors are presented for interactions of two collinear cracks, three collinear cracks, two parallel cracks, and three parallel cracks. As the order of the moments increases, the accuracy of the method of moments improves.

Keywords: Crack interaction, stress intensity factor, multiple cracks, method of moments.

251 Optimal Combination for Modal Pushover Analysis by Using Genetic Algorithm

Authors: K. Shakeri, M. Mohebbi

Abstract:

In order to consider the effects of the higher modes in pushover analysis, several multi-modal pushover procedures have been presented in recent years. In these methods the responses of the considered modes are combined by the square-root-of-sum-of-squares (SRSS) rule, although the application of elastic modal combination rules in the inelastic phase is no longer valid. In this research the feasibility of defining an efficient alternative combination method is investigated. Two steel moment-frame buildings, denoted SAC-9 and SAC-20, under ten earthquake records are considered. The nonlinear responses of the structures are estimated by the direct algebraic combination of the weighted responses of the separate modes. The weight of each mode is defined so that the resulting combined response has a minimum error with respect to the nonlinear time history analysis. The genetic algorithm (GA) is used to minimize the error and optimize the weight factors. The optimal factors obtained for each mode in the different cases are compared with each other to find unique appropriate weight factors for each mode in all cases.
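A toy version of this weighted combination idea is sketched below: a small genetic-algorithm-style search finds weight factors whose direct algebraic combination of modal responses best matches a benchmark response. The modal and benchmark response values are invented for illustration, and the GA is deliberately minimal.

```python
import numpy as np

# Sketch (synthetic numbers): find modal weights w so that sum_i w_i * r_i of the
# modal responses best matches a nonlinear time-history (NTHA) benchmark.
rng = np.random.default_rng(1)
modal = np.array([[1.00, 0.35, 0.12],      # rows: storeys, cols: modes 1..3
                  [0.80, -0.10, 0.20],     # (illustrative response quantities)
                  [0.55, -0.30, 0.05]])
ntha = np.array([1.10, 0.78, 0.50])        # benchmark response (illustrative)

def error(w):                              # objective: misfit to the NTHA result
    return np.linalg.norm(modal @ w - ntha)

pop = rng.uniform(-1.5, 1.5, (40, 3))      # initial population of weight vectors
for _ in range(200):                       # generations
    fitness = np.array([error(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]                     # selection (elitism)
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0.0, 0.05, (30, 3))  # mutation
    pop = np.vstack([parents, children])
best = pop[np.argmin([error(w) for w in pop])]
print(best, error(best))
```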

Keywords: Genetic Algorithm, Modal Pushover, Optimal weight.

250 Adaptive Gait Pattern Generation of Biped Robot based on Human's Gait Pattern Analysis

Authors: Seungsuk Ha, Youngjoon Han, Hernsoo Hahn

Abstract:

This paper proposes a method of adaptively generating a gait pattern for a biped robot. The gait synthesis is based on analysis of the human gait pattern, and the proposed method can easily be applied to generate a natural and stable gait pattern for any biped robot. To analyze the human gait pattern, sequential images of the human gait on the sagittal plane are acquired, from which the gait control values are extracted. The gait pattern of the biped robot on the sagittal plane is adaptively generated by a genetic algorithm using the human gait control values. However, gait trajectories of the biped robot on the sagittal plane are not enough to construct the complete gait pattern, because the biped robot moves in three-dimensional space. Therefore, the gait pattern on the frontal plane, generated from the Zero Moment Point (ZMP), is added to the gait pattern acquired on the sagittal plane. Consequently, a natural and stable walking pattern for the biped robot is obtained.

Keywords: Biped robot, gait pattern, genetic algorithm.

249 Traction Behavior of Linear Piezo-Viscous Lubricants in Rough Elastohydrodynamic Lubrication Contacts

Authors: Punit Kumar, Niraj Kumar

Abstract:

The traction behavior of lubricants with a linear pressure-viscosity response in EHL line contacts is investigated numerically for smooth as well as rough surfaces. The analysis involves the simultaneous solution of the Reynolds, elasticity and energy equations along with the computation of lubricant properties and surface temperatures. The temperature-modified Doolittle-Tait equations are used to calculate viscosity and density as functions of fluid pressure and temperature, while the Carreau model is used to describe the lubricant rheology. The surface roughness is assumed to be sinusoidal and is present on the nearly stationary surface in a near-pure sliding EHL conjunction. The linear P-V oil is found to yield much lower traction coefficients and slightly thicker EHL films than the synthetic oil for a given set of dimensionless speed and load parameters. Besides, the increase in traction coefficient attributed to surface roughness is much lower for the former case. The present analysis emphasizes the importance of employing a realistic pressure-viscosity response for accurate prediction of EHL traction.
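For reference, the Carreau generalized-Newtonian model mentioned above relates viscosity to shear rate as sketched below; the parameter values are placeholders, not the lubricant data used in the paper.

```python
def carreau_viscosity(shear_rate, eta_0, eta_inf, lam, n):
    """Carreau model: viscosity as a function of shear rate.
    eta_0, eta_inf: zero- and infinite-shear viscosities (Pa s); lam: relaxation
    time (s); n: power-law index. Parameter values below are illustrative only."""
    return eta_inf + (eta_0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

for gamma_dot in (1e2, 1e4, 1e6):   # shear rates (1/s)
    print(gamma_dot, carreau_viscosity(gamma_dot, eta_0=0.04, eta_inf=0.0, lam=1e-5, n=0.6))
```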

Keywords: EHL, linear pressure-viscosity, surface roughness, traction, water/glycol.

248 Design of an Efficient Retimed CIC Compensation Filter

Authors: Vishal Awasthi, Krishna Raj

Abstract:

Unwanted side effects due to spectral aliasing and spectral imaging during signal processing are the major concern in sampling rate alteration. A multirate, multistage implementation of a digital filter can bring about large computational savings compared with a single-rate filter for sample rate conversion. This implementation can be further improved through high-level architectural transformation at the circuit level. Reallocating registers and relocating flip-flops across logic gates through retiming is a prominent sequential transformation technique that optimizes hardware circuits to achieve a faster clock speed without affecting functionality. In this paper, we propose an efficient compensated cascaded integrator-comb (CIC) decimation filter structure with a retimed FIR filter as compensator, using the cutset retiming technique, and analyze the consequences of filter order variation. Compared with the non-retimed CIC compensation filter, the proposed structure achieves an improvement in passband droop of 14% to 39%, an improvement in computation time of 38.04%, 25.78%, 12.21%, 6.69% and 4.44%, and a reduction in path delay of 62.27%, 72%, 86.63%, 91.56% and 94.42% for filters of order 3, 6, 8, 12 and 24, respectively.
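To make the CIC structure concrete, the sketch below implements a minimal floating-point CIC decimator (integrators, decimation, combs); it omits the retimed FIR compensator and any hardware considerations, and the stage count and decimation factor are arbitrary choices.

```python
def cic_decimate(x, decimation, stages=3):
    """Minimal CIC decimator sketch: N cascaded integrators, decimation by R,
    then N cascaded combs (differential delay M = 1). A compensation FIR
    (not shown) would follow to correct the passband droop."""
    # integrator section (running sums at the high input rate)
    for _ in range(stages):
        acc, out = 0.0, []
        for sample in x:
            acc += sample
            out.append(acc)
        x = out
    x = x[::decimation]                                  # rate reduction by R
    # comb section (first differences at the low output rate)
    for _ in range(stages):
        x = [cur - prev for prev, cur in zip([0.0] + x[:-1], x)]
    return [v / decimation ** stages for v in x]         # normalise the DC gain R^N

print(cic_decimate([1.0] * 64, decimation=8)[-3:])       # settles near 1.0 for a DC input
```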

Keywords: Multirate Filtering, CIC decimation filter, Compensation theory, Retiming, Retiming algorithm, Filter order, Synchronous dataflow graph.

247 Formation of Chemical Compound Layer at the Interface of Initial Substances A and B with Dominance of Diffusion of the A Atoms

Authors: Pavlo Selyshchev, Samuel Akintunde

Abstract:

A theoretical approach is developed to describe the formation of a chemical compound layer at the interface between initial substances A and B due to interfacial interaction and diffusion. The situation considered is one in which the speed of the interfacial interaction is large enough and the diffusion of A-atoms through the AB-layer is much greater than the diffusion of B-atoms. Atoms from the A-layer diffuse toward the B-atoms and form AB-atoms on the surface of the B-layer; the B-atoms are assumed to be immobile. The growth kinetics of the AB-layer is described by two differential equations with non-linear coupling, producing a good fit to the experimental data. It is shown that the growth of the thickness of the AB-layer is determined by the dependence of the chemical reaction rate on the reactant concentrations. In special cases the thickness of the AB-layer can grow linearly or parabolically, depending on which of the processes (interaction or diffusion) controls the growth. The thickness of the AB-layer as a function of time is obtained, and the moment of time (transition point) at which the linear growth changes to parabolic is found.
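A commonly used single-equation caricature of such reaction/diffusion-limited layer growth is integrated below to show the linear-to-parabolic transition; the rate constant, diffusion coefficient and the growth law itself are assumptions for illustration, not the authors' coupled equations.

```python
# Illustrative sketch (not the authors' exact model): layer thickness L(t) governed
# by dL/dt = k*D / (D + k*L), i.e. reaction-controlled (linear growth) while
# k*L << D and diffusion-controlled (parabolic growth) once k*L >> D.
k = 1.0e-9      # interfacial reaction-rate constant (m/s), assumed value
D = 1.0e-17     # diffusion coefficient of A through AB (m^2/s), assumed value

L, t, dt = 0.0, 0.0, 10.0
history = []
for _ in range(200_000):                    # explicit Euler integration
    L += dt * k * D / (D + k * L)
    t += dt
    history.append((t, L))

transition_thickness = D / k                # thickness where the regimes cross over
print(f"transition thickness ~ {transition_thickness:.2e} m; final L = {history[-1][1]:.2e} m")
```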

Keywords: Phase formation, Binary systems, Interfacial Reaction, Diffusion, Compound layers, Growth kinetics.

246 Statistical and Land Planning Study of Tourist Arrivals in Greece during 2005-2016

Authors: Dimitra Alexiou

Abstract:

During the last 10 years, in spite of the economic crisis, the number of tourists arriving in Greece has increased, particularly during the tourist season from April to October. In this paper, the number of annual tourist arrivals is studied to explore their preferences with regard to the month of travel, the selected destinations, as well as the amount of money spent. The collected data are processed with statistical methods, yielding numerical and graphical results. From the computation of statistical parameters and the forecasting with exponential smoothing, useful conclusions are reached that can be used by the Greek tourism authorities, as well as by tourist organizations, for planning purposes in the coming years. The results of this paper and the computed forecast can also be used for decision making by private tourist enterprises investing in Greece. With regard to the statistical methods, simple exponential smoothing of the time series data is employed, and the search for the best forecast for 2017 and 2018 provides the value of the smoothing coefficient. For all statistical computations and graphics, Microsoft Excel is used.
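Simple exponential smoothing, as used in the study (there in Excel), reduces to the recursion shown below; the arrival figures in the example are hypothetical, and the loop over alpha mimics the search for the smoothing coefficient.

```python
def simple_exponential_smoothing(series, alpha):
    """Simple exponential smoothing: s_t = alpha*y_t + (1-alpha)*s_{t-1}.
    The last smoothed value serves as the one-step-ahead forecast."""
    s = series[0]
    for y in series[1:]:
        s = alpha * y + (1.0 - alpha) * s
    return s

# Hypothetical annual arrivals (millions); the actual study uses 2005-2016 data.
arrivals = [14.8, 16.0, 16.2, 15.9, 14.9, 15.0, 16.4, 15.5, 17.9, 22.0, 23.6, 24.8]
for alpha in (0.2, 0.5, 0.8):               # search over the smoothing coefficient
    print(alpha, round(simple_exponential_smoothing(arrivals, alpha), 2))
```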

Keywords: Tourism, statistical methods, exponential smoothing, land spatial planning, economy, Microsoft Excel.

245 A Proxy Multi-Signature Scheme with Anonymous Vetoable Delegation

Authors: Pei-yih Ting, Dream-Ming Huang, Xiao-Wei Huang

Abstract:

Frequently a group of people jointly decide and authorize a specific person as a representative on some business/political occasions, e.g., the board of a company authorizes the chief executive officer to close a multi-billion acquisition deal. In this paper, an integrated proxy multi-signature scheme that allows anonymously vetoable delegation is proposed. This protocol integrates mechanisms of private veto, distributed proxy key generation, secure transmission of the proxy key, and an existentially unforgeable proxy multi-signature scheme. First, a provably secure Guillou-Quisquater proxy signature scheme is presented; then the "zero-sharing" protocol is extended over a composite modulus multiplicative group; and finally the above two are combined to realize the GQ proxy multi-signature with anonymously vetoable delegation. As a proxy signature scheme, this protocol protects both the original signers and the proxy signer. The modular design allows simplified implementation with less communication overhead and better computation performance than a general secure multi-party protocol.

Keywords: GQ proxy signature, proxy multi-signature, zero-sharing protocol, secure multi-party protocol, private veto protocol

244 The Change in Management Accounting from an Institutional and Contingency Perspective: A Case Study for a Romanian Company

Authors: Gabriel Jinga, Madalina Dumitru

Abstract:

The objective of this paper is to present the process of change in management accounting in Romania, a former communist country in Eastern Europe. In order to explain this process, we used contingency and institutional theories. We focused on the following directions: the presentation of the scientific context and motivation of this research, and the case study. We present the state of the art of the process of change in management accounting from the international and national perspectives. We also describe the evolution of management accounting in Romania in the context of economic and political changes. An important moment was the fall of communism in 1989, which represents a starting point for a new economic environment and for new management accounting. Accordingly, we developed a case study which presents this evolution. The conclusion of our research is that the changes in the management accounting system of the company analysed occurred at the same time as the institutionalisation of certain elements (e.g. degree of competition, training and competencies in management accounting). The management accounting system was shaped by the contingencies specific to this company (e.g. environment, industry, strategy).

Keywords: Management accounting, change, Romania, contingency and institutional theory.

243 Variation of Spot Price and Profits of Andhra Pradesh State Grid in Deregulated Environment

Authors: Chava Sunil Kumar, P.S. Subrahmanyan, J. Amarnath

Abstract:

In this paper the variation of the spot price and the total profits of the generating companies through wholesale electricity trading are discussed with and without the Central Generating Stations (CGS) share, and seasonal variations are also considered. It demonstrates how proper analysis of generators' efficiencies and capabilities, the types of generators owned, fuel costs, transmission losses and settling price variation, using the solutions of Optimal Power Flow (OPF), can allow companies to maximize overall revenue. It illustrates how solutions of OPF can be used to maximize a company's revenue under different scenarios. The analysis is also extended to the computation of Available Transfer Capability (ATC), which is very important for transmission system security and market forecasting. From these results it is observed how crucial it is for companies to plan their daily operations, which is certainly useful in an online environment of a deregulated power system. In this paper the above tasks are demonstrated on the 124-bus real-life Indian utility power system of the Andhra Pradesh State Grid, and the results have been presented and analyzed.

Keywords: OPF, ATC, Electricity Market, Bid, Spot Price

242 Extended Well-Founded Semantics in Bilattices

Authors: Daniel Stamate

Abstract:

One of the most used assumptions in logic programming and deductive databases is the so-called Closed World Assumption (CWA), according to which the atoms that cannot be inferred from the programs are considered to be false (i.e. a pessimistic assumption). One of the most successful semantics of conventional logic programs based on the CWA is the well-founded semantics. However, the CWA is not applicable in all circumstances when information is handled. That is, the well-founded semantics, if conventionally defined, would behave inadequately in different cases. The solution we adopt in this paper is to extend the well-founded semantics so that it can also be based on other assumptions. The basis of (default) negative information in the well-founded semantics is given by the so-called unfounded sets. We extend this concept by considering optimistic, pessimistic, skeptical and paraconsistent assumptions, used to complete missing information from a program. Our semantics, called the extended well-founded semantics, also expresses imperfect information considered to be missing/incomplete, uncertain and/or inconsistent, by using bilattices as multivalued logics. We provide a method of computing the extended well-founded semantics and show that the Kripke-Kleene semantics is captured by considering a skeptical assumption. We also show that the complexity of the computation of our semantics is polynomial time.

Keywords: Logic programs, imperfect information, multivalued logics, bilattices, assumptions.

241 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa

Abstract:

The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method, which is based on a new coding approach for generating the input-output mapping. The latter is based on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer allows finding the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the need for computers with high memory usage.

Keywords: Neural network computing, information processing, input-output mapping, training time, computers with high memory.

240 Low Complexity Multi Mode Interleaver Core for WiMAX with Support for Convolutional Interleaving

Authors: Rizwan Asghar, Dake Liu

Abstract:

A hardware-efficient, multi-mode, reconfigurable interleaver/de-interleaver architecture for multiple standards, such as DVB, WiMAX and WLAN, is presented. Interleavers consume a large part of the silicon area when implemented using conventional methods, as they use memories to store permutation patterns. In addition, different types of interleavers in different standards cannot share hardware due to their different construction methodologies. The novelty of the work presented in this paper is threefold: 1) mapping of vital types of interleavers, including the convolutional interleaver, onto a single architecture with flexibility to change the interleaver size; 2) reducing the hardware complexity of channel interleaving in WiMAX by using a 2-D realization of the interleaver functions; and 3) reducing silicon cost overheads by avoiding the use of small memories. The proposed architecture consumes 0.18 mm² of silicon area in a 0.12 µm process and can operate at a frequency of 140 MHz. The reduced complexity helps in minimizing memory utilization and, at the same time, provides strong support for on-the-fly computation of permutation patterns.
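On-the-fly address computation of the kind referred to above can be illustrated with a generic row-column block interleaver, where each read address is computed from the write index rather than looked up in a stored table; this is a simplified stand-in, not the exact WiMAX/DVB permutation equations.

```python
def block_interleave_index(k, rows, cols):
    """On-the-fly address computation for a generic row-column block interleaver:
    data are written row-wise into a rows x cols array and read column-wise."""
    r, c = divmod(k, cols)       # position where symbol k is written
    return c * rows + r          # position from which it is read back

n_rows, n_cols = 4, 6
permuted = [block_interleave_index(k, n_rows, n_cols) for k in range(n_rows * n_cols)]
print(permuted)                  # no permutation table stored in memory
```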

Keywords: Hardware interleaver implementation, WiMAX, DVB, block interleaver, convolutional interleaver, hardware multiplexing.

239 Development of Analytical Model of Bending Force during 3-Roller Conical Bending Process and Its Experimental Verification

Authors: Mahesh Chudasama, Harit Raval

Abstract:

Conical sections and shells made from metal plates are widely used in various industrial applications. The 3-roller conical bending process is preferably used to produce such conical sections and shells. The bending mechanics involved in the process are complex, and little work has been done in this area. In the present paper an analytical model is developed to predict the bending force acting during the 3-roller conical bending process. To verify the developed model, conical bending experiments were performed and the analytical and experimental results were compared. The force predicted by the analytical model is in close agreement with the experimental results, with a prediction error of ±10%; hence the model gives quite satisfactory results. The present model is also compared with a previously published bending force prediction model, and it is found that the present model gives better results. The developed model can be used to estimate the bending force during the 3-roller bending process and can be useful to designers of 3-roller conical bending machines.

Keywords: Bending-force, Experimental-verification, Internal-moment, Roll-bending.

238 Collision Detection Algorithm Based on Data Parallelism

Authors: Zhen Peng, Baifeng Wu

Abstract:

Modern computing technology is entering the era of parallel computing with the trend of sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it is able to gather more and more computing capability by increasing the number of processor cores without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications are facing the challenge of increasingly large amounts of data. Data-parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unmatched with respect to traditional algorithms.
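The decomposition idea is sketched below in a vectorised form: each complex object is represented by a set of axis-aligned bounding boxes (simple objects) and all box pairs are tested with the same array operations, which is the data-parallel pattern the paper targets; the box coordinates are invented.

```python
import numpy as np

# Sketch: complex objects decomposed into axis-aligned bounding boxes (simple
# objects); all candidate pairs are tested in one vectorised (SIMD-style) pass.
# Boxes are stored as [xmin, ymin, zmin, xmax, ymax, zmax].
boxes_a = np.array([[0, 0, 0, 1, 1, 1], [2, 2, 2, 3, 3, 3]], dtype=float)
boxes_b = np.array([[0.5, 0.5, 0.5, 1.5, 1.5, 1.5], [5, 5, 5, 6, 6, 6]], dtype=float)

def aabb_overlap(a, b):
    """Pairwise overlap matrix between two sets of boxes, computed with array ops
    so the same instruction is applied to many box pairs at once."""
    a_min, a_max = a[:, None, :3], a[:, None, 3:]
    b_min, b_max = b[None, :, :3], b[None, :, 3:]
    return np.all((a_min <= b_max) & (b_min <= a_max), axis=-1)

print(aabb_overlap(boxes_a, boxes_b))   # [[ True False] [False False]]
```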

Keywords: Data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability.

237 3D Point Cloud Model Color Adjustment by Combining Terrestrial Laser Scanner and Close Range Photogrammetry Datasets

Authors: M. Pepe, S. Ackermann, L. Fregonese, C. Achille

Abstract:

3D models obtained with advanced survey techniques such as close-range photogrammetry and laser scanning are nowadays particularly appreciated in the Cultural Heritage and Archaeology fields. In order to produce high quality models representing archaeological evidence and anthropological artifacts, the appearance of the model (i.e. its color), beyond the geometric accuracy, is not a negligible aspect. The integration of close-range photogrammetry survey techniques with the laser scanner is still a topic of study and research. Combining point cloud data sets of the same object generated with both technologies, or with the same technology but registered at different moments and/or under different natural light conditions, can produce a final point cloud with accentuated color dissimilarities. In this paper, a methodology to make the different data sets uniform, improve the chromatic quality and highlight further details by balancing the point colors is presented.

Keywords: Color models, cultural heritage, laser scanner, photogrammetry, point cloud color.

236 Multi-Objective Optimization for Performance-based Seismic Retrofit using Connection Upgrade

Authors: Dong-Chul Lee, Byung-Kwan Oh, Se-Woon Choi, Hyo-Sun Park

Abstract:

The unanticipated brittle fracture of connections of steel moment resisting frames (SMRF) occurred in the 1994 Northridge earthquake. Since then, research on the vulnerability of connections in existing SMRFs and on the rehabilitation of those buildings has been conducted. This paper suggests a performance-based optimal seismic retrofit technique using connection upgrade. For the optimal design, a multi-objective genetic algorithm (NSGA-II) is used. One of the two objective functions is to minimize the initial cost, and the other is to minimize the lifetime seismic damage cost. The optimal algorithm proposed in this paper is performed while satisfying a specified performance objective based on FEMA 356. Nonlinear static analysis is performed for the structural seismic performance evaluation. A numerical example of the SAC benchmark SMRF is provided using the performance-based optimal seismic retrofit technique proposed in this paper.

Keywords: connection upgrade, performance-based seismic design, seismic retrofit, multi-objective optimization

235 Logistics Model for Improving Quality in Railway Transport

Authors: Eva Nedeliakova, Juraj Camaj, Jaroslav Masek

Abstract:

This contribution is focused on a methodology for identifying levels of quality and improving quality through a new logistics model in railway transport. It is oriented towards the application of dynamic quality models, which represent an innovative method of evaluating service quality. Through this conception, the time factor and the expected and perceived quality at each moment of the transportation process within the logistics chain can be taken into account. Various models describe the improvement of quality with emphasis on the time factor throughout the whole transportation logistics chain. The quality of services in railway transport can be determined from the existing level of service quality, by detecting the causes of dissatisfaction among employees as well as customers, and by uncovering strengths and weaknesses. This new logistics model is able to recognize critical processes in the logistics chain. It includes a service quality rating that must respect the specific properties of services, which are unrepeatability, impalpability, their use right at the time they are provided and, in particular, changeability, which is a significant factor in the conditions of rail transport as well. These peculiarities influence the quality of service with regard to the constantly increasing requirements, and they result in new ways of finding progressive attitudes towards service quality rating.

Keywords: Logistics model, quality, railway transport.

234 Spectral Entropy Employment in Speech Enhancement based on Wavelet Packet

Authors: Talbi Mourad, Salhi Lotfi, Chérif Adnen

Abstract:

In this work, we are interested in developing a speech denoising tool using the discrete wavelet packet transform (DWPT). This speech denoising tool will be employed in recognition, coding and synthesis applications. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the non-stationary noise level, we employ the spectral entropy. A comparison of our proposed technique with classical denoising methods based on thresholding and spectral subtraction is made in order to evaluate our approach. The experimental implementation uses speech signals corrupted by two sorts of noise, white noise and Volvo noise. The results obtained from listening tests show that our proposed technique is better than spectral subtraction. The results obtained from the SNR computation show the superiority of our technique when compared to the classical thresholding method using the modified hard thresholding function based on the µ-law algorithm.
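The spectral entropy used here as a noise-level cue can be computed per frame as the Shannon entropy of the normalised power spectrum, as sketched below on a synthetic "voiced-like" frame and a white-noise frame; the signals and frame length are arbitrary.

```python
import numpy as np

def spectral_entropy(frame, eps=1e-12):
    """Spectral entropy of a signal frame: the power spectrum is normalised to a
    probability distribution and its Shannon entropy is computed. Noise-like
    frames give high entropy, voiced speech gives lower entropy."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    p = power / (power.sum() + eps)
    return -np.sum(p * np.log2(p + eps))

rng = np.random.default_rng(0)
t = np.arange(512) / 8000.0                                  # 512-sample frame at 8 kHz
voiced_like = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
noise = rng.normal(0.0, 1.0, 512)
print(spectral_entropy(voiced_like), spectral_entropy(noise))  # low vs high
```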

Keywords: Enhancement, spectral subtraction, SNR, discrete wavelet packet transform, spectral entropy, histogram

233 A Semi-Fragile Signature based Scheme for Ownership Identification and Color Image Authentication

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and authentication of color images by deploying cryptography and digital watermarking as the underlying technologies. The former is used to compute the content-based hash and the latter to embed the watermark. The host image that will claim to be the rightful owner is first transformed from the RGB to the YST color space, exclusively designed for watermarking-based applications. Geometrically, YS ⊥ T, and the T channel corresponds to the chrominance component of the color image, and is therefore suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks. The block size is important for enhanced localization, security and low computation. Each block, along with the ownership information, is then processed by SHA160, a one-way hash function, to compute the content-based hash, which is always unique and resistant against birthday attacks, instead of using MD5, which may give rise to collisions, i.e. H(m) = H(m'). The watermark payload varies from block to block and is computed by the variance factor α. The quality of the watermarked images is quite high, both subjectively and objectively. Our scheme is blind, computationally fast and exactly locates the tampered region.
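The block-wise content hashing step can be pictured as follows: each 4×4 block of the T channel is concatenated with the ownership information and digested with SHA-1 (the 160-bit hash the scheme calls SHA160), and the block variance stands in for the payload factor α; the toy array and owner string are placeholders.

```python
import hashlib
import numpy as np

def block_hashes(t_channel, owner_id):
    """Content-based hash per 4x4 block of the T channel: each block's bytes are
    concatenated with the ownership information and digested with SHA-1 (160-bit).
    The block variance is returned as a stand-in for the payload factor alpha."""
    h, w = t_channel.shape
    digests = {}
    for r in range(0, h - h % 4, 4):
        for c in range(0, w - w % 4, 4):
            block = t_channel[r:r + 4, c:c + 4]
            payload_strength = block.var()              # variance factor (alpha)
            digest = hashlib.sha1(block.tobytes() + owner_id).hexdigest()
            digests[(r, c)] = (digest, payload_strength)
    return digests

t = np.arange(64, dtype=np.uint8).reshape(8, 8)          # toy 8x8 "T channel"
print(list(block_hashes(t, b"owner-123").items())[0])
```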

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160.

232 Multi-Disciplinary Optimisation Methodology for Aircraft Load Prediction

Authors: Sudhir Kumar Tiwari

Abstract:

The paper demonstrates a methodology that can be used at an early design stage of any conventional aircraft. This research activity assesses the feasibility of deriving a methodology for aircraft load estimation during the various design phases of a transport category aircraft by utilizing the potential of commercial finite element analysis software, which may yield significant time savings. The early design phase has limited data, and quickly changing configurations result in the handling of a large number of load cases. It is useful to idealize the aircraft as a connection of beams, which can be modelled very accurately using finite element analysis (beam elements). This research explores the correct approach towards idealizing an aircraft using beam elements. FEM techniques such as inertia relief were studied for implementation during the course of the work. The correct boundary condition technique is envisaged for the generation of shear force, bending moment and torque diagrams for the aircraft. The possible applications of this approach in the aircraft design process have been investigated.
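As a minimal illustration of the load diagrams mentioned above, the sketch below integrates an assumed running load along a stick (beam) idealisation to obtain shear force and bending moment distributions; the geometry and load values are placeholders, not aircraft data.

```python
import numpy as np

# Sketch: shear force and bending moment along a stick (beam) idealisation,
# obtained by integrating an assumed running load w(x) from the free end inboard.
x = np.linspace(0.0, 30.0, 301)                 # stations along the beam (m)
w = 2000.0 * np.ones_like(x)                    # running load (N/m), illustrative
w[100:140] += 15000.0                           # a region of additional distributed load

dx = x[1] - x[0]
shear = np.cumsum(w[::-1])[::-1] * dx           # V(x) = integral of w from x to the tip
moment = np.cumsum(shear[::-1])[::-1] * dx      # M(x) = integral of V from x to the tip
print(f"root shear = {shear[0]:.0f} N, root bending moment = {moment[0]:.0f} N·m")
```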

Keywords: Multi-disciplinary optimization, aircraft load, finite element analysis, Stick Model.
