Search results for: Numerical computation
1656 Numerical Simulation of the Flowing of Ice Slurry in Seawater Pipe of Polar Ships
Authors: Li Xu, Huanbao Jiang, Zhenfei Huang, Lailai Zhang
Abstract:
In recent years, with global warming, the sea-ice extent of the Arctic has decreased markedly and the Arctic channel has attracted the attention of the shipping industry. Ice crystals present in the seawater of the Arctic channel enter the seawater system of a ship with the intake water and have been found to block the seawater pipes. In serious cases this can lead to cooler failure, auxiliary machinery faults, and even paralysis of the ship's power system. To reduce the effect of high temperature on auxiliary equipment, the seawater system uses the external ice-water mixture in the cooling cycle, and the distribution of ice crystals in the seawater pipe during flow can then be obtained. Since ice slurry is a solid-liquid two-phase system, the flow of the ice-water mixture is complex and diverse. In this paper, the flow of ice slurry in the seawater pipe is simulated with fluid dynamics simulation software based on the k-ε turbulence model. As the ice packing fraction is a key factor affecting the distribution of ice crystals, its influence on the flow of the ice slurry is analyzed. The simulation results show that when the ice packing fraction is relatively large, the distribution of ice crystals in the flowing seawater becomes uneven, which increases the possibility of blockage. This provides a scientific forecasting method for the formation of ice blockages in seawater piping systems and has important significance for the operating reliability of polar ships in the future.
Keywords: Ice slurry, seawater pipe, ice packing fraction, numerical simulation.
PDF Downloads: 1379

1655 A Proxy Multi-Signature Scheme with Anonymous Vetoable Delegation
Authors: Pei-yih Ting, Dream-Ming Huang, Xiao-Wei Huang
Abstract:
Frequently a group of people jointly decide and authorize a specific person to act as their representative in certain business or political occasions, e.g., the board of a company authorizes the chief executive officer to close a multi-billion acquisition deal. In this paper, an integrated proxy multi-signature scheme that allows anonymously vetoable delegation is proposed. This protocol integrates mechanisms of private veto, distributed proxy key generation, secure transmission of the proxy key, and an existentially unforgeable proxy multi-signature scheme. First, a provably secure Guillou-Quisquater (GQ) proxy signature scheme is presented; then the "zero-sharing" protocol is extended over a composite-modulus multiplicative group; and finally the two are combined to realize the GQ proxy multi-signature with anonymously vetoable delegation. As a proxy signature scheme, this protocol protects both the original signers and the proxy signer. The modular design allows a simplified implementation with lower communication overhead and better computational performance than a general secure multi-party protocol.
Keywords: GQ proxy signature, proxy multi-signature, zero-sharing protocol, secure multi-party protocol, private veto protocol.
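The GQ core of the construction can be sketched concretely. Below is a minimal, insecure toy of a plain (non-proxy) Guillou-Quisquater signature in Python, with tiny demonstration primes and an arbitrary value J standing in for a properly formatted identity; it shows the commit-challenge-response algebra that the proxy multi-signature builds on, not the authors' full protocol:

```python
import hashlib

# Toy Guillou-Quisquater (GQ) signature -- insecure demo parameters.
p, q = 1000003, 1000033        # secret primes (far too small for real use)
n = p * q                      # public modulus
v = 65537                      # public exponent, coprime to (p-1)*(q-1)
J = 123456789                  # public value derived from the signer's identity

# Secret key B satisfies J * B^v = 1 (mod n).
phi = (p - 1) * (q - 1)
B = pow(pow(J, -1, n), pow(v, -1, phi), n)
assert (J * pow(B, v, n)) % n == 1

def gq_sign(msg: bytes, r: int = 987654321):
    # r must be fresh and secret for every signature in practice.
    T = pow(r, v, n)                               # commitment
    d = int.from_bytes(hashlib.sha256(T.to_bytes(32, "big") + msg).digest(),
                       "big") % v                  # Fiat-Shamir challenge
    D = (r * pow(B, d, n)) % n                     # response
    return d, D

def gq_verify(msg: bytes, d: int, D: int) -> bool:
    # D^v * J^d = r^v * (B^v * J)^d = r^v = T, so the challenge recomputes.
    T = (pow(D, v, n) * pow(J, d, n)) % n
    d2 = int.from_bytes(hashlib.sha256(T.to_bytes(32, "big") + msg).digest(),
                        "big") % v
    return d == d2

sig = gq_sign(b"close the acquisition deal")
print(gq_verify(b"close the acquisition deal", *sig))   # True
```

In the paper's scheme the proxy key is generated distributively and the veto runs over a composite-modulus zero-sharing, but the verification algebra remains this GQ relation.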
PDF Downloads: 1543

1654 Variation of Spot Price and Profits of Andhra Pradesh State Grid in Deregulated Environment
Authors: Chava Sunil Kumar, P.S. Subrahmanyan, J. Amarnath
Abstract:
In this paper the variation of the spot price and the total profits of generating companies through wholesale electricity trading are discussed, with and without the Central Generating Stations (CGS) share, and seasonal variations are also considered. It demonstrates how proper analysis of generators' efficiencies and capabilities, the types of generators owned, fuel costs, transmission losses, and settling-price variation, using the solutions of Optimal Power Flow (OPF), can allow companies to maximize overall revenue. It illustrates how OPF solutions can be used to maximize companies' revenue under different scenarios. The study is also extended to the computation of Available Transfer Capability (ATC), which is very important for transmission system security and market forecasting. From these results it is observed how crucial it is for companies to plan their daily operations, which is certainly useful in an online environment of a deregulated power system. The above tasks are demonstrated on the 124-bus real-life Indian utility power system of the Andhra Pradesh State Grid, and the results are presented and analyzed.
Keywords: OPF, ATC, electricity market, bid, spot price.
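As a heavily simplified, hypothetical illustration of how a settling (spot) price and company profits emerge from generator costs and capabilities, the sketch below dispatches invented units in merit order; a real study, as in this paper, would solve a full OPF with network constraints, losses, and ATC limits:

```python
# Merit-order dispatch sketch: the spot price is the marginal cost of the
# last (most expensive) unit needed to meet demand. All data are invented,
# and total capacity is assumed to cover demand.
units = [                      # (name, capacity MW, marginal cost $/MWh)
    ("hydro",   300,  5.0),
    ("coal-1",  500, 22.0),
    ("coal-2",  400, 28.0),
    ("gas",     250, 55.0),
]
demand = 1050.0                # MW

dispatch, remaining = {}, demand
for name, cap, cost in sorted(units, key=lambda u: u[2]):
    take = min(cap, remaining)
    dispatch[name] = take
    remaining -= take
    if remaining <= 0:
        spot_price = cost      # price set by the marginal unit
        break

profit = {n: dispatch[n] * (spot_price - c)
          for (n, cap, c) in units if n in dispatch}
print(f"spot price = {spot_price} $/MWh", dispatch, profit)
```

With these numbers the marginal unit is coal-2, so the spot price settles at 28 $/MWh and the cheaper units collect the margin between that price and their own costs.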
PDF Downloads: 1814

1653 Extended Well-Founded Semantics in Bilattices
Authors: Daniel Stamate
Abstract:
One of the most used assumptions in logic programming and deductive databases is the so-called Closed World Assumption (CWA), according to which the atoms that cannot be inferred from a program are considered to be false (i.e., a pessimistic assumption). One of the most successful semantics of conventional logic programs based on the CWA is the well-founded semantics. However, the CWA is not applicable in all circumstances in which information is handled; that is, the well-founded semantics, if conventionally defined, would behave inadequately in various cases. The solution we adopt in this paper is to extend the well-founded semantics so that it can also be based on other assumptions. The basis of (default) negative information in the well-founded semantics is given by the so-called unfounded sets. We extend this concept by considering optimistic, pessimistic, skeptical and paraconsistent assumptions, used to complete missing information in a program. Our semantics, called the extended well-founded semantics, also expresses imperfect information considered to be missing/incomplete, uncertain and/or inconsistent, by using bilattices as multivalued logics. We provide a method of computing the extended well-founded semantics and show that the Kripke-Kleene semantics is captured by considering a skeptical assumption. We also show that the complexity of computing our semantics is polynomial time.
Keywords: Logic programs, imperfect information, multivalued logics, bilattices, assumptions.
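To make the bilattice machinery concrete, here is a small illustrative sketch (our own toy encoding, not the paper's algorithm) of the Kripke-Kleene fixpoint over the bilattice FOUR, with truth values encoded as (evidence-for, evidence-against) pairs; starting every atom at ⊥ = (0, 0) corresponds to the skeptical assumption mentioned above:

```python
# FOUR as pairs (for, against): t=(1,0), f=(0,1), bottom=(0,0), top=(1,1).
BOT = (0, 0)

def t_and(a, b):  # truth-order meet
    return (min(a[0], b[0]), max(a[1], b[1]))

def t_or(a, b):   # truth-order join
    return (max(a[0], b[0]), min(a[1], b[1]))

def t_not(a):     # negation swaps evidence for and against
    return (a[1], a[0])

# Program: head -> list of rule bodies; a literal is ('pos'|'neg', atom).
program = {
    "p": [[("pos", "q")], [("neg", "r")]],   # p <- q.   p <- not r.
    "q": [[("pos", "p")]],                   # q <- p.
    "r": [[]],                               # r.  (a fact: empty body is true)
}

def kripke_kleene(program):
    I = {a: BOT for a in program}            # skeptical start: all unknown
    while True:
        J = {}
        for atom, rules in program.items():
            val = (0, 1)                     # no applicable rule -> false
            for body in rules:
                bv = (1, 0)                  # empty body is true
                for sign, a in body:
                    lit = I.get(a, BOT)
                    bv = t_and(bv, lit if sign == "pos" else t_not(lit))
                val = t_or(val, bv)
            J[atom] = val
        if J == I:
            return I
        I = J

print(kripke_kleene(program))
# {'p': (0, 0), 'q': (0, 0), 'r': (1, 0)} -- r true; p, q stay unknown
```

The positive loop between p and q stays unknown under Kripke-Kleene; it is exactly the kind of unfounded set that the (extended) well-founded semantics would resolve to false under a pessimistic assumption.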
PDF Downloads: 1266

1652 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of several interesting techniques that have been used advantageously to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it reduces the training time during the computation process, which avoids the need for computers with high memory usage.
Keywords: Neural network computing, information processing, input-output mapping, training time, computers with high memory.
PDF Downloads: 1323

1651 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s, until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately', or with allowances for some variability, rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.
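A hedged, minimal illustration of the PF concept for a single planar sliding mechanism with Mohr-Coulomb strength: cohesion and friction angle are sampled (Monte-Carlo) while the geometry stays deterministic. All values are invented; a real assessment would use project-specific distributions and a limit equilibrium or numerical model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000                      # Monte-Carlo trials

# Hypothetical planar failure geometry (deterministic inputs)
W, area, dip = 5000.0, 100.0, np.radians(35.0)   # weight kN, plane area m^2, dip

# Probabilistic strength inputs (normal distributions, truncated at zero)
coh = np.clip(rng.normal(25.0, 8.0, n), 0, None)              # cohesion, kPa
phi = np.radians(np.clip(rng.normal(30.0, 4.0, n), 1, None))  # friction angle

driving = W * np.sin(dip)
resisting = coh * area + W * np.cos(dip) * np.tan(phi)
FS = resisting / driving

print(f"mean FS = {FS.mean():.2f}")
print(f"PF = P(FS < 1) = {(FS < 1).mean():.4f}")
```

Even this toy shows the paper's point: the same mean FS can correspond to very different PF values depending on the spread assumed for the strength inputs.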
PDF Downloads: 1198

1650 Low Complexity Multi Mode Interleaver Core for WiMAX with Support for Convolutional Interleaving
Authors: Rizwan Asghar, Dake Liu
Abstract:
A hardware-efficient, multi-mode, re-configurable architecture of an interleaver/de-interleaver for multiple standards, like DVB, WiMAX and WLAN, is presented. Interleavers consume a large part of the silicon area when implemented with conventional methods, as they use memories to store permutation patterns. In addition, the different types of interleavers in different standards cannot share hardware due to their different construction methodologies. The novelty of the work presented in this paper is threefold: 1) mapping of the vital types of interleavers, including the convolutional interleaver, onto a single architecture with the flexibility to change the interleaver size; 2) reduced hardware complexity for channel interleaving in WiMAX by using a 2-D realization of the interleaver functions; and 3) reduced silicon cost overheads by avoiding the use of small memories. The proposed architecture consumes 0.18 mm² of silicon area in a 0.12 μm process and can operate at a frequency of 140 MHz. The reduced complexity helps to minimize memory utilization and at the same time provides strong support for on-the-fly computation of permutation patterns.
Keywords: Hardware interleaver implementation, WiMAX, DVB, block interleaver, convolutional interleaver, hardware multiplexing.
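The 2-D realization idea can be illustrated with a generic write-by-rows / read-by-columns block interleaver whose permutation is computed on the fly instead of being stored in memory; a minimal sketch (generic block interleaving, not the exact WiMAX channel-interleaver equations):

```python
def interleave_index(k: int, rows: int, cols: int) -> int:
    """On-the-fly 2-D permutation: output position k of a write-by-rows /
    read-by-columns block interleaver takes the input symbol at this index."""
    return (k % rows) * cols + (k // rows)

def interleave(data, rows, cols):
    assert len(data) == rows * cols
    return [data[interleave_index(k, rows, cols)] for k in range(len(data))]

def deinterleave(data, rows, cols):
    # The inverse is the same index mapping with rows and cols swapped.
    return [data[interleave_index(k, cols, rows)] for k in range(len(data))]

msg = list(range(12))
tx = interleave(msg, rows=3, cols=4)
print(tx)                                     # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
assert deinterleave(tx, rows=3, cols=4) == msg
```

Because the address is a pure function of k, no permutation table is needed, which is the property the paper exploits to avoid small memories in hardware.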
PDF Downloads: 2036

1649 Collision Detection Algorithm Based on Data Parallelism
Authors: Zhen Peng, Baifeng Wu
Abstract:
Modern computing technology has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to go along with this trend. It is able to gather more and more computing ability by increasing the number of processor cores without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications are facing the challenge of increasingly large amounts of data. Data parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example. We demonstrate a model for constructing a data parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects; collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unmatched by traditional algorithms.
Keywords: Data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability.
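A hedged sketch of the decomposition idea: complex objects are broken into simple primitives (here axis-aligned bounding boxes), and all pairwise overlap tests run as one data-parallel operation, with NumPy broadcasting standing in for SIMD lanes. The boxes are random stand-ins for primitives obtained from building-model objects:

```python
import numpy as np

# Each complex object decomposes into simple primitives: AABBs given by
# min and max corners, shape (n, 3) each. Values are illustrative.
rng = np.random.default_rng(0)
mins_a = rng.uniform(0, 10, (400, 3)); maxs_a = mins_a + rng.uniform(0.1, 1, (400, 3))
mins_b = rng.uniform(0, 10, (500, 3)); maxs_b = mins_b + rng.uniform(0.1, 1, (500, 3))

# One data-parallel test of all 400 x 500 box pairs (same instruction,
# many data elements): two boxes overlap iff they overlap on every axis.
overlap = np.logical_and(
    mins_a[:, None, :] <= maxs_b[None, :, :],
    maxs_a[:, None, :] >= mins_b[None, :, :],
).all(axis=2)

pairs = np.argwhere(overlap)     # candidate colliding primitive pairs
print(len(pairs), "potentially colliding pairs out of", overlap.size)
```

The same kernel scales by adding more lanes or cores without changing the program, which is the continuous-scalability property the abstract emphasizes.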
PDF Downloads: 1235

1648 Analytical and Numerical Results for Free Vibration of Laminated Composites Plates
Authors: Mohamed Amine Ben Henni, Taher Hassaine Daouadji, Boussad Abbes, Yu Ming Li, Fazilay Abbes
Abstract:
The reinforcement and repair of concrete structures by bonding composite materials have become relatively common operations. Different types of composite materials can be used: carbon fiber reinforced polymer (CFRP), glass fiber reinforced polymer (GFRP), as well as functionally graded material (FGM). The development of analytical and numerical models describing the mechanical behavior of structures in civil engineering reinforced by composite materials is necessary. These models will enable engineers to select, design, and size adequate reinforcements for the various types of damaged structures. This study focuses on the free vibration behavior of orthotropic laminated composite plates using a refined shear deformation theory. In these models, the distribution of transverse shear stresses is considered parabolic, satisfying the zero-shear-stress condition on the top and bottom surfaces of the plates without using shear correction factors. In this analysis, the equation of motion for simply supported thick laminated rectangular plates is obtained by using Hamilton's principle. The accuracy of the developed model is demonstrated by comparing our results with solutions derived from other higher-order models and with data found in the literature. Besides, a finite-element analysis is used to calculate the natural frequencies of laminated composite plates and is compared with those obtained by the analytical approach.
Keywords: Composite materials, laminated composite plate, shear deformation theory of plates, finite element analysis, free vibration.
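For the simply supported case, classical lamination theory (CLPT) already gives a closed-form benchmark against which refined shear-deformation results are often compared. A hedged sketch with illustrative stiffness values (CLPT neglects transverse shear, so the paper's refined theory would predict lower frequencies for thick plates):

```python
import numpy as np

# CLPT natural frequencies of a simply supported specially orthotropic plate:
# w = sin(m*pi*x/a)*sin(n*pi*y/b)*exp(i*omega*t) satisfies the plate equation,
# giving the closed form below. All parameter values are illustrative.
a, b = 0.5, 0.4          # plate planform, m
rho_h = 1600.0 * 0.005   # mass per unit area = density * thickness, kg/m^2
D11, D12, D22, D66 = 120.0, 8.0, 40.0, 10.0   # bending stiffnesses, N*m

def omega(m, n):
    term = (D11 * (m / a) ** 4
            + 2.0 * (D12 + 2.0 * D66) * (m / a) ** 2 * (n / b) ** 2
            + D22 * (n / b) ** 4)
    return np.pi ** 2 * np.sqrt(term / rho_h)

for m in (1, 2):
    for n in (1, 2):
        print(f"mode ({m},{n}): {omega(m, n) / (2 * np.pi):8.1f} Hz")
```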
PDF Downloads: 854

1647 A Numerical Strategy to Design Maneuverable Micro-Biomedical Swimming Robots Based on Biomimetic Flagellar Propulsion
Authors: Arash Taheri, Meysam Mohammadi-Amin, Seyed Hossein Moosavy
Abstract:
Medical applications are among the most impactful areas of microrobotics. The ultimate goal of medical microrobots is to reach currently inaccessible areas of the human body and carry out a host of complex operations such as minimally invasive surgery (MIS), highly localized drug delivery, and screening for diseases at their very early stages. Miniature, safe and efficient propulsion systems hold the key to maturing this technology, but they pose significant challenges. A new type of propulsion, developed recently, uses a multi-flagella architecture inspired by the motility mechanism of prokaryotic microorganisms. Efficient methods for designing this type of propulsion system are lacking. The goal of this paper is to fill this gap, and to that end a numerical strategy for designing multi-flagella propulsion systems is proposed. The strategy is based on the implementation of regularized stokeslet and rotlet theory, resistive force theory (RFT), and the new approach of "local corrected velocity". The effects of the shape parameters and angular velocities of each flagellum on the overall flow field and on the robot's net forces and moments are considered. Then a multi-layer perceptron artificial neural network is designed and employed to adjust the angular velocities of the motors for propulsion control. The proposed method is applied successfully to a sample configuration and useful demonstrative results are obtained.
Keywords: Artificial neural network, biomimetic microrobots, flagellar propulsion, swimming robots.
PDF Downloads: 1911

1646 Spectral Entropy Employment in Speech Enhancement based on Wavelet Packet
Authors: Talbi Mourad, Salhi Lotfi, Chérif Adnen
Abstract:
In this work, we are interested in developing a speech denoising tool using the discrete wavelet packet transform (DWPT). This speech denoising tool will be employed in applications of recognition, coding and synthesis. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the non-stationary noise level, we employ the spectral entropy. A comparison of our proposed technique with classical denoising methods based on thresholding and spectral subtraction is made in order to evaluate our approach. The experimental implementation uses speech signals corrupted by two sorts of noise, white and Volvo noise. The results obtained from listening tests show that our proposed technique is better than spectral subtraction. The results obtained from SNR computation show the superiority of our technique compared to the classical thresholding method using the modified hard-thresholding function based on the u-law algorithm.
Keywords: Enhancement, spectral subtraction, SNR, discrete wavelet packet transform, spectral entropy, histogram.
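A hedged sketch of the node-wise strategy described above, using the PyWavelets package: leaves with negligible energy are zeroed outright and the rest are soft-thresholded, with the threshold scaled by a simple normalized-spectral-entropy score (an illustrative stand-in for the authors' entropy-based noise estimator):

```python
import numpy as np
import pywt

def spectral_entropy(x):
    """Normalized spectral entropy: close to 1 for noise-like (flat) spectra."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / (psd.sum() + 1e-12)
    return -(p * np.log(p + 1e-12)).sum() / np.log(len(p))

def wp_denoise(x, wavelet="db8", level=4):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    # Noise std from the highest-frequency packet (MAD estimator), scaled by
    # spectral entropy: flatter spectrum -> more aggressive thresholding.
    sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(x.size)) * spectral_entropy(x)
    emax = max(np.sum(n.data ** 2) for n in nodes)
    for n in nodes:
        if np.sum(n.data ** 2) < 0.01 * emax:
            n.data = np.zeros_like(n.data)     # zero out negligible nodes
        else:
            n.data = pywt.threshold(n.data, thr, mode="soft")
    return wp.reconstruct(update=False)

t = np.linspace(0, 1, 4096)
clean = np.sin(2 * np.pi * 200 * t) * np.exp(-3 * t)
noisy = clean + 0.3 * np.random.default_rng(1).normal(size=t.size)
den = wp_denoise(noisy)[: t.size]
snr = lambda ref, sig: 10 * np.log10(np.sum(ref**2) / np.sum((sig - ref) ** 2))
print(f"input SNR {snr(clean, noisy):.1f} dB -> output SNR {snr(clean, den):.1f} dB")
```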
PDF Downloads: 1992

1645 Evaluation of Chiller Power Consumption Using Grey Prediction
Authors: Tien-Shun Chan, Yung-Chung Chang, Cheng-Yu Chu, Wen-Hui Chen, Yuan-Lin Chen, Shun-Chong Wang, Chang-Chun Wang
Abstract:
98% of the energy used in Taiwan is imported. The prices of petroleum and electricity have been increasing. In addition, facility capacity, the amount of electricity generated, the amount of electricity consumed and the number of Taiwan Power Company customers have continued to increase. For these reasons energy conservation has become an important topic. In the past, linear regression was used to establish power consumption models for chillers. In this study, grey prediction is used to evaluate the power consumption of a chiller so as to lower the total power consumption at peak load (so that the relevant power providers do not need to keep increasing their power generation capacity and facility capacity). In grey prediction, only a few numerical values (at least four) are needed to establish the power consumption models for chillers. If the PLR, the temperatures of the supply and return chilled water, and the temperatures of the supply and return cooling water are taken into consideration, quite accurate results (with an accuracy close to 99% for short-term predictions) may be obtained. Through such methods, we can predict whether the power consumption at peak load will exceed the contract power capacity agreed between the corresponding entity and Taiwan Power Company. If the power consumption at peak load exceeds the power demand, the temperature of the supply chilled water may be adjusted so as to reduce the PLR and hence lower the power consumption.
Keywords: Grey system theory, grey prediction, chiller.
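A hedged sketch of the GM(1,1) grey-prediction core behind this kind of forecast, which indeed needs only a handful of observations; the sample power readings are invented:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey prediction: fit dx1/dt + a*x1 = b on the accumulated
    series x1, then forecast and difference back to the original series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # mean sequence of x1
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)         # first element equals x0[0]

chiller_kw = [118.0, 123.5, 130.2, 136.8]       # four hourly readings (invented)
pred = gm11_forecast(chiller_kw, steps=2)
print("fit:", np.round(pred[:4], 1), "forecast:", np.round(pred[4:], 1))
```

If the forecast peak-load value approaches the contract capacity, the supply chilled-water temperature can be raised to reduce the PLR, as the abstract describes.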
PDF Downloads: 2579

1644 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets
Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar
Abstract:
Numerical studies have been carried out using a validated two-dimensional standard k-omega turbulence model for the design optimization of a thrust vector control system using a shock-induced self-impinging supersonic secondary double jet. Parametric analytical studies have been carried out at different secondary injection locations to identify the most unsymmetrical distribution of the main gas flow due to shock waves, which produces a desirable side force more effectively for vectoring. The results from the parametric studies of the case on hand reveal that the shock-induced self-impinging supersonic secondary double jet is more efficient at certain locations in the divergent region of a CD nozzle than a case with a supersonic single jet of the same mass flow rate. We observed that the best axial location of the self-impinging supersonic secondary double jet nozzle with a given jet interaction angle, built into a CD nozzle having an area ratio of 1.797, is 0.991 times the primary nozzle throat diameter from the throat location. We also observed that flexible steering is possible after invoking an ON/OFF facility for the secondary nozzles to meet onboard mission requirements. Through our case studies we concluded that the supersonic self-impinging secondary double jet, at a predesigned jet interaction angle and location, can provide more flexible steering options, with 8.81% higher thrust vectoring efficiency than the conventional supersonic single secondary jet, without compromising the payload capability of any supersonic aerospace vehicle.
Keywords: Fluidic thrust vectoring, rocket steering, self-impinging secondary supersonic jet, TVC in aerospace vehicles.
PDF Downloads: 2681

1643 A Semi-Fragile Signature based Scheme for Ownership Identification and Color Image Authentication
Authors: M. Hamad Hassan, S.A.M. Gilani
Abstract:
In this paper, a novel scheme is proposed for ownership identification and authentication of color images by deploying cryptography and digital watermarking as the underlying technologies. The former is used to compute the content-based hash and the latter to embed the watermark. The host image whose owner will claim rightful ownership is first transformed from the RGB to the YST color space, exclusively designed for watermarking-based applications. Geometrically YS ⊥ T, and the T channel corresponds to the chrominance component of a color image, therefore suitable for embedding the watermark. The T channel is divided into 4×4 non-overlapping blocks. The block size is important for enhanced localization, security and low computation. Each block, along with the ownership information, is then processed by SHA160, a one-way hash function, to compute the content-based hash, which is always unique and resistant against the birthday attack, instead of using MD5, which may raise the collision condition H(m) = H(m′). The watermark payload varies from block to block and is computed by the variance factor α. The quality of the watermarked images is quite high, both subjectively and objectively. Our scheme is blind, computationally fast and exactly locates the tampered region.
Keywords: Hash Collision, LSB, MD5, PSNR, SHA160.
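A hedged sketch of the per-block content hash described above (SHA-1 produces the 160-bit digest the paper calls SHA160); the channel here is a random array, and the watermark-embedding step is omitted:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(7)
T = rng.integers(0, 256, (512, 512), dtype=np.uint8)   # T channel (stand-in)
owner = b"owner-id:alice|image-id:42"                  # ownership information

def block_hashes(channel, owner_info, bs=4):
    """160-bit content hash per 4x4 block: small blocks give fine-grained
    tamper localization at modest computational cost."""
    h = {}
    for r in range(0, channel.shape[0], bs):
        for c in range(0, channel.shape[1], bs):
            block = channel[r:r + bs, c:c + bs]
            h[(r, c)] = hashlib.sha1(block.tobytes() + owner_info).digest()
    return h

before = block_hashes(T, owner)
T[100, 100] ^= 0x01                        # tamper with a single pixel
after = block_hashes(T, owner)
tampered = [k for k in before if before[k] != after[k]]
print("tampered blocks:", tampered)        # [(100, 100)] -- one 4x4 block flagged
```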
PDF Downloads: 1563

1642 Optimal Path Planning under Priori Information in Stochastic, Time-varying Networks
Authors: Siliang Wang, Minghui Wang, Jun Hu
Abstract:
A novel path planning approach is presented to find the optimal path in stochastic, time-varying networks under a priori traffic information. Most existing studies make use of dynamic programming to find the optimal path. However, those methods are proved to be unable to obtain the global optimal value; moreover, how to design efficient algorithms is another challenge. This paper employs a decision-theoretic framework for defining the optimal path: for a given source S and destination D in an urban transit network, we seek an S-D path of lowest expected travel time, where the link travel times are discrete random variables. To overcome the deficiencies of dynamic programming, such as the curse of dimensionality and the violation of the optimality principle, an integer programming model is built to realize the assignment of discrete travel time variables to arcs. Simultaneously, pruning techniques are applied to reduce the computational complexity of the algorithm. The final experiments show the feasibility of the novel approach.
Keywords: Pruning method, stochastic, time-varying networks, optimal path planning.
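As a baseline for comparison, here is a hedged sketch of expected-travel-time path search on a tiny invented network with discrete random link times; plain Dijkstra over expectations is exactly the simplification whose limits motivate the paper's integer-programming model (it assumes independent links and ignores time-of-day variation):

```python
import heapq

# Link travel times as discrete random variables: lists of (minutes, prob).
graph = {
    "S": {"A": [(4, 0.5), (8, 0.5)], "B": [(5, 1.0)]},
    "A": {"D": [(6, 0.9), (20, 0.1)]},
    "B": {"D": [(9, 0.8), (11, 0.2)]},
    "D": {},
}

def expected(dist):
    return sum(t * p for t, p in dist)

def dijkstra_expected(graph, src, dst):
    """Lowest expected travel time path (assumes independent, static links)."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in graph[node].items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + expected(dist), nxt, path + [nxt]))
    return float("inf"), []

print(dijkstra_expected(graph, "S", "D"))   # (13.4, ['S', 'A', 'D'])
```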
PDF Downloads: 1854

1641 Data Hiding by Vector Quantization in Color Image
Authors: Yung-Gi Wu
Abstract:
With the growth of computers and networks, digital data can spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so security has become an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data. Embedding the watermark certainly influences the quality. In this paper, vector quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, which means that users will not be conscious of the existence of the embedded watermark, even though the embedded image differs only slightly from the original image. Meanwhile, VQ carries a heavy computational burden, so we adopt a fast VQ encoding scheme using partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks we hide in the image can be gray-level, bi-level or color images; text can also be regarded as a watermark to embed. In order to test the robustness of the system, we use Photoshop to apply sharpening, cropping and alteration to check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
Keywords: Data hiding, vector quantization, watermark.
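A hedged sketch of the partial distortion search (PDS) speed-up for VQ encoding: the squared error is accumulated dimension by dimension, and a codeword is abandoned as soon as the running sum exceeds the best distortion so far. The codebook and image blocks are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
codebook = rng.uniform(0, 255, (256, 16))   # 256 codewords for 4x4 blocks
blocks = rng.uniform(0, 255, (1000, 16))    # image blocks as 16-dim vectors

def pds_encode(block, codebook):
    """Nearest-codeword search with partial distortion elimination."""
    best_idx, best_d = -1, np.inf
    for i, cw in enumerate(codebook):
        d = 0.0
        for k in range(len(block)):          # accumulate per dimension
            d += (block[k] - cw[k]) ** 2
            if d >= best_d:                  # early exit: cannot win anymore
                break
        else:
            best_idx, best_d = i, d          # loop completed -> new best
    return best_idx

indices = [pds_encode(b, codebook) for b in blocks[:20]]
print(indices)
```

The early exit preserves the exact nearest-neighbor result while skipping most of the multiply-accumulate work, which is why PDS suits the heavy VQ load mentioned above.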
PDF Downloads: 1776

1640 Palmprint based Cancelable Biometric Authentication System
Authors: Ying-Han Pang, Andrew Teoh Beng Jin, David Ngo Chek Ling
Abstract:
A cancelable palmprint authentication system proposed in this paper is specifically designed to overcome the limitations of contemporary biometric authentication systems. In the proposed system, geometric and pseudo-Zernike moments are employed as feature extractors to transform the palmprint image into a lower-dimensional, compact feature representation. Before moment computation, a wavelet transform is adopted to decompose the palmprint image into lower-resolution and lower-dimensional frequency subbands. This drastically reduces the computational load of the moment calculation. The generated wavelet-moment-based feature representation is used to generate a cancelable verification key with a set of random data. This private binary key can be canceled and replaced. Besides that, the key also possesses high data-capture offset tolerance, with highly correlated bit strings for the intra-class population. This property allows a clear separation of the genuine and imposter populations, as well as the achievement of zero Equal Error Rate, which is hardly attained in conventional biometric-based authentication systems.
Keywords: Cancelable biometric authenticator, discrete hashing, moments, palmprint.
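The cancelable-key step can be sketched with random-projection hashing of the kind the abstract alludes to: the moment feature vector is projected onto a token-seeded random basis and binarized, so a compromised key is revoked simply by reissuing the token. The feature vector below is random, and the zero-threshold binarization is an assumption of this sketch:

```python
import numpy as np

def cancelable_key(features, token_seed, bits=64):
    """Project the feature vector onto a token-seeded random orthonormal
    basis and binarize; the key is canceled by issuing a new token."""
    rng = np.random.default_rng(token_seed)
    R = rng.normal(size=(bits, features.size))
    Q, _ = np.linalg.qr(R.T)                 # orthonormalize the projections
    return (features @ Q > 0).astype(np.uint8)

feat = np.random.default_rng(5).normal(size=128)   # wavelet-moment features (stand-in)
key1 = cancelable_key(feat, token_seed=1001)
key2 = cancelable_key(feat + 0.05 * np.random.default_rng(6).normal(size=128),
                      token_seed=1001)             # same user, noisy recapture
print("intra-class Hamming distance:", int(np.sum(key1 != key2)), "of", key1.size)
print("new token bits differing   :", int(np.sum(key1 != cancelable_key(feat, 2002))))
```

A small recapture perturbation flips few bits (the offset tolerance mentioned above), while a new token yields an essentially independent key.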
PDF Downloads: 1565

1639 Faster FPGA Routing Solution using DNA Computing
Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon
Abstract:
There are many classical algorithms for finding routing in FPGAs, but using DNA computing we can solve the routes efficiently and quickly. The run-time complexity of DNA algorithms is much lower than that of the classical algorithms used for solving routing in FPGAs. Research in DNA computing is at an early stage. The high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool. Many research accomplishments have shown that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper we propose a two-tier approach for the FPGA routing solution. First, the geometric FPGA detailed routing task is solved by transforming it into a Boolean satisfiability equation with the property that any assignment of input variables that satisfies the equation specifies a valid routing. A satisfying assignment for a particular route results in a valid routing, and the absence of a satisfying assignment implies that the layout is unroutable. In the second step, a DNA search algorithm is applied to this Boolean equation to solve for routing alternatives, utilizing the properties of DNA computation. The simulated results are satisfactory and indicate the applicability of DNA computing for solving the FPGA routing problem.
Keywords: FPGA, routing, DNA computing.
PDF Downloads: 1592

1638 Modeling and Simulation of Two-Phase Interleaved Boost Converter Using Open-Source Software Scilab/Xcos
Authors: Yin Yin Phyo, Tun Lin Naing
Abstract:
This paper investigates the simulation of a two-phase interleaved boost converter (IBC) with the free and open-source software Scilab/Xcos. By using the interleaving method, current stress on components, component size, input current ripple and output voltage ripple can all be reduced. The required mathematical model is obtained from the equivalent circuits of its four different modes of operation for simulation. The equivalent circuits are considered in continuous conduction mode (CCM). The average values of the system variables are derived from the state-space equations to find the equilibrium point. Scilab is becoming more and more popular among students, engineers and scientists because it is open-source software and free of charge. It offers great convenience because it has powerful computation and simulation functions. The waveforms of the output voltage, input current and inductor currents are obtained by using Scilab/Xcos.
Keywords: Two-phase boost converter, continuous conduction mode, free and open-source, interleaved method, dynamic simulation.
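A hedged sketch of the averaged CCM model behind such an Xcos diagram (written in Python here rather than Scilab): each phase inductor sees Vin minus the average diode-side voltage, the shared output capacitor collects both phase currents, and forward-Euler integration reproduces the equilibrium Vo = Vin/(1-D). Parameter values are illustrative:

```python
# Averaged CCM model of a two-phase interleaved boost converter (sketch).
Vin, D = 12.0, 0.6                   # input voltage, duty cycle
L1 = L2 = 200e-6                     # phase inductances, H
C, R = 470e-6, 10.0                  # output capacitor, load resistance
dt, T = 1e-6, 0.05                   # Euler step and simulated time, s

iL1 = iL2 = vo = 0.0
for _ in range(int(T / dt)):
    diL1 = (Vin - (1 - D) * vo) / L1         # averaged inductor dynamics
    diL2 = (Vin - (1 - D) * vo) / L2
    dvo = ((1 - D) * (iL1 + iL2) - vo / R) / C
    iL1 += diL1 * dt; iL2 += diL2 * dt; vo += dvo * dt

print(f"vo = {vo:.2f} V   (theory Vin/(1-D) = {Vin / (1 - D):.2f} V)")
print(f"iL per phase = {iL1:.2f} A (theory {Vin / (1 - D)**2 / R / 2:.2f} A)")
```

Because the averaged model ignores switching ripple, each phase carries half the input current, which is the current-stress reduction the abstract credits to interleaving.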
PDF Downloads: 944

1637 Dynamic Analysis of Porous Media Using Finite Element Method
Authors: M. Pasbani Khiavi, A. R. M. Gharabaghi, K. Abedi
Abstract:
The mechanical behavior of porous media is governed by the interaction between the solid skeleton and the fluid existing inside the pores. The interaction occurs through the interface of grains and fluid. The traditional analysis methods for porous media, based on effective stress and Darcy's law, are unable to account for these interactions. For an accurate analysis, the porous medium is represented as a fluid-filled porous solid on the basis of the Biot theory of wave propagation in poroelastic media. In the Biot formulation, the equations of motion of the soil mixture are coupled with the global mass balance equations to describe the realistic behavior of porous media. Because of irregular geometry, the domain is generally treated as an assemblage of finite elements. In this investigation, the numerical formulation of the field equations governing the dynamic response of fluid-saturated porous media is analyzed and employed for the study of transient wave motion. A finite element model is developed and implemented in a computer code called DYNAPM for the dynamic analysis of porous media. The weighted residual method with 8-node elements is used to develop the finite element model, and the analysis is carried out in the time domain considering dynamic excitation and gravity loading. A Newmark time integration scheme, an unconditionally stable implicit method, is developed to solve the time-discretized equations. Finally, some numerical examples are presented to show the accuracy and capability of the developed model for a wide variety of porous media behaviors.
Keywords: Dynamic analysis, Interaction, Porous media, time domain
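The Newmark scheme named above can be sketched on a single degree of freedom; with γ = 1/2, β = 1/4 (average acceleration) the method is unconditionally stable, which is what makes it attractive for the stiff coupled poroelastic equations. Parameters are illustrative:

```python
# Newmark time integration (gamma=1/2, beta=1/4) for m*u'' + c*u' + k*u = f,
# here with a suddenly applied constant load on a single-DOF system.
m, c, k = 2.0, 16.0, 800.0
gamma, beta = 0.5, 0.25
dt, nsteps, f = 0.002, 2000, 10.0

u, v = 0.0, 0.0
a = (f - c * v - k * u) / m                   # consistent initial acceleration
keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)

for _ in range(nsteps):
    # Effective load assembled from the known state at time n
    rhs = (f
           + m * (u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1) * a)
           + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                  + dt * (gamma / (2 * beta) - 1) * a))
    u_new = rhs / keff
    a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new

print(f"u -> {u:.5f} (static f/k = {f/k:.5f})")   # settles to the static deflection
```

In the full DYNAPM-style code the scalars m, c, k become the assembled mass, damping and stiffness matrices and the division by keff becomes a linear solve per step.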
PDF Downloads: 1876

1636 A Simple Deterministic Model for the Spread of Leptospirosis in Thailand
Authors: W. Triampo, D. Baowan, I.M. Tang, N. Nuttavut, J. Wong-Ekkabut, G. Doungchawee
Abstract:
In this work, we consider a deterministic model for the transmission of leptospirosis, which is currently spreading in the Thai population. The SIR model, which incorporates the features of this disease, is applied to the epidemiological data in Thailand. It is seen that the numerical solutions of the SIR equations are in good agreement with real empirical data. Further improvements are discussed.
Keywords: Leptospirosis, SIR model, deterministic model, Thailand.
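A hedged sketch of the deterministic SIR machinery the paper applies (generic SIR with invented parameter values, not the fitted Thai data):

```python
import numpy as np
from scipy.integrate import odeint

# Classic SIR model: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I,
# dR/dt = gamma*I. Parameter values below are illustrative only.
N = 60_000_000            # population size
beta, gamma = 0.30, 0.10  # transmission and recovery rates (1/day)

def sir(y, t):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    return [dS, dI, -dS - dI]   # dR = gamma*I, since the total is conserved

t = np.linspace(0, 365, 366)
S, I, R = odeint(sir, [N - 100, 100, 0], t).T
print(f"R0 = {beta/gamma:.1f}, peak infectives = {I.max():.0f} on day {I.argmax()}")
```

Fitting beta and gamma to case data is what ties such a model to the Thai epidemiological records discussed in the paper.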
PDF Downloads: 1987

1635 Characterization of Printed Reflectarray Elements on Variable Substrate Thicknesses
Authors: M. Y. Ismail, Arslan Kiyani
Abstract:
Narrow bandwidth and high loss performance limit the use of reflectarray antennas in some applications. This article reports on the feasibility of employing strategic reflectarray resonant elements to characterize the reflectivity performance of reflectarrays in the X-band frequency range. Strategic reflectarray resonant elements incorporating variable substrate thicknesses ranging from 0.016λ to 0.052λ have been analyzed in terms of reflection loss and reflection phase performance. The effect of substrate thickness has been validated by using the waveguide scattering parameter technique. It has been demonstrated that as the substrate thickness is increased from 0.508 mm to 1.57 mm, the measured reflection loss of the dipole element decreases from 5.66 dB to 3.70 dB, with an increase in 10% bandwidth from 39 MHz to 64 MHz. Similarly, the measured reflection loss of the triangular loop element decreases from 20.25 dB to 7.02 dB, with an increase in 10% bandwidth from 12 MHz to 23 MHz. The results also show a significant decrease in the slope of the reflection phase curve. A Figure of Merit (FoM) has also been defined for comparing the static phase range of the resonant elements under consideration. Moreover, a novel numerical model based on analytical equations has been established, incorporating the material properties of the dielectric substrate and the electrical properties of the different reflectarray resonant elements, to obtain the progressive phase distribution for each individual reflectarray resonant element.
Keywords: Numerical model, Reflectarray resonant elements, Scattering parameter measurements, Variable substrate thickness.
PDF Downloads: 1723

1634 Simplified Stress Gradient Method for Stress-Intensity Factor Determination
Authors: Jeries J. Abou-Hanna
Abstract:
Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. Overly conservative results, numerical methods that require extensive computational effort, and techniques requiring copious user parameters hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the utilization of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum-stress-value approach, but requires the use of a critical volume in which the crack exists. In order to understand the effectiveness of this technique, this study investigated components of different notch geometry and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the 'exponential' weighting function was superior to the 'absolute' weighting function. An error band of ±10% was met for cases ranging from a steep stress gradient in a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has shown to be a worthwhile consideration.
Keywords: Fracture mechanics, finite element method, stress intensity factor, stress gradient.
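The general weighting-function idea can be illustrated with the classical Griffith (center) crack, for which the exact weight function is known: K_I = 2√(a/π) ∫₀ᵃ σ(x)/√(a² − x²) dx. This is a textbook example standing in for the paper's own (exponential/absolute) weighting functions; the sketch recovers σ√(πa) for uniform stress and then evaluates a steep, notch-type stress gradient:

```python
import numpy as np

def K_I(sigma, a, n=10_000):
    """K_I for a Griffith (center) crack of half-length a under a symmetric
    stress profile sigma(x): K = 2*sqrt(a/pi) * int_0^a sigma/sqrt(a^2-x^2) dx.
    The substitution x = a*sin(u) removes the crack-tip singularity."""
    u = np.linspace(0.0, np.pi / 2, n)
    return 2.0 * np.sqrt(a / np.pi) * np.trapz(sigma(a * np.sin(u)), u)

a = 0.01                                             # crack half-length, m
uniform = lambda x: 100e6 * np.ones_like(x)          # 100 MPa uniform stress
decaying = lambda x: 100e6 * np.exp(-x / 0.004)      # steep notch-type gradient

print(f"uniform : {K_I(uniform, a)/1e6:.2f} MPa*sqrt(m) "
      f"(exact sigma*sqrt(pi*a) = {100e6*np.sqrt(np.pi*a)/1e6:.2f})")
print(f"gradient: {K_I(decaying, a)/1e6:.2f} MPa*sqrt(m)")
```

Weighting the local stress over the crack, rather than taking the peak value, is exactly the critical-volume reasoning argued for above.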
PDF Downloads: 766

1633 FEA for Transient Responses of an S-Shaped Force Transducer with a Viscoelastic Absorber Using a Nonlinear Complex Spring
Authors: T. Yamaguchi, Y. Fujii, A. Takita, T. Kanai
Abstract:
To compute the dynamic characteristics of nonlinear viscoelastic springs attached to elastic structures with huge numbers of degrees of freedom, Yamaguchi proposed a new fast numerical method using the finite element method [1]-[2]. In this method, the restoring forces of the springs are expressed using power series of their elongation, and nonlinear hysteresis damping and nonlinear complex spring constants are introduced into this expression. A finite element for the nonlinear spring with complex coefficients is formulated and connected to the elastic structures modeled by linear solid finite elements. Further, to save computational time, the discrete equations in physical coordinates are transformed into nonlinear ordinary coupled equations using normal coordinates corresponding to the linear natural modes. In this report, the proposed method is applied to the simulation of the impact response of a viscoelastic shock absorber with an elastic structure (an S-shaped structure) struck by a concentrated mass having an initial velocity. The accelerations of the elastic structure and the concentrated mass are measured using the Levitation Mass Method proposed by Fujii [3]. The accelerations calculated by the proposed FEM correspond to the experimental ones. Moreover, using this method, we also investigate the dynamic errors of the S-shaped force transducer due to an elastic mode in the S-shaped structure.
Keywords: Transient response, Finite Element analysis, Numerical analysis, Viscoelastic shock absorber, Force transducer.
PDF Downloads: 1756

1632 EEG Correlates of Trait and Mathematical Anxiety during Lexical and Numerical Error-Recognition Tasks
Authors: Alexander N. Savostyanov, Tatiana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Tatiana A. Golovko, Yulia V. Kovas
Abstract:
EEG correlates of mathematical and trait anxiety level were studied in 52 healthy Russian speakers during the execution of error-recognition tasks with lexical, arithmetic and algebraic conditions. Event-related spectral perturbations (ERSP) were used as a measure of brain activity. The ERSP plots revealed alpha/beta desynchronizations within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three conditions. The correlates of anxiety were found in the theta (4-8 Hz) and beta2 (16-20 Hz) frequency bands. In the theta band, the effects of mathematical anxiety were more strongly expressed in the lexical than in the arithmetic and algebraic conditions. The mathematical anxiety effects in the theta band were associated with differences between anterior and posterior cortical areas, whereas the effects of trait anxiety were associated with inter-hemispheric differences. In the beta1 and beta2 bands, the effects of trait and mathematical anxiety were directed oppositely: trait anxiety was associated with an increase in the amplitude of desynchronization, whereas mathematical anxiety was associated with a decrease in this amplitude. The effect of mathematical anxiety in the beta2 band was insignificant for the lexical condition but was strongest in the algebraic condition. The EEG correlates of anxiety in the theta band can be interpreted as indexes of task emotionality, whereas the reaction in the beta2 band is related to the tension of intellectual resources.
Keywords: EEG, brain activity, lexical and numerical error-recognition tasks, mathematical and trait anxiety.
PDF Downloads: 1936

1631 Improving Cryptographically Generated Address Algorithm in IPv6 Secure Neighbor Discovery Protocol through Trust Management
Authors: M. Moslehpour, S. Khorsandi
Abstract:
As the transition to widespread use of IPv6 addresses has gained momentum, IPv6 has been shown to be vulnerable to certain security attacks, such as those targeting the Neighbor Discovery Protocol (NDP), which provides the address resolution functionality in IPv6. To protect this protocol, Secure Neighbor Discovery (SEND) was introduced. This protocol uses Cryptographically Generated Addresses (CGA) and asymmetric cryptography as a defense against threats to the integrity and identity of NDP. Although SEND protects NDP against attacks, it is computationally intensive due to the Hash2 condition in CGA. To improve the CGA computation speed, we parallelized the CGA generation process and used the available resources in a trusted network. Furthermore, we focused on the influence of malicious nodes on the overall load of non-malicious ones in the network. According to the evaluation results, malicious nodes have an adverse impact on the average CGA generation time and on the average number of tries. We utilized a trust management system that is capable of detecting and isolating malicious nodes to remove possible incentives for malicious behavior. We have demonstrated the effectiveness of the trust management system in detecting malicious nodes and hence improving the overall system performance.
Keywords: NDP, SEND, CGA, modifier, malicious node.
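The Hash2 condition that dominates the cost can be sketched as a parallel brute-force search over the modifier, following the shape of RFC 3972 (SHA-1 over modifier || 9 zero octets || public key, with 16·Sec leading zero bits); the key bytes are a stand-in and the encoding details are simplified:

```python
import hashlib
from multiprocessing import Pool

SEC = 1                                   # 16*Sec leading zero bits required
PUBKEY = b"stand-in for the DER-encoded public key"
WORKERS = 4

def search(start, limit=5_000_000):
    """Scan modifiers start, start+WORKERS, ... until Hash2 satisfies the
    Sec condition; striding partitions the search space across processes."""
    zeros = 2 * SEC                        # 16*Sec bits = 2*Sec whole bytes
    for m in range(start, limit, WORKERS):
        modifier = m.to_bytes(16, "big")
        h2 = hashlib.sha1(modifier + b"\x00" * 9 + PUBKEY).digest()
        if h2[:zeros] == b"\x00" * zeros:
            return m
    return None

if __name__ == "__main__":
    with Pool(WORKERS) as pool:
        for result in pool.imap_unordered(search, range(WORKERS)):
            if result is not None:         # first hit wins
                print("modifier found:", result)
                pool.terminate()
                break
```

With Sec = 1 the expected 2^16 tries finish almost immediately; each increment of Sec multiplies the expected work by 2^16, which is why distributing the search across trusted nodes pays off.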
PDF Downloads: 1206

1630 A Materialized View Approach to Support Aggregation Operations over Long Periods in Sensor Networks
Authors: Minsoo Lee, Julee Choi, Sookyung Song
Abstract:
The increasing interest in processing data created by sensor networks has evolved into approaches that implement sensor networks as databases. The aggregation operator, which calculates a value from a large group of data, such as an average or a sum, is an essential function that needs to be provided when implementing such sensor network databases. This work proposes adding a DURING clause to TinySQL to calculate values over a specific long period, and suggests a way to implement the aggregation service in sensor networks by applying the materialized view and incremental view maintenance techniques used in data warehouses. In sensor networks, data values are passed from child nodes to parent nodes, and an aggregate value is computed at the root node. As such root nodes need to be memory-efficient and low-powered, recomputing aggregate values from all past and current data becomes a problem. Therefore, applying incremental view maintenance techniques can reduce memory consumption and support fast computation of aggregate values.
Keywords: Aggregation, incremental view maintenance, materialized view, sensor network.
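A hedged, minimal sketch of the incremental-maintenance idea at the root node: the materialized AVG view stores only a (sum, count) pair, so each arriving reading is applied as an O(1) delta instead of recomputing over all past data. The TinySQL-style query in the comment is an illustration of the proposed DURING syntax, not a fixed specification:

```python
# Materialized AVG view maintained incrementally at the root node:
# keep (sum, count) rather than raw readings.
class RunningAvgView:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def insert(self, reading: float):       # delta applied on arrival
        self.total += reading
        self.count += 1

    def average(self) -> float:
        return self.total / self.count if self.count else float("nan")

# e.g. SELECT AVG(temp) FROM sensors DURING last_30_days  (illustrative syntax)
view = RunningAvgView()
for reading in [21.4, 21.9, 22.3, 21.7]:    # readings forwarded by child nodes
    view.insert(reading)
print(view.average())                        # 21.825
```

The same pattern extends to SUM, COUNT, MIN and MAX (the last two need slightly more state when deletions or window expiry are involved).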
PDF Downloads: 1540

1629 Influence of Dilution and Lean-premixed on Mild Combustion in an Industrial Burner
Authors: Sh.Khalilarya, H.Oryani, S.Jafarmadar, H.Khatamnezhad, A.Nemati
Abstract:
Understanding how and where NOx formation occurs in an industrial burner is very important for the efficient and clean operation of utility burners. The importance of this problem is mainly due to its relation to the pollutants produced by the burners widely used in gas turbines, in thermal power plants, and in the glass and steel industries. In this article, a numerical model of an industrial burner operating in MILD combustion is validated with experimental data. Then the influence of the air flow rate and air temperature on the combustor temperature profiles and NOx production is investigated. To extend the study, the effects of fuel and air dilution (with the inert gases H2O, CO2 and N2), as well as the influence of lean premixing of the fuel, on the temperature profiles and NOx emission are reported. The conservation equations of mass, momentum and energy, the transport equations of species concentrations, turbulence, combustion and radiation modeling, and the NO modeling equations were solved together to obtain the temperature and NO distributions inside the burner. The results show that dilution causes a reduction in temperature and NOx emission, suppresses any flame propagation inside the furnace, and makes the flame inside the furnace invisible. Dilution with H2O, rather than N2 or CO2, decreases the NOx further. Also, as the lean-premix level rises, the local temperature of the burner and the NOx production decrease, because premixing prevents local "hot spots" within the combustor volume that can lead to significant NOx formation. Lean premixing of the fuel with air also causes the amount of air in the reaction zone to exceed what is actually needed to burn the fuel, and this limits NOx formation.
Keywords: MILD combustion, flameless, numerical simulation, burner, CFD.
PDF Downloads: 1776

1628 Measurement of Operational and Environmental Performance of the Coal-Fired Power Plants in India by Using Data Envelopment Analysis
Authors: Vijay Kumar Bajpai, Sudhir Kumar Singh
Abstract:
In this study, performance analyses of twenty-five coal-fired power plants (CFPPs) used for electricity generation are carried out through various Data Envelopment Analysis (DEA) models. Three efficiency indices are defined and pursued. In the calculation of operational performance, energy and non-energy variables are used as inputs and the net electricity produced is used as the desired output (Model-1). CO2 emitted to the environment is used as the undesired output (Model-2) in the computation of pure environmental performance, while in Model-3 CO2 emission is considered a detrimental input in the calculation of combined operational and environmental performance. Empirical results show that most of the plants are operating in the increasing-returns-to-scale region, and the Mettur plant is the efficient one with regard to energy use and the environment. The results also indicate that the undesirable output effect is insignificant in the research sample. The present study provides clues to plant operators for raising the operational and environmental performance of CFPPs.
Keywords: Coal fired power plants, environmental performance, data envelopment analysis, operational performance.
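A hedged sketch of the input-oriented CCR model that underlies this kind of DEA study: for each plant o, the linear program finds the smallest θ such that a convex combination of all plants uses at most θ times plant o's inputs while producing at least its output. The four-plant data set is invented:

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[580., 1.9], [600., 2.1], [540., 1.7], [700., 2.6]])  # inputs: coal kt, aux GWh
Y = np.array([[1450.], [1400.], [1380.], [1500.]])                  # output: net GWh

def ccr_efficiency(o):
    n = X.shape[0]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j * x_ij - theta * x_io <= 0       (input constraints)
    A_in = np.c_[-X[o][:, None], X.T]
    b_in = np.zeros(X.shape[1])
    # -sum_j lambda_j * y_rj <= -y_ro                 (output constraints)
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for o in range(X.shape[0]):
    print(f"plant {o}: CCR efficiency = {ccr_efficiency(o):.3f}")
```

Plants scoring 1.0 lie on the efficient frontier (the role Mettur plays in the paper's sample); scores below 1.0 indicate how far inputs could be contracted proportionally.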
PDF Downloads: 2361

1627 Risk Assessment in Durations and Costs for Construction of Industrial Facilities in Egypt Using Equations and Computer
Authors: M. Kamal Elbokl, Negadi Kheira
Abstract:
Risk evaluation is an important step in protecting your workers and your business, as well as complying with the law. It helps you focus on the risks that really matter in your workplace – the ones with the potential to cause real harm. In this paper we introduce the basics of risk assessment and then describe some ways to evaluate risk by computer, especially Monte Carlo simulation and Microsoft Project.
We use the Program Evaluation and Review Technique (PERT) to evaluate and assess risks in industrial facilities. Using the PERT technique in Microsoft Project via the PERT toolbar, and using the PERTMASTER program with the Primavera program, we evaluate many hazards and perform the corresponding calculations with mathematical equations in order to make the right decisions. We define and calculate the risk factor and the risk severity to rank each type of risk, and then deal with it in several ways, such as probability computation, curves, and tables. By introducing variables into the equations in computer programs, we calculate the risk in time and cost for the general case, and then present some examples from the industrial facilities field.
Keywords: Risk, Industrial Facilities, PERT, Monte Carlo Simulation.
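A hedged sketch of the PERT arithmetic with a Monte Carlo cross-check: the three-point estimates give te = (o + 4m + p)/6 and sd = (p − o)/6 per activity, and the probability of missing a target date follows from the normal approximation on the critical path. The activities and durations are invented:

```python
import numpy as np
from math import erf

# PERT three-point estimates for a small critical path (durations in days).
activities = [            # (optimistic, most likely, pessimistic)
    (10, 14, 22),         # excavation
    (20, 25, 40),         # structural frame
    (8, 10, 18),          # equipment installation
]

te  = sum((o + 4 * m + p) / 6 for o, m, p in activities)
var = sum(((p - o) / 6) ** 2 for o, m, p in activities)
sd  = var ** 0.5

# Risk of missing a 55-day target via the usual normal approximation.
target = 55.0
p_pert = 0.5 * (1 + erf((target - te) / (sd * 2 ** 0.5)))
print(f"PERT: mean={te:.1f} d, sd={sd:.2f} d, P(T<={target:.0f}) = {p_pert:.2f}")

# Monte Carlo cross-check; triangular distributions stand in for PERT's beta,
# so skewed estimates make this run slightly more pessimistic than the formula.
rng = np.random.default_rng(0)
sims = sum(rng.triangular(o, m, p, 100_000) for o, m, p in activities)
print(f"Monte Carlo: P(T<={target:.0f}) = {(sims <= target).mean():.2f}")
```

The same machinery applies to costs by replacing durations with three-point cost estimates per work package.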
PDF Downloads: 1954