Search results for: linear estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2659


409 A Comparison of Air Pollution in Developed and Developing Cities: A Case Study of London and Beijing

Authors: S. X. Sun, Q. Wang

Abstract:

With the rapid development of industrialization, countries at different stages of development have gradually begun to pay attention to the impact of air pollution on health and the environment. Air quality control in developed countries provides an effective reference for air quality control in developing countries, and artificial intelligence techniques also play a positive role in air pollution prediction. By comparing the annual variation of pollution in London and Beijing, this paper concludes that pollution in the developed city is relatively low and stable, while pollution in Beijing is heavier and less stable but clearly improving. In addition, by analyzing the changes of the major pollutants in Beijing over the past eight years, it is concluded that all pollutants except O3 show a significant downward trend and that all pollutants except O3 are correlated with one another to some degree; PM10 and PM2.5 have the greatest influence on the air quality index (AQI). Python, a language commonly used in artificial intelligence work, is used to build two models, a support vector machine (SVM) and a linear regression. By comparing the two models under the same conditions, it is concluded that the SVM achieves higher accuracy in pollution prediction. The results of this study provide a valuable reference for pollution control and prediction in developing countries.
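
A minimal sketch (not the authors' code) of the model comparison the abstract describes, using scikit-learn; the data file and the pollutant column names are hypothetical placeholders.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical daily air-quality records: pollutant concentrations and AQI.
df = pd.read_csv("beijing_air_quality.csv")
X = df[["PM2.5", "PM10", "SO2", "NO2", "CO"]]     # assumed feature columns
y = df["AQI"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit both models on identical splits and compare held-out accuracy.
for name, model in [("SVM", SVR(kernel="rbf")), ("Linear", LinearRegression())]:
    model.fit(X_tr, y_tr)
    print(name, "R^2:", r2_score(y_te, model.predict(X_te)))
```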

Keywords: Air pollution, particulate matter, AQI, correlation coefficient, air pollution prediction.

408 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method for predicting the seismic demand of structures. On the other hand, its main drawback is the computational time required to obtain the results. When it is applied within an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms far more practical. Approximate methods inevitably introduce some error relative to exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of the main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure for which they have already been calculated. Shear-type steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (classified by the time of peak ground acceleration).
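
As a point of reference for the modal-combination rules cited above (not the BMD method proposed in the paper), here is a minimal sketch of SRSS and CQC combination of peak modal responses; the modal peaks, frequencies, and damping ratio are illustrative values, not the case-study data.

```python
import numpy as np

# Hypothetical peak modal displacements and natural frequencies (rad/s).
R = np.array([0.12, 0.05, 0.02])
omega = np.array([6.0, 18.0, 31.0])
zeta = 0.05                      # assumed uniform modal damping ratio

# SRSS combination of the modal peaks.
srss = np.sqrt(np.sum(R ** 2))

# CQC combination with the Der Kiureghian correlation coefficients.
beta = omega[:, None] / omega[None, :]
num = 8 * zeta**2 * (1 + beta) * beta**1.5
den = (1 - beta**2)**2 + 4 * zeta**2 * beta * (1 + beta)**2
rho = num / den
cqc = np.sqrt(R @ rho @ R)
print(f"SRSS = {srss:.4f}, CQC = {cqc:.4f}")
```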

Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.

407 An Optimal Unsupervised Satellite Image Segmentation Approach Based on Pearson System and k-Means Clustering Algorithm Initialization

Authors: Ahmed Rekik, Mourad Zribi, Ahmed Ben Hamida, Mohamed Benjelloun

Abstract:

This paper presents an optimal, unsupervised satellite image segmentation approach based on the Pearson system and k-Means clustering algorithm initialisation. The method can be considered original in that it uses the k-Means clustering algorithm for an optimal initialisation of the number of image classes on the one hand, and exploits the Pearson system for an optimal assignment of statistical distributions to each considered class on the other hand. Satellite image exploitation requires the use of different approaches, especially those founded on the unsupervised statistical segmentation principle. Such approaches require the definition of several parameters, such as the number of image classes, the estimation of class variables and generalised mixture distributions. The use of statistical image attributes gave convincing and promising results, provided the initialisation step is optimal and appropriate statistical distributions are assigned to each class. The Pearson system, associated with a k-means clustering algorithm and the Stochastic Expectation-Maximization (SEM) algorithm, can be adapted to this problem. For each image class, the Pearson system assigns one distribution type according to different parameters, especially the skewness (β1) and the kurtosis (β2). The adapted algorithms (the k-Means clustering algorithm, the SEM algorithm and the Pearson system algorithm) are then applied to the satellite image segmentation problem. The efficiency of these combined algorithms was validated firstly with the Mean Quadratic Error (MQE) and secondly by visual inspection across several comparisons of the unsupervised image segmentations.
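
A small sketch of the initialisation idea, under the assumption of a single-band image and a hypothetical class count: k-Means provides per-class pixel samples, from which the skewness (β1) and kurtosis (β2) needed by the Pearson system are computed.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans

# Hypothetical single-band image as a 2-D array of intensities.
image = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(float)

# K-Means initialisation: cluster pixel intensities into an assumed class count.
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0)
labels = km.fit_predict(image.reshape(-1, 1)).reshape(image.shape)

# Per-class beta1 (squared skewness) and beta2 (Pearson kurtosis), the two
# moments the Pearson system uses to select a distribution type.
for c in range(k):
    vals = image[labels == c]
    print(c, skew(vals) ** 2, kurtosis(vals, fisher=False))
```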

Keywords: Unsupervised classification, Pearson system, Satellite image, Segmentation.

406 Multi Switched Split Vector Quantization of Narrowband Speech Signals

Authors: M. Satya Sai Ram, P. Siddaiah, M. Madhavi Latha

Abstract:

Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization (MSSVQ), which is a hybrid of multi-stage, switched, and split vector quantization techniques. The spectral distortion performance, computational complexity, and memory requirements of MSSVQ are compared to split vector quantization (SVQ), multi-stage vector quantization (MSVQ) and switched split vector quantization (SSVQ). The results show that MSSVQ has better spectral distortion performance, lower computational complexity and lower memory requirements than all of the above product code vector quantization techniques. Computational complexity is measured in floating point operations (flops), and memory requirements are measured in floats.
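
A minimal split-VQ sketch, not the authors' MSSVQ implementation: a 10-dimensional LSF vector is split into two sub-vectors, each quantised against its own codebook (random placeholders here); the switched and multi-stage extensions add codebook selection and residual quantisation stages on top of this.

```python
import numpy as np

# Assumed 10-dimensional LSF vector and two 6-bit placeholder codebooks.
rng = np.random.default_rng(0)
lsf = np.sort(rng.uniform(0, np.pi, 10))
codebooks = [rng.uniform(0, np.pi, (64, 5)) for _ in range(2)]

# Quantise each 5-dimensional sub-vector independently (split VQ).
indices = []
for part, cb in zip(np.split(lsf, 2), codebooks):
    indices.append(int(np.argmin(np.sum((cb - part) ** 2, axis=1))))
print(indices)   # transmitted codebook indices
```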

Keywords: Linear Predictive Coding, multi stage vector quantization, switched split vector quantization, split vector quantization, Line Spectral Frequencies (LSF).

405 Shifted Window Based Self-Attention via Swin Transformer for Zero-Shot Learning

Authors: Yasaswi Palagummi, Sareh Rowlands

Abstract:

Generalised Zero-Shot Learning (GZSL) is an advanced variant of zero-shot learning in which test samples may belong to either seen or unseen classes. GZSL methods typically have a bias towards the seen classes because they learn a model to perform recognition for both the seen and unseen classes using data samples only from the seen classes. This frequently leads to the misclassification of data from the unseen classes into the seen classes, making GZSL more challenging. In this work, we propose an approach leveraging the shifted-window based self-attention of the Swin Transformer (Swin-GZSL) to work in the inductive GZSL problem setting. We run experiments on three popular benchmark datasets, CUB, SUN, and AWA2, which are specifically used for ZSL and its variants. The results show that our Swin Transformer based model achieves a state-of-the-art harmonic mean on two datasets (AWA2 and SUN) and near state-of-the-art results on the third (CUB). More importantly, the technique has linear computational complexity, which reduces training time significantly. We also observe less bias than in most existing GZSL models.
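
The GZSL evaluation metric referenced above is the harmonic mean of the per-class accuracies on seen and unseen classes; a tiny worked example with illustrative numbers follows.

```python
# Harmonic mean of seen/unseen accuracies, the standard GZSL metric.
# The accuracy values are illustrative, not the paper's results.
acc_seen, acc_unseen = 0.70, 0.60
H = 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)
print(round(H, 3))  # 0.646
```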

Keywords: Generalised Zero-shot Learning, Inductive Learning, Shifted-Window Attention, Swin Transformer, Vision Transformer.

404 A Digital Pulse-Width Modulation Controller for High-Temperature DC-DC Power Conversion Application

Authors: Jingjing Lan, Jun Yu, Muthukumaraswamy Annamalai Arasu

Abstract:

This paper presents a digital non-linear pulse-width modulation (PWM) controller in a high-voltage (HV) buck-boost DC-DC converter for the piezoelectric transducer of a down-hole acoustic telemetry system. The proposed design controls the generation of an output voltage higher than the supply voltage and is targeted to work at high temperature. To minimize power consumption and silicon area, a simple and efficient design scheme is employed to develop the PWM controller. The proposed PWM controller consists of a serial-to-parallel (S2P) converter, a data assign block, a mode and duty cycle controller (MDC), a linear PWM (LPWM) block and a noise shaper, a pulse generator and a clock generator. To improve the reliability of circuit operation at higher temperatures, the design is fabricated in a 1.0-μm silicon-on-insulator (SOI) CMOS process. The implementation results validated that the proposed design has the advantages of smaller size, lower power consumption and robust thermal stability.

Keywords: DC-DC power conversion, digital control, high temperatures, pulse-width modulation.

403 A Structural Support Vector Machine Approach for Biometric Recognition

Authors: Vishal Awasthi, Atul Kumar Agnihotri

Abstract:

The face is a strong, non-intrusive biometric for distinguishing genuine faces from dummy faces produced by different artificial means. Face recognition is extremely important in the contexts of computer vision, psychology, surveillance, pattern recognition, neural networks, and content-based video processing. The availability of a widespread face database is crucial for testing the performance of face recognition algorithms. The openly available face databases include face images with a wide range of poses, illumination, gestures and face occlusions, but there is no dummy face database accessible in the public domain. This paper presents a face detection algorithm based on image segmentation in terms of distance from a fixed point and on template matching methods. The proposed work uses an appropriate number of nodal points, resulting in improved outcomes in terms of face recognition and detection. The time taken to identify and extract distinctive facial features is improved, lying in the range of 90 to 110 s, with an increase in efficiency of 3%.
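
A hedged sketch of the kind of recognition pipeline the keywords point to (PCA, LDA, SVM), on placeholder data rather than a real face database; it is not the paper's iSVM formulation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder "face" data: 200 flattened 64x64 images with 10 identity labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64 * 64))
y = rng.integers(0, 10, size=200)

# PCA -> LDA -> SVM pipeline, mirroring the techniques named in the keywords.
clf = make_pipeline(PCA(n_components=50),
                    LinearDiscriminantAnalysis(),
                    SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```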

Keywords: Face recognition, Principal Component Analysis, PCA, Linear Discriminant Analysis, LDA, Improved Support Vector Machine, iSVM, elastic bunch mapping technique.

402 Possible Number of Dwelling Units Using Waste Plastic Bottle for Construction

Authors: Dibya Jivan Pati, Kazuhisa Iki, Riken Homma

Abstract:

Unlike other metro cities of India, Bhubaneswar, the capital city of Odisha, is only now expected to reach the one-million population mark. Meeting the demand for dwelling units, mostly among the urban poor belonging to the Economically Weaker Sections (EWS) and Low Income Groups (LIG), is becoming a challenge due to high housing costs and rents. Moreover, as the population increases, solid waste generation also increases, affecting the environment because of inefficient waste collection by local government bodies. Methods of utilizing solid waste, especially in the form of plastic bottles, glass bottles and metal cans (PGM), are now widely used as alternative materials for the construction of low-cost buildings by non-governmental organizations (NGOs) in developing countries like India to help the urban poor afford a shelter. The application of discarded plastic bottles in the construction of a single dwelling reduces the overall cost of construction by as much as 14% compared to traditional construction materials. Therefore, considering this cost-benefit result, it is possible to provide housing to EWS and LIG households at an affordable price. In this paper, we estimated the quantity of plastic bottles generated in Bhubaneswar, which in turn was used to estimate the possible number of single dwelling units that can be constructed each year so as to ease the housing shortage. The estimation results can be used practically for planning and managing low-cost housing programmes by local government and NGOs.
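
A back-of-the-envelope version of the estimate described above; every figure below is a hypothetical placeholder, not the paper's survey data.

```python
# All figures are hypothetical placeholders used only to show the arithmetic.
bottles_generated_per_year = 12_000_000   # assumed city-wide plastic bottle count
recovery_rate = 0.4                       # assumed fraction actually collected
bottles_per_dwelling = 8_000              # assumed bottles needed per EWS/LIG unit

dwellings_per_year = int(bottles_generated_per_year * recovery_rate
                         / bottles_per_dwelling)
print(dwellings_per_year)                 # 600 possible dwelling units per year
```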

Keywords: Construction, dwelling unit, plastic bottle, solid waste generation, groups.

401 Gabriel-constrained Parametric Surface Triangulation

Authors: Oscar E. Ruiz, Carlos Cadavid, Juan G. Lalinde, Ricardo Serrano, Guillermo Peris-Fajarnes

Abstract:

The Boundary Representation of a 3D manifold contains FACES (connected subsets of a parametric surface S: R^2 -> R^3). In many science and engineering applications it is cumbersome and algebraically difficult to deal with the polynomial set and constraints (LOOPs) representing the FACE. For this reason, a Piecewise Linear (PL) approximation of the FACE is needed, which is usually represented in terms of triangles (i.e. 2-simplices). Solving the problem of FACE triangulation requires producing quality triangles which are: (i) independent of the arguments of S, (ii) sensitive to the local curvatures, (iii) compliant with the boundaries of the FACE, and (iv) topologically compatible with the triangles of the neighboring FACEs. In the existing literature there are no guarantees for point (iii). This article contributes to the topic of triangulations conforming to the boundaries of the FACE by applying the concept of the parameter-independent Gabriel complex, which improves the correctness of the triangulation regarding aspects (iii) and (iv). In addition, the article applies the geometric concept of a ball tangent to a surface at a point to address points (i) and (ii). Additional research is needed in algorithms that (i) take advantage of the concepts presented in the proposed heuristic algorithm and (ii) can be proved correct.
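
A small sketch of the Gabriel condition that underlies the parameter-independent Gabriel complex: an edge between two sample points is kept only if the ball having that edge as a diameter contains no other sample point. The point set here is a random placeholder.

```python
import numpy as np

# Gabriel test for an edge (p, q): the smallest ball having pq as a diameter
# must contain no other sample point.
def is_gabriel_edge(p, q, samples):
    centre = (p + q) / 2.0
    radius = np.linalg.norm(p - q) / 2.0
    for s in samples:
        if np.allclose(s, p) or np.allclose(s, q):
            continue
        if np.linalg.norm(s - centre) < radius:
            return False
    return True

pts = np.random.default_rng(1).random((20, 3))
print(is_gabriel_edge(pts[0], pts[1], pts))
```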

Keywords: surface triangulation, conforming triangulation, surface sampling, Gabriel complex.

400 Coded Transmission in Synthetic Transmit Aperture Ultrasound Imaging Method

Authors: Ihor Trots, Yuriy Tasinkevych, Andrzej Nowicki, Marcin Lewandowski

Abstract:

The paper presents a study of the synthetic transmit aperture method applying Golay coded transmission to medical ultrasound imaging. Longer coded excitation allows the total energy of the transmitted signal to be increased without increasing the peak pressure, so the signal-to-noise ratio and penetration depth are improved while high ultrasound image resolution is maintained. In this work, a 128-element linear transducer array with 0.3 mm inter-element spacing was excited by a one-cycle burst and by 8- and 16-bit Golay coded sequences at a nominal frequency of 4 MHz. A single-element transmit aperture was used to generate a spherical wave covering the full image region, and all elements received the echo signals. A comparison of 2D ultrasound images of a wire phantom and of a tissue-mimicking phantom is presented to demonstrate the benefits of coded transmission. The results were obtained using the synthetic aperture algorithm with transmit and receive signal correction based on a single-element directivity function.
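
A minimal sketch of one standard recursive construction of a Golay complementary pair (length 2^n); the pair's summed autocorrelation is a single central peak, which is the property coded excitation exploits. The construction shown is generic and not necessarily the exact sequences used in the study.

```python
import numpy as np

# Recursive Golay pair construction: a' = a|b, b' = a|(-b), starting from [1].
def golay_pair(n):
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(4)                        # a 16-bit complementary pair
acf = lambda x: np.correlate(x, x, "full")  # aperiodic autocorrelation
print(acf(a) + acf(b))                      # zeros everywhere except the centre (32)
```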

Keywords: Golay coded sequences, radiation pattern, synthetic aperture, ultrasound imaging.

399 Speech Recognition Using Scaly Neural Networks

Authors: Akram M. Othman, May H. Riadh

Abstract:

This research work is aimed at speech recognition using scaly neural networks. A small vocabulary of 11 words was established first; these words are "word, file, open, print, exit, edit, cut, copy, paste, doc1, doc2". The chosen words are associated with executing computer functions such as opening a file, printing a text document, cutting, copying, pasting, editing and exiting. The speech was introduced to the computer and then subjected to a feature extraction process using LPC (linear prediction coefficients). These features are used as input to an artificial neural network in speaker-dependent mode. Half of the words are used for training the artificial neural network and the other half are used for testing the system; these are used for information retrieval. The system consists of three parts: speech processing and feature extraction, training and testing using neural networks, and information retrieval. The retrieval process proved to be 79.5-88% successful, which is quite acceptable considering variations in the surroundings, the state of the speaker, and the microphone type.
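
A sketch of the LPC front end mentioned above, computed with the autocorrelation method and the Levinson-Durbin recursion; the analysis frame here is synthetic noise standing in for a windowed speech frame, and the order of 10 is an assumption.

```python
import numpy as np

# LPC via the autocorrelation method and the Levinson-Durbin recursion.
def lpc(frame, order):
    n = len(frame)
    r = np.correlate(frame, frame, "full")[n - 1:n + order]   # lags 0..order
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a                                # [1, a1, ..., a_order]

# Synthetic stand-in for a windowed 30 ms speech frame at 8 kHz.
frame = np.hamming(240) * np.random.default_rng(0).normal(size=240)
print(lpc(frame, order=10))
```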

Keywords: Feature extraction, Linear prediction coefficients, neural network, Speech Recognition, Scaly ANN.

398 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction

Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz

Abstract:

This paper proposes new enhancements to nonlinear anisotropic diffusion methods that greatly reduce speckle while preserving image features in medical ultrasound images. By incorporating a local physical characteristic of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge enhancing (EE) and coherence enhancing (CE) diffusion. The new enhancement methods were tested using various ultrasound images, including phantom and some clinical images, to determine the amount of speckle reduction, edge enhancement, and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), adaptive weighted median filtering (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes it an ideal preprocessing step for automatic segmentation in ultrasound imaging.
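
For orientation, a much-simplified scalar (Perona-Malik style) diffusion step showing how a gradient-weighted diffusivity suppresses noise while slowing smoothing across edges; the paper's SDWNAD is tensor-based and additionally weights the diffusivity by a scatterer-density estimate, which is not reproduced here.

```python
import numpy as np

# One scalar anisotropic-diffusion scheme (Perona-Malik): the diffusivity g
# falls off with the local gradient, so smoothing slows across edges.
def diffuse(img, n_iter=20, kappa=30.0, dt=0.15):
    img = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)        # edge-stopping function
    for _ in range(n_iter):
        dN = np.roll(img, -1, axis=0) - img
        dS = np.roll(img,  1, axis=0) - img
        dE = np.roll(img, -1, axis=1) - img
        dW = np.roll(img,  1, axis=1) - img
        img += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return img

speckled = 100.0 * np.random.default_rng(0).rayleigh(1.0, (128, 128))  # crude speckle stand-in
print(diffuse(speckled).std() < speckled.std())    # variance is reduced
```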

Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.

397 Model Reduction of Linear Systems by Conventional and Evolutionary Techniques

Authors: S. Panda, S. K. Tomar, R. Prasad, C. Ardil

Abstract:

The reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM), using a conventional and an evolutionary technique, is presented in this paper. In the conventional technique, the advantages of the Mihailov stability criterion and the continued fraction expansions (CFE) technique are combined: the reduced denominator polynomial is derived using the Mihailov stability criterion and the numerator is obtained by matching the quotients of the Cauer second form of the continued fraction expansion. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher order model. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. Both methods are illustrated through a numerical example.
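
A sketch of the ISE objective that the PSO minimizes, using SciPy's LTI tools on an illustrative third-order system and a candidate first-order reduced model (neither is the paper's numerical example).

```python
import numpy as np
from scipy import signal

# ISE between unit-step responses of an original model and a candidate ROM.
G  = signal.TransferFunction([1, 4], [1, 6, 11, 6])     # 3rd-order original
Gr = signal.TransferFunction([0.667], [1, 1])           # candidate 1st-order ROM

t = np.linspace(0, 10, 2000)
_, y  = signal.step(G, T=t)
_, yr = signal.step(Gr, T=t)
ise = np.trapz((y - yr) ** 2, t)                        # quantity a PSO would minimise
print(ise)
```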

Keywords: Reduced Order Modeling, Stability, Continued Fraction Expansions, Mihailov Stability Criterion, Particle Swarm Optimization, Integral Squared Error.

396 Improved Fuzzy Neural Modeling for Underwater Vehicles

Authors: O. Hassanein, Sreenatha G. Anavatti, Tapabrata Ray

Abstract:

The dynamics of Autonomous Underwater Vehicles (AUVs) are highly nonlinear and time varying, and the hydrodynamic coefficients of such vehicles are difficult to estimate accurately because they vary with navigation conditions and external disturbances. This study presents on-line system identification of AUV dynamics to obtain a coupled nonlinear dynamic model of the AUV as a black box. This black box has an input-output relationship based upon an on-line adaptive fuzzy model and an adaptive neural fuzzy network (ANFN) model, which overcome uncertain external disturbances and the difficulties of modelling the hydrodynamic forces of AUVs, instead of using a mathematical model with hydrodynamic parameter estimation. The models' parameters are adapted according to the back propagation algorithm based upon the error between the identified model and the actual output of the plant. The proposed ANFN model adopts a functional link neural network (FLNN) as the consequent part of the fuzzy rules; thus, the consequent part of the ANFN model is a nonlinear combination of the input variables. A fuzzy control system is applied to guide and control the AUV using both the adaptive models and the mathematical model. Simulation results show the superiority of the proposed adaptive neural fuzzy network (ANFN) model in tracking the behavior of the AUV accurately, even in the presence of noise and disturbance.

Keywords: AUV, AUV dynamic model, fuzzy control, fuzzy modelling, adaptive fuzzy control, back propagation, system identification, neural fuzzy model, FLNN.

395 Modified Genome-Scale Metabolic Model of Escherichia coli by Adding Hyaluronic Acid Biosynthesis-Related Enzymes (GLMU2 and HYAD) from Pasteurella multocida

Authors: P. Pasomboon, P. Chumnanpuen, T. E-kobon

Abstract:

Hyaluronic acid (HA) is a linear heteropolysaccharide built from repeating units of D-glucuronic acid and N-acetyl-D-glucosamine. HA has various useful properties: it maintains skin elasticity and moisture, reduces inflammation, and lubricates the movement of various body parts without causing immunogenic allergy. HA can be found in several animal tissues as well as in the capsule of some bacteria, including Pasteurella multocida. This study aimed to modify a genome-scale metabolic model of Escherichia coli, using computational simulation and flux analysis methods, to predict HA productivity under different carbon sources and nitrogen supplements after the addition of two enzymes (GLMU2 and HYAD) from P. multocida, in order to improve HA production under specified amounts of carbon sources and nitrogen supplements. The results revealed that threonine and aspartate supplementation raised HA production by 12.186%. Our analyses suggest that the genome-scale metabolic model is useful for improving HA production and narrows down the number of conditions to be tested further.
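
A heavily hedged sketch of the kind of flux-balance calculation such a study relies on, using the COBRApy toolkit; the SBML file name is a placeholder and the exchange-reaction identifier follows BiGG naming conventions, which may differ from the authors' modified model.

```python
import cobra

# Load a genome-scale E. coli model (placeholder file name) and compute an
# optimal flux distribution; carbon/nitrogen availability is varied through
# exchange-reaction bounds, as described in the abstract.
model = cobra.io.read_sbml_model("iML1515.xml")                 # hypothetical file
model.reactions.get_by_id("EX_glc__D_e").lower_bound = -10.0    # glucose uptake limit
solution = model.optimize()
print(solution.objective_value)
```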

Keywords: Pasteurella multocida, Escherichia coli, hyaluronic acid, genome-scale metabolic model, bioinformatics.

394 High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading

Authors: Vaso K. Kapnopoulou, Piero Caridis

Abstract:

The fatigue of ship structural details is of major concern in the maritime industry as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of ABAQUS/CAE software. The fatigue life was calculated using Miner’s Rule and the long-term distribution of stress range by the use of the two-parameter Weibull distribution. The cumulative damage ratio was estimated using the fatigue damage resulting from the stress range occurring at each load condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified; one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases for each loading condition are assessed. Following this, the dynamic load case that causes the highest stress range at each loading condition should be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on ship hull response. Of main concern, when assessing the fatigue strength of the lower hopper knuckle connection, was the determination of the maximum, i.e. the critical value of the stress range, which acts in a direction normal to the weld toe line. This acts in the transverse direction, that is, perpendicularly to the ship's centerline axis. The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage to the location examined. The most severe one was identified to be the load case induced by beam sea condition where the encountered wave comes from the starboard. At the level of the cargo hold model, the model was assumed to be simply supported at its ends. A coarse mesh was generated in order to represent the overall stiffness of the structure. The elements employed were quadrilateral shell elements, each having four integration points. A linear elastic analysis was performed because linear elastic material behavior can be presumed, since only localized yielding is allowed by most design codes. At the submodel level, the displacements of the analysis of the cargo hold model to the outer region nodes of the submodel acted as boundary conditions and applied loading for the submodel. In order to calculate the hot spot stress at the hot spot locations, a very fine mesh zone was generated and used. The fatigue life of the detail was found to be 16.4 years which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions that induce the most damage to the location were found to be the various ballasting conditions.
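
For context, the closed-form Miner sum for a two-parameter Weibull long-term stress-range distribution and a one-slope S-N curve N = A*S^(-m) is D = (n/A)*q^m*Gamma(1 + m/h); the sketch below uses illustrative parameter values, not the vessel's actual data.

```python
from scipy.special import gamma

# Illustrative (placeholder) inputs: S-N curve N = A * S**(-m), Weibull scale q
# and shape h for the long-term stress-range distribution, n cycles over the
# design life.
m, A = 3.0, 1.0e12          # S-N parameters (stress range in MPa)
q, h = 13.6, 1.0            # Weibull scale (MPa) and shape
n_cycles = 1.0e8            # wave-induced cycles over a 25-year design life

D = (n_cycles / A) * q**m * gamma(1.0 + m / h)   # cumulative damage ratio (Miner)
print("D =", round(D, 2), "-> fatigue life ~", round(25.0 / D, 1), "years")
```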

Keywords: Lower hopper knuckle, high cycle fatigue, finite element method, dynamic load cases.

393 Effects of Damper Locations and Base Isolators on Seismic Response of a Building Frame

Authors: Azin Shakibabarough, Mojtaba Valinejadshoubi, Ashutosh Bagchi

Abstract:

Structural vibration is repetitive motion that causes fatigue and a reduction in the performance of a structure. An earthquake may release a large amount of energy that can adversely affect all components of a structure. Therefore, reducing vibration and maintaining the performance of structures such as bridges, dams, roads and buildings is important for life safety and for reducing economic loss. When an earthquake or any other vibration occurs, investigation of the parts of the structure that sustain the seismic loads is mandatory to provide a safe condition for the occupants. One solution for reducing earthquake-induced vibration in a structure is the use of vibration control devices such as dampers and base isolators. The objective of this study is to investigate the optimal positions of friction dampers and base isolators for a better seismic response of a 2D frame. For this purpose, a two-bay, six-story frame with different distribution formats was modeled, and responses to earthquakes such as inter-story drift, maximum joint displacement, maximum axial force and maximum bending moment were determined and compared using nonlinear dynamic analysis.

Keywords: Fast nonlinear analysis, friction damper, base isolator, seismic vibration control, seismic response.

392 Seismic Behavior of Steel Moment-Resisting Frames for Uplift Permitted in Near-Fault Regions

Authors: M. Tehranizadeh, E. Shoushtari Rezvani

Abstract:

The seismic performance of steel moment-resisting frame structures is investigated considering nonlinear soil-structure interaction (SSI) effects. 10-, 15-, and 20-story planar building frames with an aspect ratio of 3 are designed in accordance with current building codes. Inelastic seismic demands of the superstructure are considered using a concentrated plasticity model. The raft foundation system is designed for different soil types. A beam-on-nonlinear-Winkler-foundation (BNWF) model is used to represent the dynamic impedance of the underlying soil. Two sets of near-fault earthquakes, pulse-like and non-pulse, are used as input ground motions. The results show that the reduction in drift demands due to nonlinear SSI is characterized by a more uniform distribution pattern along the height when compared to the fixed-base and linear SSI conditions. It is also concluded that the beneficial effects of nonlinear SSI on displacement demands are more significant in the case of pulse-like ground motions, and that the performance level of the steel moment-resisting frames can be enhanced.

Keywords: Soil-structure interaction, uplifting, soil plasticity, near-fault earthquake, tall building.

391 Research on a Forest Fire Spread Simulation Driven by the Wind Field in Complex Terrain

Authors: Ying Shang, Chencheng Wang

Abstract:

The wind field is the main driving factor for the spread of forest fires, so more detailed wind field data are needed for forest fire spread simulation results to be more accurate. Local topographical changes have an important impact on the wind field. Therefore, this paper studies a fine-scale mountainous wind field simulation method coupling WRF (Weather Research and Forecasting Model) with CFD (Computational Fluid Dynamics) to realize numerical simulation of the wind field in a mountainous area at a 30 m scale with a small error. Based on the Rothermel fire spread model, a forest fire in Idaho in the western United States was simulated, and comparison with the historical record showed that the simulation results had good accuracy. The results show that the fire spread rate decreases rapidly with time and then reaches a steady state. After reaching a steady state, the growth of the fire spread area is affected not only by the slope but also shows a significant quadratic positive correlation with wind speed.
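
A small illustration of the reported relationship between wind speed and growth of the burned area, as a quadratic polynomial fit; the data points are synthetic placeholders, not the simulation output.

```python
import numpy as np

# Quadratic fit of steady-state burned-area growth against wind speed.
wind = np.array([2.0, 4.0, 6.0, 8.0, 10.0])        # m/s (hypothetical)
area = np.array([0.8, 2.1, 4.3, 7.2, 11.0])        # growth per time step (hypothetical)
coeffs = np.polyfit(wind, area, deg=2)
print(coeffs)                                       # quadratic, linear, constant terms
```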

Keywords: Wind field, numerical simulation, forest fire spread, fire behavior model, complex terrain.

390 Optimum Time Coordination of Overcurrent Relays using Two Phase Simplex Method

Authors: Prashant P. Bedekar, Sudhir R. Bhide, Vijay S. Kale

Abstract:

Overcurrent (OC) relays are the major protection devices in a distribution system. The operating times of the OC relays must be coordinated properly to avoid the mal-operation of the backup relays. OC relay time coordination in ring-fed distribution networks is a highly constrained optimization problem that can be stated as a linear programming problem (LPP). The purpose is to find optimum relay settings that minimize the operating times of the relays while keeping the relays properly coordinated so as to avoid mal-operation. This paper presents a two-phase simplex method for the optimum time coordination of OC relays. The method is based on the simplex algorithm, which is used to find the optimum solution of an LPP. The method introduces artificial variables to obtain an initial basic feasible solution (IBFS). The artificial variables are removed in the iterative first phase, which minimizes an auxiliary objective function; the second phase minimizes the original objective function and gives the optimum time coordination of the OC relays.
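
A toy version of the coordination LP for two relays, with operating time taken as linear in the time-multiplier setting (TMS) at the relevant fault current; it is solved here with SciPy's general LP solver rather than a hand-coded two-phase simplex, and the coefficients, bounds and coordination time interval are illustrative.

```python
from scipy.optimize import linprog

# Two relays: operating time = a_i * TMS_i at the common fault current.
# Minimise total operating time subject to the backup relay (2) operating at
# least CTI seconds after the primary relay (1).
a1, a2, CTI = 2.97, 3.25, 0.3

c = [a1, a2]                        # objective: a1*T1 + a2*T2
A_ub = [[a1, -a2]]                  # a2*T2 - a1*T1 >= CTI  ->  a1*T1 - a2*T2 <= -CTI
b_ub = [-CTI]
bounds = [(0.1, 1.1), (0.1, 1.1)]   # assumed TMS limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)               # optimal TMS values and total operating time
```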

Keywords: Constrained optimization, LPP, Overcurrent relay coordination, Two-phase simplex method.

389 Analytical Cutting Forces Model of Helical Milling Operations

Authors: Changyi Liu, Gui Wang, Matthew Dargusch

Abstract:

Helical milling operations are used to generate or enlarge boreholes by means of a milling tool, with the bore diameter adjusted through the diameter of the helical path. The kinematics of helical milling on a three-axis machine tool is analysed first, and the relationships between the processing parameters, the cutting tool geometry and the machined hole features are formulated. The feed motion of the cutting tool is decomposed into a planar circular feed and an axial linear motion. In this paper, the time-varying cutting forces acting on the side cutting edges and the end cutting edges of the flat-end cylindrical mill are analysed separately using a discrete method. These two components are then combined to produce a cutting force model that accounts for the complicated interaction between the cutter and the workpiece. The time-varying cutting force model describes the instantaneous cutting force during processing. This model can be used to predict cutting forces, to calculate the static deflection of the cutter and workpiece, and it can also serve as the foundation of a dynamics model for predicting the chatter limits of helical milling operations.
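
A minimal sketch of the feed-motion decomposition stated above: the tool-centre path is a helix, the sum of an in-plane circular feed and an axial linear feed. The geometric values are illustrative.

```python
import numpy as np

# Tool-centre helix = circular in-plane feed + axial linear feed.
D_bore, D_tool, pitch, rpm_orbit = 12.0, 8.0, 0.5, 60.0   # mm, mm, mm/orbit, orbits/min
R_helix = (D_bore - D_tool) / 2.0                          # helical path radius (mm)

t = np.linspace(0.0, 60.0 / rpm_orbit, 200)                # one orbital revolution (s)
theta = 2 * np.pi * rpm_orbit / 60.0 * t
x = R_helix * np.cos(theta)                                # circular (in-plane) feed
y = R_helix * np.sin(theta)
z = -pitch * theta / (2 * np.pi)                           # axial linear feed
print(x[-1], y[-1], z[-1])                                 # back to start in-plane, 0.5 mm deeper
```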

Keywords: Helical milling, Hole machining, Cutting force, Analytical model, Time domain

388 Optimal Placement of Piezoelectric Actuators on Plate Structures for Active Vibration Control Using Modified Control Matrix and Singular Value Decomposition Approach

Authors: Deepak Chhabra, Gian Bhushan, Pankaj Chandna

Abstract:

The present work deals with the optimal placement of piezoelectric actuators on a thin plate using a Modified Control matrix and Singular Value Decomposition (MCSVD) approach. The problem is formulated using the finite element method, with ten piezoelectric actuators on a simply supported plate, to suppress the first six modes. The ten actuators are combined to outline one equivalent actuator by adding the ten columns of the control matrix to form a single column matrix. The singular value of this column control matrix is taken as the fitness function, and the optimal positions of the actuators are obtained by maximizing it with a genetic algorithm (GA). Vibration suppression is studied for the simply supported plate with piezoelectric patches in the optimal positions using a Linear Quadratic Regulator (LQR) scheme. It is observed that the MCSVD approach places the patches adjacent to each other and symmetric about the centre axis, and gives greater vibration suppression than previously published SVD-based results.
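
The fitness evaluation in miniature, on a random placeholder control matrix: sum the actuator columns into one column and take its singular value, the quantity a GA would maximize over candidate actuator positions.

```python
import numpy as np

# Random placeholder control matrix: 6 controlled modes x 10 actuators.
B = np.random.default_rng(0).normal(size=(6, 10))

# Modified control matrix: sum the ten actuator columns into one column,
# then take its singular value as the fitness to be maximised by the GA.
column = B.sum(axis=1, keepdims=True)
fitness = np.linalg.svd(column, compute_uv=False)[0]
print(fitness)
```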

Keywords: Closed loop Average dB gain, Genetic Algorithm (GA), LQR Controller, MCSVD, Optimal positions, Singular Value Decomposition (SVD) Approaches.

387 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential

Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag

Abstract:

Renewable energy sources are dependent on climatic variability, so adequate energy planning requires observations of the meteorological variables, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in the last decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models that combines data collected at surface stations, ocean buoys, satellites and radiosondes, allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among these products is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), with a spatial resolution of 0.5° x 0.5°. In order to overcome the difficulties mentioned above, this study aims to evaluate the performance of solar radiation estimation from alternative databases, such as reanalysis data and meteorological satellite data, that satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the reanalysis data of the CFSR model performed well relative to the observed data, with a coefficient of determination around 0.90. Therefore, it is concluded that these data have the potential to be used as an alternative source at locations without stations or without long series of solar radiation observations, which is important for the evaluation of solar energy potential.
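
The performance measure quoted above is the coefficient of determination between observed and reanalysis series; a tiny worked example on synthetic placeholder values follows.

```python
import numpy as np

# Observed vs. reanalysis daily solar radiation (synthetic placeholder values).
obs  = np.array([18.2, 21.5, 24.1, 22.8, 19.6, 16.3])   # MJ/m^2/day
cfsr = np.array([17.5, 22.0, 23.4, 23.5, 20.1, 15.8])

ss_res = np.sum((obs - cfsr) ** 2)
ss_tot = np.sum((obs - obs.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))    # coefficient of determination
```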

Keywords: Climate, reanalysis, renewable energy, solar radiation.

386 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause a variety of properties throughout the reservoir. A good estimation of reservoir heterogeneity, defined as the variation in rock properties with location in a reservoir or formation, helps in modeling the reservoir and thus offers a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties based on the similarities of the environments in which different beds were deposited. To illustrate the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both of which are reviewed briefly in this paper. The Lorenz concept is rooted in statistics and has been used in petroleum engineering from that point of view. In this paper, we correlated the statistics-based Lorenz method with a petroleum concept, i.e., the Kozeny-Carman equation, and derived the straight-line Lorenz plot for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in southern Iran and discussed each separately, with numbers and figures. As expected, these methods show a great departure from homogeneity; therefore, for future investment, the reservoir needs to be treated carefully.
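
A minimal sketch of how the Lorenz coefficient is computed from layered permeability and porosity data: order the layers by k/phi, build the cumulative flow-capacity versus storage-capacity curve, and take twice the area between that curve and the 45-degree homogeneity line. The layer values are illustrative placeholders.

```python
import numpy as np

# Hypothetical layer permeabilities (md), porosities and unit thicknesses.
k   = np.array([250.0, 90.0, 40.0, 12.0, 3.0])
phi = np.array([0.22, 0.20, 0.18, 0.15, 0.12])
h   = np.ones_like(k)

order = np.argsort(k / phi)[::-1]                  # most conductive layers first
Fc = np.concatenate([[0.0], np.cumsum((k * h)[order]) / np.sum(k * h)])      # flow capacity
Cc = np.concatenate([[0.0], np.cumsum((phi * h)[order]) / np.sum(phi * h)])  # storage capacity
L = 2.0 * (np.trapz(Fc, Cc) - 0.5)                 # 0 = homogeneous, toward 1 = heterogeneous
print(round(L, 3))
```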

Keywords: Carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L).

385 Estimation of Hysteretic Damping in Steel Dual Systems with Buckling Restrained Brace and Moment Resisting Frame

Authors: Seyed Saeid Tabaee, Omid Bahar

Abstract:

Nowadays, energy dissipation devices are commonly used in structures. A high rate of energy absorption during earthquakes is the benefit of using such devices, resulting in reduced damage to structural elements, specifically columns. The hysteretic damping capacity of energy dissipation devices is the key property involved, but it may make the analysis and design process more complicated. This effect is generally represented by an Equivalent Viscous Damping (EVD) ratio, which can be obtained from the expected hysteretic behavior at the design or maximum considered displacement of a structure. In this paper, the hysteretic damping coefficient of a steel Moment Resisting Frame (MRF) whose performance is enhanced by a Buckling Restrained Brace (BRB) system is evaluated. Knowing in advance how the damping is shared between the BRB and the MRF is essential for seismic design procedures such as the Direct Displacement-Based Design (DDBD) method. This paper presents an approach to calculate the damping fraction for such systems by carrying out nonlinear time history analysis (NTHA) under harmonic loading tuned to the natural frequency of the system. Two MRF structures, one equipped with BRBs and the other without, are studied simultaneously. Extensive analysis shows that the damping fraction of each system may be calculated from its share of the story shear. In this way, the contribution of each BRB at each floor, and their overall contribution to the structural performance, may be clearly recognized in advance.
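
The equivalent viscous damping referred to above is usually written as xi_eq = E_D / (4*pi*E_S0), with E_D the energy dissipated per cycle (hysteresis loop area) and E_S0 the elastic strain energy at peak displacement; the sketch below evaluates it on a synthetic elastic-plus-damper loop, not the paper's BRB+MRF time-history results.

```python
import numpy as np

# Assumed stiffness, damper constant, excitation frequency and amplitude.
k, c, omega, U = 800.0, 40.0, 2 * np.pi, 0.05      # kN/m, kN*s/m, rad/s, m
t = np.linspace(0, 2 * np.pi / omega, 4001)        # one full cycle
u = U * np.sin(omega * t)
F = k * u + c * omega * U * np.cos(omega * t)      # restoring + damper force

E_D  = np.trapz(F, u)                              # loop area: energy per cycle
E_S0 = 0.5 * k * U**2                              # elastic strain energy at peak
xi_eq = E_D / (4 * np.pi * E_S0)
print(round(xi_eq, 3))                             # equals c*omega/(2*k) here
```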

Keywords: Buckling restrained brace, Direct displacement based design, Dual systems, Hysteretic damping, Moment resisting frames.

384 A Novel In-Place Sorting Algorithm with O(n log z) Comparisons and O(n log z) Moves

Authors: Hanan Ahmed-Hosni Mahmoud, Nadia Al-Ghreimil

Abstract:

In-place sorting algorithms play an important role in many fields, such as very large database systems, data warehouses, and data mining. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the input unsorted array in place, producing segments that are ordered relative to each other but whose elements are yet to be sorted; it requires linear time. In the second phase, the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. In the worst case, for an array of size n, the algorithm performs O(n log z) element comparisons and O(n log z) element moves. Furthermore, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity, and of the required number of moves, is presented, along with the auxiliary storage requirements of the proposed algorithm.

Keywords: Auxiliary storage sorting, in-place sorting, sorting.

383 The Impact of Bank Consolidation on the Performance of SMEs in Nigeria

Authors: Okolo Chimaobi Valentine

Abstract:

This paper seeks to assess the implications of bank consolidation for the performance of small and medium scale enterprises (SMEs) in the Nigerian economy. Multiple linear regression and a correlation matrix test were employed to measure the extent to which the asset size, survival and access to credit of small and medium scale enterprises were influenced. The results showed that bank deposits (BD) and bank credit (L or BC) affected the asset size and survival of small and medium scale enterprises, while none of the variables had a significant impact on SMEs' access to credit. There is a shift of focus by commercial banks away from small and medium scale enterprises (small customers), evidenced by the significant negative influence of bank credit on both the survival and the asset size of small and medium enterprises. While microfinance banks work hard at providing funds to small and medium scale entrepreneurs, their capacity to meet the needs of these entrepreneurs is constrained. The CBN should make policies that boost microfinance banks' capital and also monitor the management of these banks closely to ensure prudent financing of small and medium scale investments.
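
A sketch of the kind of multiple linear regression specification described above, fitted with statsmodels on synthetic placeholder series rather than the study's data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic placeholder series standing in for the study's data.
rng = np.random.default_rng(0)
BD = rng.uniform(1.0, 10.0, 30)                    # bank deposits
BC = rng.uniform(1.0, 10.0, 30)                    # bank credit
asset_size = 2.0 + 0.8 * BD - 0.5 * BC + rng.normal(0.0, 0.5, 30)

# Multiple linear regression of SME asset size on BD and BC.
X = sm.add_constant(np.column_stack([BD, BC]))
fit = sm.OLS(asset_size, X).fit()
print(fit.params, fit.rsquared)
```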

Keywords: Bank consolidation, small and medium enterprises.

382 Forensic Speaker Verification in Noisy Environmental by Enhancing the Speech Signal Using ICA Approach

Authors: Ahmed Kamil Hasan Al-Ali, Bouchra Senadji, Ganesh Naik

Abstract:

We propose a system to address real environmental noise and channel mismatch in forensic speaker verification. The method is based on suppressing various types of real environmental noise using the independent component analysis (ICA) algorithm. The enhanced speech signal is passed to mel frequency cepstral coefficients (MFCC) or MFCC feature warping to extract the essential characteristics of the speech signal. Channel effects are reduced using an intermediate vector (i-vector) and probabilistic linear discriminant analysis (PLDA) approach for classification. The proposed algorithm is evaluated using an Australian forensic voice comparison database combined with car, street and home noises from QUT-NOISE at signal-to-noise ratios (SNR) ranging from -10 dB to 10 dB. Experimental results indicate that MFCC feature warping with ICA achieves reductions in equal error rate of about 48.22%, 44.66%, and 50.07% over MFCC feature warping alone when the test speech signals are corrupted with random sessions of street, car, and home noise at -10 dB SNR.
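
A hedged sketch of the ICA-plus-MFCC front end: FastICA separates two synthetically mixed channels and MFCCs are extracted from one recovered component; a real system would operate on recorded microphone channels, and the signals below are stand-ins.

```python
import numpy as np
from sklearn.decomposition import FastICA
import librosa

# Two synthetic "microphone" mixtures of a speech-like tone and noise.
sr = 16000
t = np.arange(0, 2.0, 1.0 / sr)
speech_like = np.sin(2 * np.pi * 220 * t) * np.sin(2 * np.pi * 3 * t)
noise = np.random.default_rng(0).normal(0.0, 0.5, t.size)
mixed = np.c_[0.7 * speech_like + 0.3 * noise,
              0.4 * speech_like + 0.6 * noise]

# ICA separation followed by MFCC extraction from one recovered component.
sources = FastICA(n_components=2, random_state=0).fit_transform(mixed)
mfcc = librosa.feature.mfcc(y=sources[:, 0].astype(np.float32), sr=sr, n_mfcc=13)
print(mfcc.shape)
```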

Keywords: Noisy forensic speaker verification, ICA algorithm, MFCC, MFCC feature warping.

381 Nonlinear Analysis of Shear Wall Using Finite Element Model

Authors: M. A. Ghorbani, M. Pasbani Khiavi, F. Rezaie Moghaddam

Abstract:

In the analysis of structures, nonlinear effects due to large displacements, large rotations and material nonlinearity are very important and must be considered for a reliable analysis. With the availability of computer technology, nonlinear finite element analysis has become a usable and reliable means of analyzing civil structures. In this research, the large-displacement and materially nonlinear behavior of a shear wall is presented by developing a finite element code using the standard Galerkin weighted residual formulation. A two-dimensional plane stress model is used to represent the shear wall response. The Total Lagrangian formulation, which is computationally more effective, is used in the formulation of the stiffness matrices, and the Newton-Raphson method is applied for the solution of the nonlinear transient equations. The details of the program formulation are highlighted and the results of the analyses are presented, along with a comparison of the response of the structure with results from the Ansys software. The model presented in this paper can be extended to the nonlinear analysis of civil engineering structures with different material behavior and complicated geometry.
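
A one-degree-of-freedom illustration of the Newton-Raphson iteration mentioned above, on a softening spring R(u) = k*u - a*u^3 = P; it shows only the tangent-stiffness update and is not the paper's 2D Total Lagrangian shear-wall model.

```python
# Newton-Raphson solution of a single nonlinear equilibrium equation.
k, a, P = 100.0, 50.0, 40.0       # illustrative stiffness, softening term, load
u = 0.0
for i in range(20):
    R = k * u - a * u**3 - P      # residual: internal minus external force
    Kt = k - 3.0 * a * u**2       # tangent stiffness
    du = -R / Kt
    u += du
    if abs(du) < 1e-10:
        break
print(u, i)                       # converged displacement and iteration count
```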

Keywords: Finite element, large displacements, materially nonlinear, shear wall.

380 Dynamic Power Reduction in Sequential Circuits Using Look Ahead Clock Gating Technique

Authors: R. Manjith, C. Muthukumari

Abstract:

In this paper, a novel Linear Feedback Shift Register (LFSR) with the Look Ahead Clock Gating (LACG) technique is presented to reduce power consumption in modern processors and systems-on-chip. Clock gating is a predominant technique used to reduce unwanted switching of clock signals. Several clock gating techniques for reducing dynamic power have been developed, of which LACG is predominant. LACG computes the clock enabling signal of each flip-flop (FF) one cycle ahead of time, based on the present-cycle data of the flip-flops on which it depends. It overcomes the timing problems of existing clock gating methods, such as data-driven clock gating and auto-gated flip-flops (AGFF), by allotting a full clock cycle for the determination of the clock enabling signals. To further reduce power consumption in the LACG technique, FFs can be grouped so that they share a common clock enabling signal. Simulation results show that the novel grouped LFSR with LACG achieves 15.03% power savings over a conventional LFSR with LACG and 44.87% over data-driven clock gating.
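
For reference, a software model of a small maximal-length LFSR of the kind the paper gates; the 4-bit width, tap choice and seed are illustrative, and the LACG grouping logic itself is hardware-level and not reproduced here.

```python
# 4-bit Fibonacci-style LFSR model: the XOR of the two most-significant bits
# is shifted back in, giving a maximal-length (period-15) sequence for any
# non-zero seed.
def lfsr_sequence(seed=0b1001, n=15):
    state = seed
    out = []
    for _ in range(n):
        out.append(state)
        feedback = ((state >> 3) ^ (state >> 2)) & 1
        state = ((state << 1) | feedback) & 0b1111
    return out

print([format(s, "04b") for s in lfsr_sequence()])
```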

Keywords: AGFF, data-driven, LACG, LFSR.
