Search results for: test method.
7946 Exploration of Least Significant Bit Based Watermarking and Its Robustness against Salt and Pepper Noise
Authors: Kamaldeep Joshi, Rajkumar Yadav, Sachin Allwadhi
Abstract:
Image steganography is a prominent technique of information hiding: the information is hidden within an image, and the image travels openly on the Internet. The Least Significant Bit (LSB) method is one of the most popular methods of image steganography. In this method, an information bit is hidden in the LSB of an image pixel, so in one-bit LSB steganography the total number of pixels and the total number of message bits are equal. In this paper, the LSB method of image steganography is used for watermarking, which is an application of steganography. The watermark contains 80*88 pixels, and each pixel requires 8 bits for its binary equivalent form, so the total number of bits required to hide the watermark is 80*88*8 (56320). The experiment was performed on standard 256*256 and 512*512 images. After the watermark insertion, histogram analysis was performed. Salt and pepper noise with a noise factor of 0.02 was added to the stego image in order to evaluate the robustness of the method. The watermark was successfully retrieved after the insertion of noise. A further experiment assessed the imperceptibility of the stego image and the retrieved watermark. The results show that the LSB watermarking scheme is robust to salt and pepper noise.
Keywords: LSB, watermarking, salt and pepper, PSNR.
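The one-bit LSB embedding described above can be sketched in a few lines. This is an illustrative sketch, not the authors' code; pixels are assumed to be 8-bit grayscale values, one message bit per pixel.

```python
# Illustrative sketch of one-bit LSB embedding/extraction (not the paper's
# exact implementation). Pixels are 8-bit grayscale values.

def embed_lsb(pixels, bits):
    """Replace the least significant bit of each pixel with a message bit."""
    assert len(bits) <= len(pixels), "cover image too small for the message"
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b   # clear the LSB, then set it to b
    return stego

def extract_lsb(pixels, n_bits):
    """Read back the first n_bits least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

# An 80*88-pixel watermark at 8 bits/pixel needs 80*88*8 = 56320 cover pixels,
# which fits in a 256*256 (65536-pixel) cover image, as in the paper.
cover = [123, 200, 54, 77, 8, 255, 64, 90]
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, message)
assert extract_lsb(stego, len(message)) == message
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # pixel changes by at most 1
```

Each pixel value changes by at most 1, which is why the histogram and imperceptibility analyses in the abstract show little visible distortion.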
7945 Experimental Behavior of Composite Shear Walls Having L Shape Steel Sections in Boundary Regions
Authors: S. Bahadır Yüksel, Alptuğ Ünal
Abstract:
Composite Shear Walls (CSW) with steel-encased profiles can be used as lateral-load resisting systems for buildings that require considerably large lateral-load capacity. This paper presents the experimental work conducted on a CSW having L-section folded plates (L-shape steel built-up sections) as longitudinal reinforcement in the boundary regions. The tested 1/3 geometric scaled CSW has an aspect ratio of 3.2. L-shape structural steel members with 2L-19x57x7 mm dimensions were placed in the shear wall boundary zones. The seismic behavior of the CSW test specimen was investigated by evaluating and interpreting its hysteresis curves, envelope curves, rigidity and consumed energy graphs. In addition, the deformation and cracking patterns were evaluated and interpreted, and design recommendations were proposed.
Keywords: Shear wall, composite shear wall, boundary reinforcement, earthquake resistant structural design, L section.
7944 Traffic Density Measurement by Automatic Detection of Vehicles Using Gradient Vectors from Aerial Images
Authors: Saman Ghaffarian, Ilgın Gökasar
Abstract:
This paper presents a new automatic vehicle detection method for measuring traffic density from very high resolution aerial images. The proposed method starts by extracting road regions from the image using road vector data. Then, the road image is divided into equal sections, considering the resolution of the images. Gradient vectors of the road image are computed from the edge map of the corresponding image. The gradient vectors on each boundary of the sections are grouped where the vectors significantly change their directions. Finally, the number of vehicles in each section is determined by calculating the standard deviation of the gradient vectors in each group and accepting a group as a vehicle if its standard deviation is above a predefined threshold value. The proposed method was tested on four very high resolution aerial images acquired from Istanbul, Turkey, which depict roads and vehicles with diverse characteristics. The results show the reliability of the proposed method in detecting vehicles, producing an 86% overall F1 accuracy value.
Keywords: Aerial images, intelligent transportation systems, traffic density measurement, vehicle detection.
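The decision step above (group gradient vectors at direction changes, then threshold the spread of each group) can be sketched as follows. The grouping rule, the angle jump, and the threshold value are assumptions for illustration; the paper does not publish its exact parameters.

```python
import math
from statistics import pstdev

# Hypothetical sketch of the grouping-and-thresholding step described above;
# the angle jump and threshold are illustrative assumptions.

def split_on_direction_change(gradients, angle_jump=math.pi / 4):
    """Split a sequence of (dx, dy) gradient vectors where the direction
    changes sharply (naive angle difference; no wraparound handling)."""
    groups, current = [], [gradients[0]]
    for g in gradients[1:]:
        prev = current[-1]
        if abs(math.atan2(g[1], g[0]) - math.atan2(prev[1], prev[0])) > angle_jump:
            groups.append(current)
            current = []
        current.append(g)
    groups.append(current)
    return groups

def count_vehicles(gradients, threshold=2.0):
    """A group whose gradient-magnitude spread exceeds the threshold
    is accepted as a vehicle."""
    count = 0
    for group in split_on_direction_change(gradients):
        magnitudes = [math.hypot(dx, dy) for dx, dy in group]
        if pstdev(magnitudes) > threshold:
            count += 1
    return count

# Uniform road texture, then a high-variance cluster (a "vehicle"):
grads = [(1, 0), (1, 0), (1, 0), (0, 5), (0, 1), (0, 9)]
assert count_vehicles(grads) == 1
```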
7943 Exploring Inter-Relationships between Events to Identify Strategic Technological Competencies: A Combined Approach
Authors: Cláudio Santos, Madalena Araújo, Nuno Correia
Abstract:
The inherent complexity of today's business environments is forcing organizations to be attentive to dynamics on several fronts. The management of technological innovation is therefore continually faced with uncertainty about the future. These issues lead to a need for a systemic perspective, able to analyze the consequences of interactions between different factors. The field of technology foresight has proposed methods and tools to deal with this broader perspective. In an attempt to provide a method to analyze the complex interactions between events in several areas, departing from the identification of the most strategic competencies, this paper presents a methodology based on the Delphi method and Quality Function Deployment. The methodology is applied to a sheet metal processing equipment manufacturer as a case study.
Keywords: Competencies, Delphi Method, Quality Function Deployment, Technology Foresight.
7942 Statistical Reliability Based Modeling of Series and Parallel Operating Systems using Extreme Value Theory
Authors: Mohamad Mahdavi, Mojtaba Mahdavi
Abstract:
This paper presents a new method for computing the reliability of a system arranged in a series or parallel model. In this method, we estimate the life distribution function of the whole structure using the asymptotic Extreme Value (EV) distribution of Type I, or Gumbel theory. The EV distribution is used in its minimal mode to estimate the life distribution function of a series structure, and in its maximal mode for a parallel system. All parameters are estimated by the method of moments. The reliability function, the failure (hazard) rate, and the p-th percentile point of each function are determined. Other important indexes, such as Mean Time to Failure (MTTF) and Mean Time to Repair (MTTR), for non-repairable and renewal systems in both series and parallel structures are also computed.
Keywords: Reliability, extreme value, parallel, series, life distribution.
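The core of the approach above — fit a Type I (Gumbel) distribution by the method of moments in minimal mode for a series system or maximal mode for a parallel system, then read off the reliability function — can be sketched as follows. The moment formulas are the standard Gumbel estimators; the sample mean and standard deviation are illustrative.

```python
import math

# Sketch of the Type I (Gumbel) moments fit described above; parameter
# formulas are the standard Gumbel moment estimators, and the numbers in
# the usage example are made up.
EULER_GAMMA = 0.5772156649

def gumbel_moments(mean, std, mode="min"):
    """Moment estimates of the Gumbel location (mu) and scale (beta)."""
    beta = std * math.sqrt(6) / math.pi
    if mode == "max":        # maxima: parallel system (last survivor)
        mu = mean - EULER_GAMMA * beta
    else:                    # minima: series system (weakest link)
        mu = mean + EULER_GAMMA * beta
    return mu, beta

def reliability(t, mu, beta, mode="min"):
    """R(t) = 1 - F(t) for the fitted Gumbel life distribution."""
    if mode == "max":
        return 1.0 - math.exp(-math.exp(-(t - mu) / beta))
    return math.exp(-math.exp((t - mu) / beta))

# Series (weakest-link) system with sample life mean 1000 h, std 100 h:
mu, beta = gumbel_moments(mean=1000.0, std=100.0, mode="min")
assert 0.0 < reliability(900.0, mu, beta, "min") < 1.0
```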
7941 Stress and Strain Analysis of Notched Bodies Subject to Non-Proportional Loadings
Authors: A. Ince
Abstract:
In this paper, an analytical simplified method for calculating the elasto-plastic stresses and strains of notched bodies subject to non-proportional loading paths is discussed. The method is based on the Neuber notch correction, which relates the incremental elastic and elastic-plastic strain energy densities at the notch root, and on the material constitutive relationship. The validity of the method was demonstrated by comparing computed results of the proposed model against finite element numerical data for a notched shaft. The comparison showed that the model estimates notch-root elasto-plastic stresses and strains with good accuracy using linear-elastic stresses. The proposed model provides a more efficient and simpler analysis method, preferable to expensive experimental component tests and to more complex and time-consuming incremental non-linear FE analysis. The model is particularly suitable for fatigue life and fatigue damage estimates of notched components subjected to non-proportional loading paths.
Keywords: Elasto-plastic, stress-strain, notch analysis, non-proportional loadings, cyclic plasticity, fatigue.
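For the proportional special case, Neuber's rule reduces to solving sigma * eps(sigma) = sigma_e^2 / E for the notch-root stress, given a cyclic stress-strain curve. A minimal sketch with an illustrative Ramberg-Osgood curve (the material constants below are assumptions, not the paper's data):

```python
# Minimal sketch of Neuber's rule combined with a Ramberg-Osgood curve;
# E, K, n are illustrative material constants, not from the paper.
E, K, n = 200_000.0, 1200.0, 0.2   # MPa, MPa, strain-hardening exponent

def ramberg_osgood_strain(stress):
    """Total strain = elastic part + plastic part."""
    return stress / E + (stress / K) ** (1.0 / n)

def neuber_stress(elastic_stress):
    """Solve sigma * eps(sigma) = sigma_e^2 / E by bisection."""
    target = elastic_stress ** 2 / E
    lo, hi = 0.0, elastic_stress        # true stress never exceeds the elastic one
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * ramberg_osgood_strain(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = neuber_stress(600.0)            # linear-elastic notch stress, MPa
assert sigma < 600.0                    # local plasticity relaxes the stress
assert abs(sigma * ramberg_osgood_strain(sigma) - 600.0**2 / E) < 1e-6
```

The non-proportional, incremental form discussed in the abstract applies the same energy-density equivalence increment by increment with a cyclic plasticity model, which is beyond this sketch.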
7940 Determination of Penicillin Residues in Livestock and Marine Products by LC/MS/MS
Authors: Ji Young Song, Soo Jung Hu, Hyunjin Joo, Joung Boon Hwang, Mi Ok Kim, Shin Jung Kang, Dae Hyun Cho
Abstract:
A multi-residue analysis method for penicillins was developed and validated in bovine muscle, chicken, milk, and flatfish. Detection was based on liquid chromatography tandem mass spectrometry (LC/MS/MS). The developed method was validated for specificity, precision, recovery, and linearity. The analytes were extracted with 80% acetonitrile and cleaned up by a single reversed-phase solid-phase extraction step. Six penicillins presented recoveries higher than 76%, with the exception of amoxicillin (59.7%). Relative standard deviations (RSDs) were not more than 10%. LOQ values ranged from 0.1 to 4.5 µg/kg. The method was applied to 128 real samples. Benzylpenicillin was detected in 15 samples, cloxacillin in 7 samples, and oxacillin in 2 samples, but the detected levels were under the MRLs for penicillins.
Keywords: Penicillins, livestock product, multi-residue analysis, LC/MS/MS.
7939 Studying the Effect of the Number of Datasets on the Precision of Estimated Saturated Hydraulic Conductivity
Authors: M. Siosemarde, M. Byzedi
Abstract:
The saturated hydraulic conductivity of soil is an important property in processes involving water and solute flow in soils. It is difficult to measure and can be highly variable, requiring a large number of replicate samples. In this study, 60 sets of soil samples were collected in the Saqhez region of Kurdistan province, Iran. Statistics such as the Correlation Coefficient (R), Root Mean Square Error (RMSE), Mean Bias Error (MBE) and Mean Absolute Error (MAE) were used to evaluate how the multiple linear regression models varied with the number of datasets. The multiple linear regression models were evaluated when only the percentages of sand, silt, and clay content (SSC) were used as inputs, and when SSC and bulk density, Bd, (SSC+Bd) were used as inputs. For the 50-dataset case, the R, RMSE, MBE and MAE values were 0.925, 15.29, -1.03 and 12.51 for method (SSC), and 0.927, 15.28, -1.11 and 12.92 for method (SSC+Bd), respectively, for the relationships obtained from multiple linear regression on the data. For the 10-dataset case, the R, RMSE, MBE and MAE values were 0.725, 19.62, -9.87 and 18.91 for method (SSC), and 0.618, 24.69, -17.37 and 22.16 for method (SSC+Bd), respectively, which shows that as the number of datasets increases, the precision of the estimated saturated hydraulic conductivity increases.
Keywords: Dataset, precision, saturated hydraulic conductivity, soil, statistics.
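The four evaluation statistics used above have standard definitions and can be computed directly; the observed/predicted sample numbers below are made up for illustration.

```python
import math
from statistics import mean, pstdev

# The four evaluation statistics used in the study, written out as a sketch;
# the sample data are illustrative only.

def evaluation_stats(observed, predicted):
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)   # root mean square error
    mbe = sum(errors) / n                              # mean bias error (signed)
    mae = sum(abs(e) for e in errors) / n              # mean absolute error
    mo, mp = mean(observed), mean(predicted)
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted)) / n
    r = cov / (pstdev(observed) * pstdev(predicted))   # correlation coefficient
    return r, rmse, mbe, mae

obs = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 18.0, 33.0, 39.0]
r, rmse, mbe, mae = evaluation_stats(obs, pred)
assert -1.0 <= r <= 1.0 and mae <= rmse   # MAE never exceeds RMSE
```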
7938 Development of Combined Cure Type for Rigid Pavement with Reactive Powder Concrete
Authors: Fatih Hattatoglu, Abdulrezzak Bakiş
Abstract:
In this study, fiberless reactive powder concrete (RPC) with high pressure and flexural strength was produced. C30/37 concrete was chosen as the control sample. Nine different cure types were applied to the fiberless RPC, and the most suitable combined cure type was selected according to the pressure and flexural strength. Pressure and flexural strength tests were applied to the samples after curing. The combined cure type with the highest pressure resistance was found to be consecutive standard water curing at 20 °C for 7 days, hot water curing at 90 °C for 2 days, and drying oven curing at 180 °C for 2 days. With this combined cure type, the highest pressure resistance of the fiberless RPC was 123 MPa, and the highest flexural resistance was 8.37 MPa.
Keywords: Rigid pavement, reactive powder concrete, combined cure, pressure test, flexural test.
7937 Mapping of C* Elements in Finite Element Method using Transformation Matrix
Authors: G. H. Majzoob, B. Sharifi Hamadani
Abstract:
Mapping between local and global coordinates is an important issue in the finite element method, as all calculations are performed in local coordinates. The concern arises when subparametric elements are used, in which the shape functions of the field variable and of the geometry of the element are not the same. This is particularly the case for C* elements, in which the extra degrees of freedom added to the nodes make the elements subparametric. In the present work, the transformation matrix for C1* elements (8-noded hexahedron elements with 12 degrees of freedom at each node) is obtained using equivalent C0 elements (with the same number of degrees of freedom). The convergence rate of the 8-noded C1* element is nearly equal to that of its equivalent C0 element, while it consumes less CPU time than the C0 element. The existence of derivative degrees of freedom at the nodes of the C1* element, along with its excellent convergence, makes it superior to its equivalent C0 element.
Keywords: Mapping, finite element method, C* elements, convergence, C0 elements.
7936 Optimal Capacitor Placement in a Radial Distribution System using Plant Growth Simulation Algorithm
Authors: R. Srinivasa Rao, S. V. L. Narasimham
Abstract:
This paper presents a new and efficient approach for capacitor placement in radial distribution systems that determines the optimal locations and sizes of capacitors with the objective of improving the voltage profile and reducing power loss. The solution methodology has two parts: in part one, loss sensitivity factors are used to select the candidate locations for capacitor placement; in part two, a new algorithm that employs the Plant Growth Simulation Algorithm (PGSA) is used to estimate the optimal size of the capacitors at the optimal buses determined in part one. The main advantage of the proposed method is that it does not require any external control parameters. A further advantage is that it handles the objective function and the constraints separately, avoiding the trouble of determining barrier factors. The proposed method is applied to 9, 34, and 85-bus radial distribution systems, and the solutions obtained are compared with those of other methods. The proposed method outperformed the other methods in terms of the quality of solution.
Keywords: Distribution systems, capacitor placement, loss reduction, loss sensitivity factors, PGSA.
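The part-one candidate selection can be sketched with the commonly used loss sensitivity factor dP_loss/dQ = 2*Q_eff*R / V^2; whether this is the paper's exact expression is an assumption, and the branch data below are made up for illustration.

```python
# Hedged sketch of candidate-bus selection via loss sensitivity factors;
# the formula 2*Q_eff*R/V^2 is the commonly used one, and the branch data
# are illustrative.

def loss_sensitivity_factors(branches):
    """branches: list of (bus, R_ohm, Q_eff_kvar, V_pu).
    Returns buses ranked by descending sensitivity, i.e. the candidate
    locations for capacitor placement."""
    scored = [(2.0 * q * r / (v * v), bus) for bus, r, q, v in branches]
    return [bus for _, bus in sorted(scored, reverse=True)]

branches = [
    (2, 0.10, 460.0, 0.99),   # bus, branch resistance, effective Q, voltage
    (3, 0.25, 340.0, 0.96),
    (4, 0.05, 120.0, 0.98),
]
ranking = loss_sensitivity_factors(branches)
assert ranking[0] == 3        # highest 2*Q*R/V^2 becomes the first candidate
```

Part two would then size the capacitors at the top-ranked buses with PGSA, which is not reproduced here.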
7935 Optimizing Dialogue Strategy Learning Using Learning Automata
Authors: G. Kumaravelan, R. Sivakumar
Abstract:
Modeling the behavior of dialogue management in the design of a spoken dialogue system using statistical methodologies is currently a growing research area. This paper presents work on developing an adaptive learning approach to optimize dialogue strategy. At the core of our system is a method formalizing dialogue management as sequential decision making under uncertainty whose underlying probabilistic structure is a Markov chain. Researchers have mostly focused on model-free algorithms for automating the design of dialogue management using machine learning techniques such as reinforcement learning, but model-free algorithms face a dilemma between exploration and exploitation. Hence, we present a model-based online policy learning algorithm using interconnected learning automata for optimizing dialogue strategy. The proposed algorithm is capable of deriving an optimal policy that prescribes what action should be taken in the various states of a conversation so as to maximize the expected total reward to attain the goal, and it incorporates good exploration and exploitation in its updates to improve the naturalness of human-computer interaction. We test the proposed approach using the PARADISE evaluation framework for access to a railway information system.
Keywords: Dialogue management, learning automata, reinforcement learning, spoken dialogue system.
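A single learning automaton of the kind interconnected above can be illustrated with the classic linear reward-inaction (L_R-I) update; the reward signal, learning rate, and action set below are illustrative assumptions, not the paper's configuration.

```python
import random

# Minimal sketch of a linear reward-inaction (L_R-I) learning automaton;
# the learning rate and the "good action" reward signal are illustrative.
random.seed(0)

def lri_update(probs, chosen, reward, lam=0.1):
    """On reward, shift probability mass toward the chosen action;
    on penalty, leave the probabilities unchanged (reward-inaction)."""
    if not reward:
        return list(probs)
    new = [(1.0 - lam) * p for p in probs]
    new[chosen] += lam       # total mass is preserved: (1-lam)*1 + lam = 1
    return new

probs = [0.25, 0.25, 0.25, 0.25]      # e.g. four candidate dialogue actions
for _ in range(50):
    action = random.choices(range(4), weights=probs)[0]
    probs = lri_update(probs, action, reward=(action == 2))  # action 2 is "good"

assert abs(sum(probs) - 1.0) < 1e-9
assert probs[2] == max(probs)         # mass concentrates on the rewarded action
```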
7934 Determination of Resistance to Freezing of Bonded Façade Joint
Authors: B. Nečasová, P. Liška, J. Šlanhof
Abstract:
Verification of a vented wooden façade system with bonded joints is presented in this paper. The potential of bonded joints is studied and described in detail. The paper presents the results of experimental and theoretical research on the effects of freeze cycling on the bonded joint. For the purpose of the tests, spruce timber profiles were chosen for the load-bearing substructure. Planks of wood-plastic composite and Siberian larch represent the façade cladding. Two types of industrial polyurethane adhesives intended for structural bonding were selected. The article focuses on the preparation as well as the subsequent curing and conditioning of the test samples. All test samples were subjected to 15 cycles representing sudden temperature changes, i.e. immersion in a water bath at (293.15 ± 3) K for 6 hours and subsequent freezing to (253.15 ± 2) K for 18 hours. Furthermore, the retention of bond strength between the substructure and the cladding was tested, and the strength in shear was determined under tensile stress. The research data indicate that little, if any, damage to the bond results from freezing cycles. Additionally, the suitability of the selected group of adhesives in combination with the timber substructure was confirmed.
Keywords: Adhesive system, bonded joints, wooden lightweight façade, timber substructure.
7933 Local Curvelet Based Classification Using Linear Discriminant Analysis for Face Recognition
Authors: Mohammed Rziza, Mohamed El Aroussi, Mohammed El Hassouni, Sanaa Ghouzali, Driss Aboutajdine
Abstract:
In this paper, an efficient local appearance feature extraction method based on the multi-resolution Curvelet transform is proposed in order to further enhance the performance of the well-known Linear Discriminant Analysis (LDA) method when applied to face recognition. Each face is described by a subset of band-filtered images containing block-based Curvelet coefficients. These coefficients characterize the face texture, and a set of simple statistical measures allows us to form compact and meaningful feature vectors. The proposed method is compared with related feature extraction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). Two different multi-resolution transforms, Wavelet (DWT) and Contourlet, were also compared against the block-based Curvelet-LDA algorithm. Experimental results on the ORL, YALE and FERET face databases convince us that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.
Keywords: Curvelet, Linear Discriminant Analysis (LDA), Contourlet, Discrete Wavelet Transform (DWT), block-based analysis, face recognition (FR).
7932 A Monte Carlo Method for Data Stream Analysis
Authors: Kittisak Kerdprasop, Nittaya Kerdprasop, Pairote Sattayatham
Abstract:
Data stream analysis is the process of computing various summaries and derived values from large amounts of data which are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. They can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
Keywords: Data stream, Monte Carlo, sampling, density estimation.
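The two-step idea above (reduce the stream to a fixed-size sample, then approximate a summary by Monte Carlo over that sample) can be sketched as follows. Plain reservoir sampling stands in for the authors' EMR sampling method, which is not reproduced here.

```python
import random

# Hedged sketch: standard reservoir sampling (standing in for the paper's
# EMR sampling) followed by a Monte Carlo estimate of a stream summary.
random.seed(42)

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown
    length, touching each element exactly once."""
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = random.randint(0, i)   # replace with decreasing probability
            if j < k:
                sample[j] = x
    return sample

stream = iter(range(100_000))          # stands in for a continuous data stream
sample = reservoir_sample(stream, 1_000)
estimate = sum(sample) / len(sample)   # Monte Carlo estimate of the stream mean

assert len(sample) == 1_000
assert abs(estimate - 49_999.5) < 5_000   # close to the true mean
```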
7931 The Development and Testing of a Small Scale Dry Electrostatic Precipitator for the Removal of Particulate Matter
Authors: Derek Wardle, Tarik Al-Shemmeri, Neil Packer
Abstract:
This paper presents a small tube/wire type electrostatic precipitator (ESP). In the ESP's present form, particle charging and collecting voltages and airflow rates were individually varied throughout 200 ambient temperature test runs, ranging from 10 to 30 kV in increments of 5 kV and from 0.5 m/s to 1.5 m/s, respectively. It was repeatedly observed that, at input air velocities of between 0.5 and 0.9 m/s and voltage settings of 20 kV to 30 kV, the collection efficiency remained above 95%. The outcomes of preliminary tests at combustion flue temperatures are, at present, inconclusive, although indications are that there is little or no drop in comparable performance under ideal test conditions. A limited set of similar tests was carried out during which the collecting electrode was grounded, having been disconnected from the static generator. The collecting efficiency fell significantly, and for that reason this approach was not pursued further. The collecting efficiencies during ambient temperature tests were determined by mass balance between incoming and outgoing dry PM. The efficiencies of combustion temperature runs were determined by analysing the difference in opacity of the flue gas at inlet and outlet compared to a reference light source. In addition, an array of Leit tabs (carbon coated, electrically conductive adhesive discs) was placed at inlet and outlet for a number of four-day continuous ambient temperature runs. Analysis of the discs' contamination was carried out using scanning electron microscopy and ImageJ computer software, which confirmed collection efficiencies of over 99% and gave unequivocal support to all the previous tests. The average efficiency for these runs was 99.409%. Emissions collected from a woody biomass combustion unit, classified to a diameter of 100 µm, were used in all ambient temperature test runs apart from two, which collected airborne dust from within the laboratory.
Sawdust and wood pellets were chosen for laboratory and field combustion trials. Video recordings were made of three ambient temperature test runs in which the smoke from a wood smoke generator was drawn through the precipitator. Although these runs were visual indicators only, with no objective other than to display, they provided a strong argument for the device’s claimed efficiency, as no emissions were visible at exit when energised. The theoretical performance of ESPs, when applied to the geometry and configuration of the tested model, was compared to the actual performance and was shown to be in good agreement with it.
Keywords: Electrostatic precipitators, air quality, particulates emissions, electron microscopy, ImageJ.
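Measured collection efficiencies such as those above are often compared against the classical Deutsch-Anderson model for tube-type ESPs; the migration velocity and geometry below are illustrative values, not the authors' rig.

```python
import math

# The classical Deutsch-Anderson efficiency model for ESPs, as a point of
# comparison; all numbers below are illustrative assumptions.

def deutsch_anderson_efficiency(w, area, flow):
    """eta = 1 - exp(-w*A/Q), with w = particle migration velocity (m/s),
    area = collecting electrode area (m^2), flow = gas flow rate (m^3/s)."""
    return 1.0 - math.exp(-w * area / flow)

eta = deutsch_anderson_efficiency(w=0.12, area=0.5, flow=0.01)
assert eta > 0.95   # consistent in magnitude with the >95% reported above
```

The model also explains the observed velocity dependence: a higher airflow rate (larger Q) lowers the exponent w*A/Q and hence the predicted efficiency.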
7930 Multiscale Blind Image Restoration with a New Method
Authors: Alireza Mallahzadeh, Hamid Dehghani, Iman Elyasi
Abstract:
A new method, based on normal shrink and a modified version of the Katsaggelos and Lay algorithm, is proposed for multiscale blind image restoration. The method deals with both the noise and the blur in the images. It is shown that normal shrink gives the highest S/N (signal-to-noise ratio) for the image denoising process. The multiscale blind image restoration is divided into two sections: the first part of this paper proposes normal shrink for image denoising, and the second part proposes a modified version of Katsaggelos and Lay for blur estimation, combining both methods to reach a multiscale blind image restoration.
Keywords: Multiscale blind image restoration, image denoising, blur estimation.
7929 On Identity Disclosure Risk Measurement for Shared Microdata
Authors: M. N. Huda, S. Yamada, N. Sonehara
Abstract:
Probability-based identity disclosure risk measurement may give the same overall risk for different anonymization strategies of the same dataset. Some entities in the anonymous dataset may have higher identification risks than others. Individuals are more concerned about risks higher than the average and are more interested to know whether they have a possibility of being under higher risk. An overall risk figure from such a measurement method doesn't indicate whether some of the involved entities have a higher identity disclosure risk than others. In this paper, we introduce an identity disclosure risk measurement method that not only gives the overall risk, but also indicates whether some of the members have a higher risk than the others. The proposed method quantifies the overall risk based on the individual risk values, the percentage of the records that have a risk value higher than the average, and how much larger the higher risk values are compared to the average. We analyze the disclosure risks for different disclosure control techniques applied to original microdata and present the results.
Keywords: Anonymization, microdata, disclosure risk, privacy.
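The three ingredients named above (individual risks, the share of records above the average, and how much larger they are) can be sketched as follows; the paper's exact combining formula is not reproduced, so this aggregation is an illustrative assumption.

```python
from statistics import mean

# Hedged sketch of the three ingredients the paper's measure is built from;
# the combination below is illustrative, not the paper's formula.

def risk_summary(risks):
    avg = mean(risks)
    high = [r for r in risks if r > avg]
    frac_high = len(high) / len(risks)            # share of records above average
    excess = mean(high) / avg if high else 1.0    # how much larger they are
    return avg, frac_high, excess

# Two datasets with the same average risk but different tails — the point
# the abstract makes about probability-based overall risk:
flat = [1.0, 1.0, 1.0, 1.0]
skewed = [0.5, 0.5, 0.5, 2.5]
assert mean(flat) == mean(skewed) == 1.0
assert risk_summary(skewed)[2] > risk_summary(flat)[2]   # skewed tail is flagged
```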
7928 Use of Fuzzy Logic in the Corporate Reputation Assessment: Stock Market Investors’ Perspective
Authors: Tomasz L. Nawrocki, Danuta Szwajca
Abstract:
The growing importance of reputation in building enterprise value and achieving long-term competitive advantage creates the need for its measurement and evaluation for management purposes (effective reputation and reputation risk management). The paper presents a practical application of a self-developed corporate reputation assessment model from the viewpoint of stock market investors. The model has a pioneering character, and the example analysis performed for a selected industry is a specific test of this tool. In the proposed solution, three aspects were considered: informational, financial and development, and social. It was also assumed that the individual sub-criteria would be based on public sources of information and that fuzzy logic would be used as the calculation apparatus capable of producing a synthetic final assessment. The main reason for developing this model was to fill the gap in the scope of synthetic measures of corporate reputation that would provide a higher degree of objectivity by relying on "hard" (not survey-based) and publicly available data. It should also be noted that the results obtained on the basis of the proposed corporate reputation assessment method allow various internal as well as inter-branch comparisons and analyses of the impact of corporate reputation.
Keywords: Corporate reputation, fuzzy logic, fuzzy model, stock market investors.
7927 Effects of Roughness Elements on Heat Transfer during Natural Convection
Abstract:
The present study investigated the effects of roughness elements on heat transfer during natural convection in a rectangular cavity using a numerical technique. Roughness elements were introduced on the bottom hot wall with a normalized amplitude (A*/H) of 0.1. Thermal and hydrodynamic behaviors were studied using a computational method based on the Lattice Boltzmann method (LBM). Numerical studies were performed for a laminar flow in the range of Rayleigh number (Ra) from 10^3 to 10^6 for a rectangular cavity of aspect ratio (L/H) 2.0 with a fluid of Prandtl number (Pr) 1.0. The presence of the sinusoidal roughness elements caused a decrease in heat transfer of between 7% and 17% compared to the smooth enclosure. The results are presented as mean Nusselt numbers (Nu), isotherms and streamlines.
Keywords: Natural convection, Rayleigh number, surface roughness, Nusselt number, Lattice Boltzmann method.
7926 The Effect of Geometry Dimensions on the Earthquake Response of the Finite Element Method
Authors: Morteza Jiryaei Sharahi
Abstract:
In this paper, the effect of the width and height of the model on the earthquake response in the finite element method is discussed. For this purpose, an earth dam, as a soil structure under earthquake loading, has been considered. Various dam-foundation models are analyzed with Plaxis, a finite element package for solving geotechnical problems. The results indicate considerable differences in the seismic responses.
Keywords: Geometry dimensions, finite element, earthquake.
7925 Fracture Characterization of Plain Woven Fabric Glass-Epoxy Composites
Authors: Sabita Rani Sahoo, A. Mishra
Abstract:
Delamination between layers is a major structural failure in composite materials. The delamination resistance is quantified by the critical strain energy release rate (SERR). The present investigation deals with the strain energy release rate of two woven fabric composites. The materials are made of two types of plain-weave glass fiber (360 gsm and 600 gsm) with epoxy as the matrix. The fracture behavior is studied using the mode I double cantilever beam (DCB) test and the mode II end notched flexure (ENF) test, in order to determine the energy required for the initiation and growth of an artificial crack. The delamination energies of the two materials are compared in order to study the effect of weave and reinforcement on the mechanical properties. The fracture mechanism is also analyzed by means of scanning electron microscopy (SEM). It is observed that the plain-weave fabric composite with the smaller strand width has higher interlaminar fracture properties than the plain-weave fabric composite with the larger strand width.
Keywords: Glass-epoxy composites, fracture tests: mode I (DCB) and mode II (ENF), delamination, strain energy release rate, SEM analysis.
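DCB test data are commonly reduced with the simple beam theory expression G_I = 3*P*delta / (2*B*a). A minimal sketch with illustrative specimen numbers (not the paper's data):

```python
# Simple beam theory reduction of DCB (mode I) test data; the load,
# displacement, and specimen dimensions below are illustrative only.

def serr_dcb(load_n, displacement_m, width_m, crack_length_m):
    """Mode I strain energy release rate (J/m^2): G_I = 3*P*delta/(2*B*a)."""
    return 3.0 * load_n * displacement_m / (2.0 * width_m * crack_length_m)

g1 = serr_dcb(load_n=60.0, displacement_m=0.004, width_m=0.02, crack_length_m=0.05)
assert abs(g1 - 360.0) < 1e-9   # 3*60*0.004 / (2*0.02*0.05) = 0.72/0.002 J/m^2
```

The value at the critical (crack-initiation) load gives the critical SERR the abstract refers to; mode II reduction from the ENF test uses a different beam-theory expression.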
7924 Copy-Move Image Forgery Detection in Virtual Electrostatic Field
Authors: Michael Zimba, Darlison Nyirenda
Abstract:
A novel copy-move image forgery (CMIF) detection method is proposed. The proposed method presents a new approach which relies on electrostatic field theory (EFT). Solely for the purpose of reducing the dimension of a suspicious image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image and extracts only the approximation subband. The extracted subband is then bijectively mapped onto a virtual electrostatic field, where concepts of EFT are utilized to extract robust features. The extracted features are invariant to additive noise, JPEG compression, and affine transformation. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect duplicated regions. SATS is a better option than the common shift vector method because it is insensitive to affine transformation. Consequently, the proposed CMIF algorithm is not only fast but also more robust to attacks than the existing related CMIF algorithms. The experimental results show high detection rates, as high as 100% in some cases.
Keywords: Affine transformation, Radix sort, SATS, Virtual electrostatic field.
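As a rough sketch of the dimension-reduction step, a one-level 2D Haar DWT approximation subband can be computed with plain NumPy; a real implementation would more likely use a wavelet library such as PyWavelets, and the toy 4x4 "image" is an assumption:

```python
import numpy as np

def haar_approximation(img):
    """One-level 2D Haar DWT approximation subband: each coefficient is
    the sum of a 2x2 pixel block divided by 2, halving each dimension."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2].astype(float)
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"
cA = haar_approximation(img)
print(cA.shape)  # a quarter of the pixels: (2, 2)
```

Working on `cA` instead of the full image cuts the feature-extraction cost by roughly a factor of four per decomposition level.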
7923 Protein Secondary Structure Prediction Using Parallelized Rule Induction from Coverings
Authors: Leong Lee, Cyriac Kandoth, Jennifer L. Leopold, Ronald L. Frank
Abstract:
Protein 3D structure prediction has always been an important research area in bioinformatics. In particular, the prediction of secondary structure has been a well-studied research topic. Despite the recent breakthrough of combining multiple sequence alignment information and artificial intelligence algorithms to predict protein secondary structure, the Q3 accuracy of various computational prediction algorithms has rarely exceeded 75%. In a previous paper [1], this research team presented a rule-based method called RT-RICO (Relaxed Threshold Rule Induction from Coverings) to predict protein secondary structure. The average Q3 accuracy on the sample datasets using RT-RICO was 80.3%, an improvement over comparable computational methods. Although this demonstrated that RT-RICO might be a promising approach for predicting secondary structure, the algorithm's computational complexity and program running time limited its use. Herein a parallelized implementation of a slightly modified RT-RICO approach is presented. This new version of the algorithm facilitated the testing of a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO achieved a Q3 score of 74.6%, which is higher than the consensus prediction accuracy of 72.9% that was achieved for the same test dataset by a combination of four secondary structure prediction methods [2].
Keywords: Data mining, protein secondary structure prediction, parallelization.
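The Q3 score mentioned above is simply the fraction of residues whose predicted three-state label (helix H, strand E, coil C) matches the observed one; a minimal sketch with hypothetical sequences:

```python
def q3_accuracy(predicted, actual):
    """Q3: fraction of residues whose predicted secondary-structure
    state (H = helix, E = strand, C = coil) matches the observed state."""
    assert len(predicted) == len(actual)
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

# Hypothetical 10-residue example: 8 of the 10 states agree
pred = "HHHHEECCCC"
obs  = "HHHHEECCHH"
print(q3_accuracy(pred, obs))  # 0.8
```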
7922 The Influence of the Geogrid Layers on the Bearing Capacity of Layered Soils
Authors: S. A. Naeini, H. R. Rahmani, M. Hossein Zade
Abstract:
Many classical bearing capacity theories assume that the natural soil layers are homogeneous when determining the bearing capacity of the soil. But in many practical projects, we encounter multi-layer soils. Geosynthetics have been extensively used as reinforcement materials in the construction of various structures. In this paper, numerical analysis of the Plate Load Test (PLT) using ABAQUS software was carried out for double-layered soils with different thicknesses of sandy and gravelly layers reinforced with geogrid. The PLT is one of the common field methods for determining parameters such as the soil bearing capacity, the compressibility, and the subgrade reaction modulus. In fact, the influence of the geogrid layers on the bearing capacity of the layered soils is investigated, and the most appropriate spacing and number of reinforcement layers are determined. Results show that using three layers of geogrid with a spacing of 0.3 times the width of the loading plate is most effective in increasing the bearing capacity of double-layer (sand and gravel) soils. Also, the largest increase in bearing capacity between unreinforced soil and soil reinforced with three layers of geogrid occurs when the upper (gravel) layer thickness equals the loading plate width.
Keywords: Bearing capacity, reinforcement, geogrid, plate load test, layered soils.
7921 Effectiveness of Moringa oleifera Coagulant Protein as Natural Coagulant Aid in Removal of Turbidity and Bacteria from Turbid Waters
Authors: B. Bina, M.H. Mehdinejad, Gunnel Dalhammer, Guna Rajarao, M. Nikaeen, H. Movahedian Attar
Abstract:
Coagulation of water involves the use of coagulating agents to bring the suspended matter in the raw water together for the settling and filtration stages. The present study examines the effects of aluminum sulfate as coagulant, in conjunction with Moringa oleifera coagulant protein as coagulant aid, on turbidity, hardness, and bacteria in turbid water. A conventional jar test apparatus was employed for the tests. The best removal was observed at a pH of 7 to 7.5 for all turbidities. Turbidity removal efficiencies of 80% to 99% were achieved with Moringa oleifera coagulant protein as coagulant aid. The dosages of coagulant and coagulant aid decreased with increasing turbidity. In addition, Moringa oleifera coagulant protein significantly reduced the required dosage of primary coagulant. Residual Al3+ in treated water was less than 0.2 mg/L, which meets the Environmental Protection Agency guidelines. The results showed that a turbidity reduction of 85.9% to 98%, paralleled by a primary Escherichia coli reduction of 1-3 log units (99.2-99.97%), was obtained within the first 1 to 2 h of treatment. In conclusion, Moringa oleifera coagulant protein as coagulant aid can be used for drinking water treatment without the risk of organic or nutrient release. We demonstrated that the optimal design method is an efficient approach for optimization of the coagulation-flocculation process and is appropriate for raw water treatment.
Keywords: MOCP, coagulant aid, turbidity removal, E. coli removal, water treatment.
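The reported relation between log units and percent removal (1-3 log units corresponding to roughly 99.2-99.97%) follows directly from the definition of log reduction; a small sketch with hypothetical plate counts:

```python
import math

def log_reduction(n_initial, n_final):
    """Log10 reduction of viable bacteria: 1 log = 90% removal,
    2 log = 99%, 3 log = 99.9%."""
    return math.log10(n_initial / n_final)

def percent_removal(log_units):
    """Convert a log10 reduction back to a percent removal."""
    return 100.0 * (1.0 - 10.0 ** (-log_units))

# Hypothetical counts: 10^6 CFU/mL before, 10^3 CFU/mL after treatment
print(log_reduction(1.0e6, 1.0e3))     # 3.0 log units
print(round(percent_removal(3.0), 1))  # 99.9 percent
```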
7920 Satellite Imagery Classification Based on Deep Convolution Network
Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu
Abstract:
Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic algorithm based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results show that the proposed hyper-parameter search method guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
Keywords: Satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization.
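A genetic-algorithm hyper-parameter search of the kind described can be sketched as follows; the search space, the fitness stand-in, and the population and generation sizes are all illustrative assumptions, since training the actual DCNN is out of scope here:

```python
import random

random.seed(0)

# Hypothetical two-dimensional hyper-parameter space for a DCNN
SPACE = {"lr": [0.1, 0.01, 0.001], "filters": [32, 64, 128]}

def fitness(ind):
    """Stand-in for validation accuracy; a real run would train and
    evaluate the DCNN with these hyper-parameters instead."""
    return -abs(ind["lr"] - 0.01) - abs(ind["filters"] - 64) / 1000.0

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(ind):
    """Replace one randomly chosen hyper-parameter with a new value."""
    child = dict(ind)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

# Generational loop: keep the best half, refill by mutating survivors
population = [random_individual() for _ in range(8)]
for _ in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:4]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(4)]

best = max(population, key=fitness)
print(best)
```

Because each fitness evaluation stands in for a full network training run, the selection pressure lets the search spend its expensive evaluations in promising regions rather than on a uniform grid.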
7919 Rough Set Based Intelligent Welding Quality Classification
Authors: L. Tao, T. J. Sun, Z. H. Li
Abstract:
The knowledge base of welding defect recognition is essentially incomplete. This characteristic means that the recognition results do not reflect the actual situation, which in turn influences the classification of welding quality. This paper is concerned with a rough set based method to reduce this influence and improve the classification accuracy. First, a rough set model of intelligent welding quality classification is built, and both condition and decision attributes are specified. Then, groups of representative multiple compound defects are chosen from the defect library and classified correctly to form the decision table. Finally, the redundant information in the decision table is reduced and the optimal decision rules are obtained. By this method, we are able to reclassify the misclassified defects to the right quality level. Compared with ordinary methods, this method has higher accuracy and better robustness.
Keywords: Intelligent decision, rough set, welding defects, welding quality level.
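The core rough set machinery behind such a decision table, indiscernibility classes and the lower approximation of a decision class, can be sketched as follows; the weld-defect attributes and values here are hypothetical, not taken from the paper's defect library:

```python
from collections import defaultdict

# Hypothetical decision table: condition attributes (echo, shape) of
# weld indications, decision = quality verdict
table = [
    ({"echo": "high", "shape": "round"},  "reject"),
    ({"echo": "high", "shape": "round"},  "reject"),
    ({"echo": "low",  "shape": "round"},  "accept"),
    ({"echo": "low",  "shape": "linear"}, "accept"),
    ({"echo": "high", "shape": "linear"}, "accept"),
]

def indiscernibility_classes(table, attrs):
    """Group objects that are indistinguishable on the chosen attributes."""
    classes = defaultdict(list)
    for i, (cond, _) in enumerate(table):
        key = tuple(cond[a] for a in attrs)
        classes[key].append(i)
    return list(classes.values())

def lower_approximation(table, attrs, decision):
    """Objects certainly belonging to a decision class: their entire
    indiscernibility class shares that decision value."""
    return sorted(i for cls in indiscernibility_classes(table, attrs)
                  if all(table[j][1] == decision for j in cls)
                  for i in cls)

print(lower_approximation(table, ["echo", "shape"], "accept"))  # [2, 3, 4]
```

Attribute reduction then amounts to dropping condition attributes whose removal leaves these approximations unchanged; here, dropping "shape" shrinks the lower approximation, so "shape" is not redundant.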
7918 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis
Authors: Isao Taguchi, Yasuo Sugai
Abstract:
This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output layer unit. Compared with more recent neural networks such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When learning objects are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics to an output layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies learning data in a phased manner, focusing on their learnability prior to learning in the multilayered neural network; this paper demonstrates the validity of the multi-stage learning method. Specifically, computer experiments verify that both the learning accuracy and the learning time of the BP method, used as the learning rule within the multi-stage learning method, are improved. In learning, oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which oscillatory phenomena occur in learning. Furthermore, by observing behaviors during learning, the authors discuss the reasons that the errors of some data remain large even after learning.
Keywords: Data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.
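The idea of classifying training data by learnability and learning in stages can be illustrated with a toy one-weight function approximation in which a noisy "hard" sample is held out of the second stage; the data, learning rate, and error threshold are illustrative assumptions, not the authors' setup:

```python
# Toy function-approximation data: fit y = 2x with a single weight w.
data = [(x / 10.0, 2.0 * (x / 10.0)) for x in range(1, 11)]
# One "hard" (noisy) sample that keeps pulling w away from 2.0.
data.append((0.5, 5.0))

def train(samples, w=0.0, lr=0.1, epochs=50):
    """Sequential LMS updates: w += lr * (y - w*x) * x per sample."""
    for _ in range(epochs):
        for x, y in samples:
            w += lr * (y - w * x) * x
    return w

# Stage 1: a short pre-run on all data to score each sample's learnability.
w0 = train(data, epochs=5)
# Stage 2: keep only samples the current model already fits well, then
# continue training, mimicking phased selection by learnability.
easy = [(x, y) for x, y in data if abs(y - w0 * x) < 1.0]
w_staged = train(easy, w=w0)
print(round(w_staged, 2))
```

With the hard sample deferred, the weight settles close to the underlying slope of 2.0 instead of oscillating between the clean trend and the outlier.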
7917 Analysis of the Structural Fluctuation of the Permitted Building Areas and Housing Distribution Ratios - Focused on 5 Cities Including Bucheon
Authors: Cheon Sik Min, Hyeong Wook Song, Sook Yeon Shim, Hoon Chang
Abstract:
The purpose of this study was to analyze the correlation between permitted building areas and housing distribution ratios and their fluctuation, and to test a distribution model across three successive governments in five cities including Bucheon, with reference to time series administrative data; the results are then interpreted in association with the policies pursued by the successive governments in order to examine the structural fluctuation of permitted building areas and housing distribution ratios. To analyze the fluctuation of permitted building areas and housing distribution ratios during the three successive governments and to examine the cycles of the time series data, a spectral analysis was performed. To analyze the correlation between permitted building areas and housing distribution ratios, tabulation was performed to describe the correlations statistically. Finally, to explain the differences in the fluctuation distributions of permitted building areas and housing distribution ratios among the three governments, a goodness of fit test was conducted.
Keywords: Permitted building areas, housing distribution ratios, structural fluctuation.
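The spectral analysis step, identifying cycles in a time series via its periodogram, can be sketched with NumPy's FFT; the synthetic monthly series with a 12-step cycle is a stand-in for the administrative data, which are not available here:

```python
import numpy as np

# Hypothetical monthly series with a 12-step cycle as a stand-in for
# the permitted-building-area time series.
n = 120
t = np.arange(n)
series = 100 + 10 * np.sin(2 * np.pi * t / 12)

# Periodogram via the real FFT; the peak frequency reveals the cycle.
spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)
dominant = freqs[np.argmax(spectrum)]
print(round(1.0 / dominant))  # recovered cycle length in time steps
```

On real administrative data the spectrum would show several peaks, and their locations would be compared across the three government periods to detect shifts in the fluctuation cycles.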