Search results for: estimation of the number of blind sources
4180 Improved Feature Processing for Iris Biometric Authentication System
Authors: Somnath Dey, Debasis Samanta
Abstract:
Iris-based biometric authentication is gaining importance in recent times. Iris biometric processing, however, is a complex and computationally very expensive process. In the overall processing of an iris-based biometric authentication system, feature processing is an important task. In feature processing, we extract iris features, which are ultimately used in matching. Since there is a large number of iris features and computational time increases as the number of features increases, it is a challenge to develop an iris processing system with as few features as possible without compromising correctness. In this paper, we address this issue and present an approach to the feature extraction and feature matching process. We apply the Daubechies D4 wavelet with 4 levels to extract features from iris images. These features are encoded with 2 bits by quantizing into 4 quantization levels. With our proposed approach it is possible to represent an iris template with only 304 bits, whereas existing approaches require as many as 1024 bits. In addition, we assign different weights to different iris regions when comparing two iris templates, which significantly increases the accuracy. Further, we match the iris templates based on a weighted similarity measure. Experimental results on several iris databases substantiate the efficacy of our approach.
Keywords: Iris recognition, biometric, feature processing, pattern recognition, pattern matching.
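The abstract above outlines a wavelet-decomposition and 2-bit quantization step followed by weighted matching. The sketch below only illustrates that kind of pipeline, assuming PyWavelets; the wavelet name ('db4' here, although the paper's "D4" may denote the 4-tap Daubechies wavelet, 'db2' in PyWavelets naming), the choice of sub-band, the quantile thresholds, and the weighting are assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' implementation) of multilevel Daubechies
# decomposition of a normalized iris strip followed by 2-bit quantization.
import numpy as np
import pywt

def iris_feature_code(iris_strip, wavelet="db4", levels=4):
    # 2-D multilevel wavelet decomposition of the unwrapped iris region
    coeffs = pywt.wavedec2(iris_strip, wavelet, level=levels)
    approx = coeffs[0].ravel()                  # keep the coarsest sub-band only
    # quantize each coefficient into 4 levels (2 bits) using quartile thresholds
    q = np.quantile(approx, [0.25, 0.5, 0.75])
    level_idx = np.digitize(approx, q)          # integer values 0..3
    bits = ((level_idx[:, None] >> np.array([1, 0])) & 1).astype(np.uint8)
    return bits.ravel()                         # binary template

def weighted_similarity(t1, t2, weights):
    # weighted similarity between two equal-length binary templates
    diff = (t1 != t2).astype(float)
    return 1.0 - np.sum(weights * diff) / np.sum(weights)
```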
4179 Performance Evaluation of MUSIC and Minimum Norm Eigenvector Algorithms in Resolving Noisy Multiexponential Signals
Authors: Abdussamad U. Jibia, Momoh-Jimoh E. Salami
Abstract:
Eigenvector methods are gaining increasing acceptance in the area of spectrum estimation. This paper presents a successful attempt at testing and evaluating the performance of two of the most popular types of subspace techniques in determining the parameters of multiexponential signals with real decay constants buried in noise. In particular, MUSIC (Multiple Signal Classification) and minimum-norm techniques are examined. It is shown that these methods perform almost equally well on multiexponential signals with MUSIC displaying better defined peaks.
Keywords: Eigenvector, minimum norm, multiexponential, subspace.
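As an illustration of the subspace idea discussed above, the sketch below computes a generic MUSIC pseudospectrum from a sample correlation matrix. It is not the authors' procedure: resolving real decay constants of multiexponential signals requires additional preprocessing that is not reproduced here, and the window length, model order, and search grid are assumptions.

```python
# A generic MUSIC pseudospectrum sketch (illustrative only).
import numpy as np

def music_pseudospectrum(x, p, M=32, n_grid=512):
    # build the sample correlation matrix from overlapping snapshots
    N = len(x)
    snapshots = np.array([x[i:i + M] for i in range(N - M + 1)])
    R = snapshots.conj().T @ snapshots / snapshots.shape[0]
    # eigendecomposition; the M - p smallest eigenvalues span the noise subspace
    _, V = np.linalg.eigh(R)
    En = V[:, :M - p]
    freqs = np.linspace(0.0, 0.5, n_grid)
    spectrum = np.empty(n_grid)
    for k, f in enumerate(freqs):
        a = np.exp(-2j * np.pi * f * np.arange(M))       # steering vector
        spectrum[k] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return freqs, spectrum
```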
4178 Spatio-temporal Variations in Heavy Metal Concentrations in Sediment of Qua Iboe River Estuary, Nigeria
Authors: Justina I. R. Udotong, Ime R. Udotong, Offiong U. Eka
Abstract:
The concentrations of heavy metals in sediments of the Qua Iboe River Estuary (QIRE) were monitored at four different sampling locations in the wet and dry seasons. A preliminary survey to determine the four sampling stations along the river continuum showed that salinity ranged from <0.1‰ at the control station to 21.5‰ at the fourth station, with corresponding variations in other physicochemical parameters. The estuary was found to be polluted with heavy metals from point and nonpoint sources to varying degrees. Mean nickel values of 7.80 mg/kg, 4.97 mg/kg and 2.80 mg/kg were obtained for sediment samples from the Douglas Creek, Qua Iboe and Atlantic sampling locations, respectively, in the dry season. The wet season nickel concentrations were, however, lower. The entire study area was grossly contaminated by iron. At Douglas Creek, the concentration of iron in sediment was 9274 ± 9.54 mg/kg, while copper, nickel, lead and vanadium were <0.5 mg/kg each, compared to iron. Bioaccumulation was therefore suspected within the study area, as zinc values of 31.00 ± 0.79, 36.00 ± 0.10 and 55.00 ± 0.05 mg/kg were recorded in sediment at the Douglas Creek, Atlantic and control sampling locations. The results from this study showed that the sources of these heavy metals were point sources such as the corrosion of steel pipes from old bridges as well as oily sludge wastes from the Qua Iboe Terminal / tank farm located within the vicinity of the study area.
Keywords: Heavy metal, Qua Iboe River Estuary, seasonal variations, sediment.
4177 A Content-Based Optimization of Data Stream Television Multiplex
Authors: Jaroslav Polec, Martin Šimek, Michal Martinovič, Elena Šikudová
Abstract:
The television multiplex has a fixed reserved capacity, and therefore only a limited number of videos can be carried in it. Appropriate composition of the multiplex has a major impact on how many videos the multiplex can carry. Therefore, this paper designs a simple algorithm to optimize the capacity utilization of the multiplex. The number of programs in the multiplex is also significantly affected by which programs it is composed of. The content of the multiplex can be movies, news, sport, animated stories, documentaries, etc. These types have their own specific characteristics that affect their resulting data streams. This paper also analyzes the impact of the composition of the multiplex on the utilization of its capacity by video content.
Keywords: Multiplex, content, group of pictures, frame, capacity.
4176 Combustion Improvements by C4/C5 Bio-Alcohol Isomer Blended Fuels Combined with Supercharging and EGR in a Diesel Engine
Authors: Yasufumi Yoshimoto, Enkhjargal Tserenochir, Eiji Kinoshita, Takeshi Otaka
Abstract:
Next generation bio-alcohols produced from non-food based sources like cellulosic biomass are promising renewable energy sources. The present study investigates engine performance, combustion characteristics, and emissions of a small single cylinder direct injection diesel engine fueled by four kinds of next generation bio-alcohol isomer and diesel fuel blends with a constant blending ratio of 3:7 (mass). The tested bio-alcohol isomers here are n-butanol and iso-butanol (C4 alcohol), and n-pentanol and iso-pentanol (C5 alcohol). To obtain simultaneous reductions in NOx and smoke emissions, the experiments employed supercharging combined with EGR (Exhaust Gas Recirculation). The boost pressures were fixed at two conditions, 100 kPa (naturally aspirated operation) and 120 kPa (supercharged operation) provided with a roots blower type supercharger. The EGR rates were varied from 0 to 25% using a cooled EGR technique. The results showed that both with and without supercharging, all the bio-alcohol blended diesel fuels improved the trade-off relation between NOx and smoke emissions at all EGR rates while maintaining good engine performance, when compared with diesel fuel operation. It was also found that regardless of boost pressure and EGR rate, the ignition delays of the tested bio-alcohol isomer blends are in the order of iso-butanol > n-butanol > iso-pentanol > n-pentanol. Overall, it was concluded that, except for the changes in the ignition delays the influence of bio-alcohol isomer blends on the engine performance, combustion characteristics, and emissions are relatively small.
Keywords: Alternative fuel, Butanol, Diesel engine, EGR, Next generation bio-alcohol isomer blended fuel, Pentanol, Supercharging.
4175 Effect of Cowpea (Vigna sinensis L.) with Maize (Zea mays L.) Intercropping on Yield and Its Components
Authors: W. A. Hamd Alla, E. M. Shalaby, R. A. Dawood, A. A. Zohry
Abstract:
A field experiment was carried out at the Arab El-Awammer Research Station, Agricultural Research Center, Assiut Governorate, during the summer seasons of 2013 and 2014. The present study assessed the effect of cowpea-maize intercropping on yield and its components. The experiment comprised three treatments (sole cowpea, sole maize and cowpea-maize intercrop). The experimental design was a randomized complete block with four replications. Results indicated that maize plants intercropped with cowpea exhibited greater potential and gave higher values for most of the studied criteria, viz., plant height, number of ears/plant, number of rows/ear, number of grains/row, grain weight/ear, 100-grain weight, and straw and grain yields. Fresh and dry forage yields of cowpea were lower in intercropping with maize than in sole cropping. Furthermore, the combined analysis of the two seasons revealed that the total Land Equivalent Ratio (LER) between cowpea and maize was 1.65. The Aggressivity (A) of maize was 0.45 and of cowpea was -0.45, showing that maize was the dominant crop, whereas cowpea was the dominated one. The Competitive Ratio (CR) indicated that maize was more competitive than cowpea (maize 1.75, cowpea 0.57). The Actual Yield Loss (AYL) of maize was 0.05 and of cowpea was -0.40. The Monetary Advantage Index (MAI) was 2360.80.
Keywords: Intercropping, cowpea, maize, land equivalent ratio (LER).
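The competition indices reported above (LER, aggressivity, competitive ratio, actual yield loss) can be computed from the sole-crop and intercrop yields. The sketch below uses the standard textbook definitions of these indices; the sowing-proportion arguments and function names are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of common intercropping competition indices.
def intercrop_indices(y_maize_inter, y_maize_sole,
                      y_cowpea_inter, y_cowpea_sole,
                      z_maize=0.5, z_cowpea=0.5):
    ler_maize = y_maize_inter / y_maize_sole       # partial land equivalent ratios
    ler_cowpea = y_cowpea_inter / y_cowpea_sole
    ler = ler_maize + ler_cowpea
    # aggressivity of maize with respect to cowpea (cowpea's value is the negative)
    a_maize = ler_maize / z_maize - ler_cowpea / z_cowpea
    # competitive ratios
    cr_maize = (ler_maize / ler_cowpea) * (z_cowpea / z_maize)
    # actual yield loss relative to the sown proportion
    ayl_maize = ler_maize / z_maize - 1.0
    ayl_cowpea = ler_cowpea / z_cowpea - 1.0
    return {"LER": ler, "A_maize": a_maize, "A_cowpea": -a_maize,
            "CR_maize": cr_maize, "CR_cowpea": 1.0 / cr_maize,
            "AYL_maize": ayl_maize, "AYL_cowpea": ayl_cowpea}
```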
4174 Optimal External Merge Sorting Algorithm with Smart Block Merging
Authors: Mir Hadi Seyedafsari, Iraj Hasanzadeh
Abstract:
Like other external sorting algorithms, the presented algorithm is a two-step algorithm comprising internal and external steps. The first part of the algorithm is like that of other similar algorithms, but the second part includes a new, easily implemented method which significantly reduces the large number of input-output operations. Since decreasing processor operating time does not affect the main algorithm speed, any improvement should be made by decreasing the number of input-output operations. This paper proposes a simple algorithm for choosing the correct record location in the final list. This decreases the time complexity and makes the algorithm faster.
Keywords: External sorting algorithm, internal sorting algorithm, fast sorting, robust algorithm.
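For orientation, the sketch below shows a generic two-phase external merge sort (sorted runs produced in memory, then a k-way merge); it is not the authors' smart block-merging scheme, and the chunk size, temporary-file handling and line-oriented records are assumptions.

```python
# A minimal generic external merge sort sketch.
import heapq
import os
import tempfile

def external_sort(input_path, output_path, chunk_size=100_000):
    run_paths = []
    # phase 1: internal sort of fixed-size chunks, each written as a sorted run
    with open(input_path) as src:
        while True:
            chunk = [line for _, line in zip(range(chunk_size), src)]
            if not chunk:
                break
            chunk.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as run:
                run.writelines(chunk)
            run_paths.append(path)
    # phase 2: k-way merge of the sorted runs
    runs = [open(p) for p in run_paths]
    with open(output_path, "w") as dst:
        dst.writelines(heapq.merge(*runs))
    for f, p in zip(runs, run_paths):
        f.close()
        os.remove(p)
```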
4173 Estimating Enzyme Kinetic Parameters from Apparent KMs and Vmaxs
Authors: Simon Brown, Noorzaid Muhamad, David C Simcock
Abstract:
The kinetic properties of enzymes are often reported using the apparent KM and Vmax appropriate to the standard Michaelis-Menten enzyme. However, this model is inappropriate for enzymes that have more than one substrate or where the rate expression does not apply for other reasons. Consequently, it is desirable to have a means of estimating the appropriate kinetic parameters from the apparent values of KM and Vmax reported for each substrate. We provide a means of estimating the range within which the parameters should lie and apply the method to data for glutamate dehydrogenase from the nematode parasite of sheep Teladorsagia circumcincta.
Keywords: Enzyme kinetics, glutamate dehydrogenase, interval analysis, parameter estimation.
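To make the notion of "apparent" parameters concrete, the sketch below shows how apparent KM and Vmax arise when a two-substrate rate law is assayed by varying one substrate at a fixed level of the other. The ordered (ternary-complex) rate form is an assumption used purely for illustration; the interval-analysis estimation described in the paper is not reproduced.

```python
# Illustrative only: apparent parameters from a two-substrate rate law.
def two_substrate_rate(a, b, Vmax, Ka, Kb, Kia):
    # v = Vmax*a*b / (Kia*Kb + Kb*a + Ka*b + a*b)  (ordered mechanism, no products)
    return Vmax * a * b / (Kia * Kb + Kb * a + Ka * b + a * b)

def apparent_parameters(b, Vmax, Ka, Kb, Kia):
    # at fixed co-substrate b the rate reduces to Michaelis-Menten form in a,
    # with the apparent constants below
    vmax_app = Vmax * b / (Kb + b)
    km_app = (Kia * Kb + Ka * b) / (Kb + b)
    return vmax_app, km_app
```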
4172 Parkinson's Disease Classification using Neural Network and Feature Selection
Authors: Anchana Khemphila, Veera Boonjing
Abstract:
In this study, the Multi-Layer Perceptron (MLP) with the back-propagation learning algorithm is used to classify and effectively diagnose Parkinson's disease (PD), a challenging problem for the medical community. Typically characterized by tremor, PD occurs due to the loss of dopamine in the brain's thalamic region, which results in involuntary or oscillatory movement in the body. A feature selection algorithm is used along with biomedical test values to diagnose Parkinson's disease. Clinical diagnosis is done mostly by the doctor's expertise and experience, but cases of wrong diagnosis and treatment are still reported. Patients are asked to take a number of tests for diagnosis, and in many cases not all the tests contribute towards effective diagnosis of the disease. Our work is to classify the presence of Parkinson's disease with a reduced number of attributes. Originally, 22 attributes are involved in the classification. We use information gain to determine the attributes, which reduces the number of tests that need to be taken from patients. An artificial neural network is used to classify the diagnosis of patients. The twenty-two attributes are reduced to sixteen attributes. The accuracy on the training data set is 82.051% and on the validation data set is 83.333%.
Keywords: Data mining, classification, Parkinson disease, artificial neural networks, feature selection, information gain.
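The pipeline described above (information-gain attribute ranking followed by a neural-network classifier) can be prototyped in a few lines. The sketch below uses scikit-learn's mutual-information scorer as the information-gain surrogate and an MLP classifier; the train/validation split, network topology and other settings are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch of feature selection plus MLP classification.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def classify_parkinsons(X, y, n_features=16):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    # keep the n_features attributes with the highest mutual information
    selector = SelectKBest(mutual_info_classif, k=n_features).fit(X_tr, y_tr)
    X_tr, X_va = selector.transform(X_tr), selector.transform(X_va)
    scaler = StandardScaler().fit(X_tr)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(scaler.transform(X_tr), y_tr)
    return clf.score(scaler.transform(X_tr), y_tr), clf.score(scaler.transform(X_va), y_va)
```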
4171 A Novel, Cost-effective Design to Harness Ocean Energy in the Developing Countries
Authors: S. Ayub, S.N. Danish, S.R. Qureshi
Abstract:
The world's population continues to grow at a quarter of a million people per day, increasing the consumption of energy; this has led the world to face an energy crisis. In response, the principles of renewable energy have gained popularity. Much advancement has been made in developing wind and solar energy farms across the world, but these farms are not enough to meet the world's energy requirement. This has attracted investors to procure new sources of energy as substitutes. Among these sources, extraction of energy from waves is considered the best option, as the world's oceans contain enough energy to meet that requirement. Significant advancements in design and technology are being made to turn waves into a continuous source of energy. One major hurdle in launching wave energy devices in a developing country like Pakistan is the initial cost. A simple, reliable and cost-effective wave energy converter (WEC) is required to meet the nation's energy need. This paper presents a novel design, proposed by team SAS, for harnessing wave energy. The paper has three major sections. The first section gives a brief and concise view of ocean wave creation, propagation and the energy carried by waves. The second section explains the design of SAS-2, in which a gear chain mechanism is used for transferring energy from the buoy to a rotary generator. The third section explains the manufacturing of a scaled-down model of SAS-2; many modifications were made in the troubleshooting stage. The design of SAS-2 is simple and requires very little maintenance. SAS-2 is producing electricity at Clifton, and its initial cost is very low. This proves SAS-2 to be a cost-effective and reliable means of harnessing wave energy for developing countries.
Keywords: Clean Energy, Wave energy
4170 Particle Swarm Optimization and Quantum Particle Swarm Optimization to Multidimensional Function Approximation
Authors: Diogo Silva, Fadul Rodor, Carlos Moraes
Abstract:
This work compares the results of multidimensional function approximation using two algorithms: the classical Particle Swarm Optimization (PSO) and the Quantum Particle Swarm Optimization (QPSO). Both algorithms were tested on three functions with different characteristics - the Rosenbrock, the Rastrigin, and the sphere functions - while increasing their number of dimensions. This study shows that the larger the function dimension, the more evident the advantages of the QPSO method over the PSO method in terms of performance and the number of iterations necessary to reach the stop criterion.
Keywords: PSO, QPSO, function approximation, AI, optimization, multidimensional functions.
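The classical PSO update loop referred to above is short enough to sketch directly. The version below minimizes the sphere function; the swarm size, inertia weight and acceleration coefficients are common textbook values and are assumptions here, and the QPSO variant compared in the paper is not shown.

```python
# A minimal classical PSO sketch on the sphere function.
import numpy as np

def sphere(x):
    return np.sum(x * x, axis=-1)

def pso(f=sphere, dim=10, n_particles=30, iters=500, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), f(x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = f(x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()
```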
4169 A Fully Parallel Reverse Converter
Authors: Mehdi Hosseinzadeh, Amir Sabbagh Molahosseini, Keivan Navi
Abstract:
The residue number system (RNS) is popular in high performance computation applications because of its carry-free nature. The challenges of RNS system design lie in the moduli set selection and in the reverse conversion from residue representation to weighted representation. In this paper, we propose a fully parallel reverse conversion algorithm for the moduli set {r^n - 2, r^n - 1, r^n}, based on simple mathematical relationships. An efficient hardware realization of this algorithm is also presented. Our proposed converter is much faster and results in hardware savings compared to other reverse converters.
Keywords: Reverse converter, residue to weighted converter, residue number system, multiple-valued logic, computer arithmetic.
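For readers unfamiliar with reverse conversion, the sketch below shows the generic Chinese Remainder Theorem computation that a reverse converter performs; the paper's contribution is a fully parallel, hardware-oriented construction for the specific moduli set, which is not reproduced here. The example values (r = 3, n = 2) are an assumption chosen so that the moduli are pairwise coprime.

```python
# A generic residue-to-weighted (reverse) conversion via the CRT.
from math import prod

def crt_reverse_convert(residues, moduli):
    M = prod(moduli)
    x = 0
    for r_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        # pow(..., -1, m) gives the modular multiplicative inverse (Python 3.8+)
        x += r_i * M_i * pow(M_i, -1, m_i)
    return x % M

# example with r = 3, n = 2: moduli {r^n - 2, r^n - 1, r^n} = {7, 8, 9}
moduli = [7, 8, 9]
x = 123
residues = [x % m for m in moduli]
assert crt_reverse_convert(residues, moduli) == x
```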
4168 Solvatochromic Shift and Estimation of Dipole Moment of Quinine Sulphate Dication
Abstract:
Absorption and fluorescence spectra of quinine sulphate dication (QSD) have been recorded at room temperature in a wide range of solvents of different polarities. The ground-state dipole moment of QSD was obtained from quantum mechanical calculations, and the excited-state dipole moment of QSD was estimated from Bakhshiev's and Kawski-Chamma-Viallet's equations by means of the solvatochromic shift method. A higher dipole moment value is observed for the excited state than for the corresponding ground state, and this is attributed to the more polar excited state of QSD.
Keywords: Dipole moment, quinine sulphate dication, solvatochromic shift.
4167 Dynamic Analysis of Reduced Order Large Rotating Vibro-Impact Systems
Authors: Miroslav Byrtus
Abstract:
Large rotating systems, especially gear drives and gearboxes, occur as parts of many mechanical devices transmitting torque with relatively small loss of power. With the increased demand for high speed machinery, mathematical modeling and dynamic analysis of gear drives have gained importance. Mathematical description of such mechanical systems is a complex task that has been evolving for several decades. In gear drive dynamic models which include flexible shafts, bearings and gearing and use finite elements, nonlinear effects due to gear mesh and bearings are usually ignored, because such models have a large number of degrees of freedom (DOF) and it is computationally expensive to analyze nonlinear systems with a large number of DOF. Therefore, these models are not suitable for simulation of nonlinear behavior with amplitude jumps in the frequency response. This contribution uses a methodology for modeling nonlinear large rotating systems which is based on reduction of the number of DOF using the modal synthesis method (MSM). The MSM enables a significant DOF reduction while keeping the nonlinear behavior of the system in a specific frequency range. Further, the MSM with DOF reduction is suitable for including detailed models of nonlinear couplings (mainly gear and bearing couplings) in the complete gear drive models. Since each subsystem is modeled separately using different FEM systems, it is advantageous to parameterize the models of the subsystems and to use the parameterization for optimization of chosen design parameters. The final complex model of the gear drive is assembled in MATLAB, and MATLAB tools are used for dynamic analysis of the nonlinear system. The contribution further focuses on developing a methodology for investigating the behavior of the system by nonlinear normal modes, combining the MSM with the numerical continuation method. The proposed methodology will be tested on a two-stage gearbox including its housing.
Keywords: Vibro-impact system, rotating system, gear drive, modal synthesis method, numerical continuation method, periodic solution.
4166 Extraction of Semantic Digital Signatures from MRI Photos for Image-Identification Purposes
Authors: Marios Poulos, George Bokos
Abstract:
This paper attempts to solve the problem of searching for and retrieving similar MRI photos via Internet services using morphological features which are sourced from the original image. This study is intended as an additional tool for search and retrieval methods. Until now, the main searching mechanism has been syntactic, based on keywords. The technique proposed here aims to serve the new requirements of libraries, one of which is the development of computational tools for the control and preservation of the intellectual property of digital objects, and especially of digital images. For this purpose, this paper proposes the use of a serial number extracted by a previously tested semantic properties method. This method, centered on the multi-layers of a set of arithmetic points, ensures the following two properties: the uniqueness of the final extracted number, and the semantic dependence of this number on the image used as the method's input. The major advantage of this method is that it can verify, to a reliable degree, the authentication of a published image or its partial modification. It also offers advantages over the known hash functions used by digital signature schemes, producing alphanumeric strings for authentication checking and a degree of similarity between an unknown image and an original image.
Keywords: Computational geometry, MRI photos, image processing, pattern recognition.
4165 Kalman Filter Based Adaptive Reduction of Motion Artifact from Photoplethysmographic Signal
Authors: S. Seyedtabaii, L. Seyedtabaii
Abstract:
Artifact-free photoplethysmographic (PPG) signals are necessary for non-invasive estimation of oxygen saturation (SpO2) in arterial blood. Movement of a patient corrupts the PPGs with motion artifacts, resulting in large errors in the computation of SpO2. This paper presents a study on using the Kalman filter in an innovative way, by modeling both the arterial blood pressure (ABP) and the unwanted signal, the additive motion artifact, to reduce motion artifacts from corrupted PPG signals. Simulation results show acceptable performance compared with LMS and variable step-size LMS, thus establishing the efficacy of the proposed method.
Keywords: Kalman filter, motion artifact, PPG, photoplethysmography.
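To illustrate the recursion at the core of such an approach, the sketch below runs a scalar Kalman filter with a random-walk state model over a noisy measurement sequence. This is only a minimal illustration: the paper models both the ABP and the additive motion artifact, which this sketch does not, and the process and measurement noise variances are assumptions.

```python
# A minimal scalar Kalman filter sketch (random-walk state model).
import numpy as np

def kalman_smooth(z, q=1e-4, r=1e-1):
    x_hat = np.zeros(len(z), dtype=float)
    x, p = float(z[0]), 1.0                # initial state estimate and covariance
    for k, zk in enumerate(z):
        # predict (random-walk model: x_k = x_{k-1} + w, w ~ N(0, q))
        p = p + q
        # update with measurement z_k = x_k + v, v ~ N(0, r)
        K = p / (p + r)                    # Kalman gain
        x = x + K * (zk - x)
        p = (1.0 - K) * p
        x_hat[k] = x
    return x_hat
```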
4164 An Efficient Spam Mail Detection by Counter Technique
Authors: Raheleh Kholghi, Soheil Behnam Roudsari, Alireza Nemaney Pour
Abstract:
Spam mails are unwanted mails sent to a large number of users. Spam mails not only consume network resources but cause security threats as well. This paper proposes an efficient technique to detect and prevent spam mail on the sender side rather than the receiver side. The technique is based on a counter set on the sender server. When a mail is submitted to the server, the mail server checks the number of recipients against its counter policy, which is based on some pre-defined criteria. When the number of recipients exceeds the counter policy, the mail server discontinues the rest of the process and sends a failure mail to the sender; otherwise the mail is transmitted through the network. By using this technique, network resources such as bandwidth and memory are preserved. Simulation results on a real network show that when the counter is set on the sender side, spam mail detection is 100 times faster than when the counter is set on the receiver side, and network resources are preserved to a large extent compared with other receiver-side anti-spam techniques.
Keywords: Anti-spam, mail server, sender side, spam mail.
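The counter policy described above amounts to a recipient-count check before relaying. The sketch below is a minimal illustration of that check; the threshold value, function names and the bounce message are assumptions, not details from the paper.

```python
# A minimal sender-side recipient-counter sketch.
MAX_RECIPIENTS = 50   # assumed counter-policy threshold

def handle_outgoing_mail(sender, recipients, body, relay, notify):
    if len(recipients) > MAX_RECIPIENTS:
        # discontinue processing and return a failure notice to the sender
        notify(sender, f"Message rejected: {len(recipients)} recipients "
                       f"exceeds the limit of {MAX_RECIPIENTS}.")
        return False
    for rcpt in recipients:
        relay(sender, rcpt, body)          # normal delivery path
    return True
```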
4163 Heuristic Search Algorithm (HSA) for Enhancing the Lifetime of Wireless Sensor Networks
Authors: Tripatjot S. Panag, J. S. Dhillon
Abstract:
The lifetime of a wireless sensor network can be effectively increased by using scheduling operations. Once the sensors are randomly deployed, the task at hand is to find the largest number of disjoint sets of sensors such that every sensor set provides complete coverage of the target area. At any instant, only one of these disjoint sets is switched on, while all others are switched off. This paper proposes a heuristic search method to find the maximum number of disjoint sets that completely cover the region. A population of randomly initialized members is made to explore the solution space. A set of heuristics is applied to guide the members to a possible solution in their neighborhood; the heuristics accelerate the convergence of the algorithm. The best solution explored by the population is recorded and continuously updated. The proposed algorithm has been tested for applications which require sensing of multiple target points, referred to as point coverage applications. Results show that the proposed algorithm outclasses the existing algorithms: it always finds the optimum solution, and does so with fewer fitness function evaluations than the existing approaches.
Keywords: Coverage, disjoint sets, heuristic, lifetime, scheduling, wireless sensor networks, WSN.
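For comparison with the heuristic search described above, a simple greedy baseline for building disjoint cover sets is sketched below; it is not the authors' algorithm and offers no optimality guarantee. The coverage representation (a mapping from each sensor to the set of targets it senses) is an assumption.

```python
# A greedy baseline for disjoint set covers in point-coverage WSNs.
def greedy_disjoint_covers(coverage, targets):
    available = set(coverage)
    covers = []
    while True:
        uncovered, chosen = set(targets), []
        pool = set(available)
        while uncovered:
            # pick the sensor covering the most still-uncovered targets
            best = max(pool, key=lambda s: len(coverage[s] & uncovered), default=None)
            if best is None or not (coverage[best] & uncovered):
                return covers              # remaining sensors cannot cover all targets
            chosen.append(best)
            uncovered -= coverage[best]
            pool.discard(best)
        if not chosen:                     # empty target set; nothing to schedule
            return covers
        covers.append(chosen)
        available -= set(chosen)           # keep the covers disjoint
```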
4162 Heat Transfer Characteristics and Fluid Flow past Staggered Flat-Tube Bank Using CFD
Authors: Zeinab Sayed Abdel-Rehim
Abstract:
A computational fluid dynamics code (Fluent 6.2) for two-dimensional fluid flow is applied to predict the pressure drop and heat transfer characteristics of laminar and turbulent flow past a staggered flat-tube bank. The effect of the aspect ratio ((H/D)/(L/D)) on pressure drop, temperature, and velocity contours for laminar and turbulent flow over the staggered flat-tube bank is studied. The theoretical results of the present models are compared with previously published experimental data of different authors, and satisfactory agreement is demonstrated. A comparison between the present study and other analytical methods relating the Reynolds number to the Nusselt number is also made. The results show that as the Reynolds number increases, the maximum velocity in the passage between the upper and lower tubes increases. The comparisons show fair agreement, especially in the turbulent flow region, and the good agreement of the present data with these recommended analytical methods validates the current study.
Keywords: Aspect ratio ((H/D)/(L/D)), CFD, fluid flow, heat transfer, staggered arrangement, tube bank, and turbulent flow.
4161 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear to be stable in their natural state, normally dry condition, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often yield disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large scale sub-surface soil exploration purposes, the spatial sub-surface soil dielectric data over wide areas and high depths of weak (collapsible) soil deposits can be obtained using non-destructive high frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small or large scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions and a good match between the model prediction and experimental results is obtained.
Keywords: Collapsible soil, relative subsidence, dielectric permittivity, moisture content.
4160 Ingenious Use of Hypo Sludge in M25 Concrete
Authors: Abhinandan Singh Gill
Abstract:
Paper mill sludge is one of the major economic and environmental problems for the paper and board industry; millions of tonnes of sludge are produced worldwide. It is essential to dispose of these wastes safely without affecting human health, the environment, fertile land, water bodies, or the economy, since the sludge can adversely affect the strength, durability and other properties of building materials based on it. Moreover, in developing countries like India, where the availability of non-renewable resources is low and the need for building materials like cement is large, it is essential to develop eco-efficient utilization of paper sludge. In functional terms, paper sludge primarily comprises cellulose fibers, calcium carbonate, china clay, low silica, and residual chemicals bound with water. The material is sticky and full of moisture, which is hard to dry. The manufacturing of paper usually produces large amounts of solid waste. Paper fibers are recycled in paper mills a limited number of times, until they become too weak to produce high-quality paper. Thereafter, the small and weak pieces left out, called low-quality paper fibers, are separated out to become paper sludge. The material is a by-product of the de-inking and re-pulping of paper, and this hypo sludge includes all kinds of inks, dyes, coatings, etc., inscribed on the paper. This paper presents an overview of the published work on the use of hypo sludge in M25 concrete formulations as a supplementary cementitious material, exploring properties such as compressive strength and splitting strength, parameters like modulus of elasticity and density, applications, and, most importantly, the investigation of low-cost concrete using hypo sludge.
Keywords: Concrete, sludge waste, hypo sludge, supplementary cementitious material.
4159 Variable Guard Channels for Efficient Traffic Management
Authors: G. M. Mir, N. A. Shah, Moinuddin
Abstract:
Guard channels improve the probability of successful handoffs by reserving a number of channels exclusively for handoffs. This concept carries the risk of underutilization of the radio spectrum, because fewer channels are granted to originating calls even when the guard channels are not in use and originating calls are starved for channels. The penalty is a reduction of the total carried traffic. The optimum number of guard channels can help reduce this problem. This paper presents a fuzzy logic based guard channel scheme wherein guard channels are reorganized on the basis of traffic density, so that guard channels are provided on a need basis. This helps to accommodate more originating calls and hence achieve higher throughput of the radio spectrum.
Keywords: Free channels, fuzzy logic, guard channels, handoff.
4158 New Design Constraints of FIR Filter on Magnitude and Phase of Error Function
Authors: Raghvendra Kumar, Lillie Dewan
Abstract:
An exchange algorithm with separate constraints on the magnitude and phase of the error function is presented in this paper in a new way. An important feature of the algorithms presented here is that they allow for design constraints which often arise in practical filter design problems. Meeting a required minimum stopband attenuation or a maximum deviation from the desired magnitude and phase responses in the passbands are common design constraints that can be handled by the methods proposed here. The new algorithm may have important advantages over existing techniques with respect to the speed and stability of convergence, memory requirements and ripple levels.
Keywords: Least square estimation, Constraints, Exchange algorithm.
4157 Evaluation of Ultrasonic C-Scan Images by Fractal Dimension
Authors: S. Samanta, D. Datta, S. S. Gautam
Abstract:
In this paper, quantitative evaluation of ultrasonic C-scan images through estimation of their fractal dimension (FD) is discussed. The necessary algorithm for evaluating the FD of any 2-D digitized image is implemented in a computer code. For the evaluation, several C-scan images of a Kevlar composite impacted by a high-speed bullet and of a glass fibre composite having a flaw in the form of an inclusion are used. The analysis automatically differentiates a C-scan image showing a distinct damage zone from an image that contains no such damage.
Keywords: C-scan, impact, fractal dimension, Kevlar composite, inclusion flaw.
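A common way to estimate the fractal dimension of a 2-D binary image is box counting, sketched below. The abstract does not state which FD estimator was implemented, so treating the damage map as a binarized image and using the box-counting slope are assumptions made purely for illustration.

```python
# A minimal box-counting fractal dimension estimate for a binary image.
import numpy as np

def box_counting_dimension(binary_image):
    img = np.asarray(binary_image, dtype=bool)
    n = 2 ** int(np.floor(np.log2(min(img.shape))))
    img = img[:n, :n]                          # crop to a power-of-two square
    sizes, counts = [], []
    size = n
    while size >= 1:
        blocks = img.reshape(n // size, size, n // size, size)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes containing any damage pixel
        if occupied > 0:
            sizes.append(size)
            counts.append(occupied)
        size //= 2
    # slope of log N(s) versus log(1/s) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```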
4156 Extended Least Squares LS–SVM
Authors: József Valyon, Gábor Horváth
Abstract:
Among neural models, Support Vector Machine (SVM) solutions are attracting increasing attention, mostly because they eliminate certain crucial questions involved in neural network construction. The main drawback of the standard SVM is its high computational complexity; therefore a new technique, the Least Squares SVM (LS-SVM), has recently been introduced. In this paper we present an extended view of Least Squares Support Vector Regression (LS-SVR), which enables us to develop new formulations and algorithms for this regression technique. Based on manipulating the linear equation set, which embodies all information about the regression in the learning process, some new methods are introduced to simplify the formulations, speed up the calculations and/or provide better results.
Keywords: Function estimation, Least-Squares Support Vector Machines, regression, system modeling.
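For context, the sketch below shows the standard LS-SVR training step that the abstract's linear equation set refers to: a single linear system in the bias and the dual variables. The extended formulations proposed in the paper are not reproduced; the RBF kernel, its width, and the regularization constant are assumptions.

```python
# A compact standard LS-SVR sketch: training reduces to one linear system.
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * sigma**2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    # solve [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    # return the resulting regression function
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha + b
```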
4155 Robust Face Recognition Using Eigen Faces and Karhunen-Loeve Algorithm
Authors: Parvinder S. Sandhu, Iqbaldeep Kaur, Amit Verma, Prateek Gupta
Abstract:
The current research paper is an implementation of eigenfaces and the Karhunen-Loeve algorithm for face recognition. The designed program works in a manner where a unique identification number is given to each face under trial. These faces are kept in a database from which any particular face can be matched against the available test faces. The Karhunen-Loeve algorithm has been implemented to find the right face (with the same features) corresponding to a given input test image having a unique identification number. The procedure involves the use of eigenfaces for the recognition of faces.
Keywords: Eigenfaces, Karhunen-Loeve algorithm, face recognition.
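A minimal eigenfaces pipeline of the kind described above is sketched below: the Karhunen-Loeve transform (PCA) of the training faces gives a low-dimensional subspace, and an unknown face is identified by nearest-neighbour matching of its projection. The number of retained components and the use of an SVD are illustrative assumptions.

```python
# A minimal eigenfaces (PCA) sketch for face identification.
import numpy as np

def train_eigenfaces(faces, n_components=20):
    # faces: (n_images, n_pixels) matrix, one flattened face per row
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data gives the eigenfaces as right singular vectors
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:n_components]
    weights = centered @ eigenfaces.T           # training projections
    return mean, eigenfaces, weights

def recognize(face, mean, eigenfaces, weights, ids):
    w = (face - mean) @ eigenfaces.T
    distances = np.linalg.norm(weights - w, axis=1)
    return ids[int(np.argmin(distances))]       # identification number of best match
```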
4154 Comparison Analysis of the Wald's and the Bayes Type Sequential Methods for Testing Hypotheses
Authors: K. J. Kachiashvili
Abstract:
A comparison analysis of the Wald's and Bayes-type sequential methods for testing hypotheses is offered. The merits of the new sequential test are: universality, consisting in optimality (with given criteria) and uniformity of the decision-making regions for any number of hypotheses; simplicity, convenience and uniformity of the algorithms for their realization; and reliability of the obtained results together with the possibility of keeping the error probabilities at desired values. Computation results for concrete examples are given which confirm the above-stated characteristics of the new method and characterize the considered methods with respect to each other.
Keywords: Errors of types I and II, likelihood ratio, Bayes-type sequential test, Wald's sequential test, average number of observations.
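As a reference point for the comparison above, the sketch below implements Wald's classical SPRT for two simple hypotheses about a Gaussian mean with known variance; the Bayes-type sequential test is not reproduced, and the Gaussian observation model and the error-probability values are assumptions.

```python
# A minimal Wald sequential probability ratio test (SPRT) sketch.
import numpy as np

def sprt_gaussian(samples, mu0, mu1, sigma=1.0, alpha=0.05, beta=0.05):
    # stopping bounds from the desired type I (alpha) and type II (beta) errors
    a, b = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr, n = 0.0, 0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for one Gaussian observation
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr <= a:
            return "accept H0", n
        if llr >= b:
            return "accept H1", n
    return "undecided", n
```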
4153 Simple and Advanced Models for Calculating Single-Phase Diode Rectifier Line-Side Harmonics
Authors: Hussein A. Kazem, Abdulhakeem Abdullah Albaloshi, Ali Said Ali Al-Jabri, Khamis Humaid AlSaidi
Abstract:
This paper proposes different methods for estimation of the harmonic currents of the single-phase diode bridge rectifier. Both simple and advanced methods are compared, and the models are put into a context of practical use for calculating the harmonic distortion in a typical application. Finally, the different models are compared to measurements of a real application, and convincing results are achieved.
Keywords: Single-phase rectifier, line-side harmonics.
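A simple numerical reference for such models is a direct FFT-based evaluation of the line-current harmonics and the total harmonic distortion from a sampled waveform, sketched below; this generic calculation is not one of the paper's models, and the record length, sampling setup and harmonic count are assumptions.

```python
# A generic FFT-based harmonic and THD estimate for a sampled line current,
# assuming an integer number of fundamental periods in the record.
import numpy as np

def harmonic_analysis(i_line, fs, f1, n_harmonics=19):
    N = len(i_line)
    spectrum = np.fft.rfft(i_line) / N * 2.0         # single-sided amplitudes
    df = fs / N
    amps = [abs(spectrum[int(round(h * f1 / df))]) for h in range(1, n_harmonics + 1)]
    thd = np.sqrt(sum(a**2 for a in amps[1:])) / amps[0]
    return amps, thd
```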
4152 Use of Hair as an Indicator of Environmental Lead Pollution: Changes after Twenty Years of Phasing Out Leaded Gasoline
Authors: M. A. Abou Donia, A. A. K. Abou-Arab, Nevin E. Sharaf, A. K. Enab, Sherif R. Mohamed
Abstract:
Lead (Pb) poisoning is one of the most common and preventable environmental health problems. There are different sources of environmental lead pollution, such as lead alkyl additives in petrol and manufacturing processes. Pb in the atmosphere can be deposited in urban soils, and may then be re-suspended to re-enter the atmosphere. This could increase human exposure to Pb and cause long-term health effects. Thus, monitoring Pb pollution is considered one of the major tasks in controlling pollution. Scalp hair can be utilized for the determination of lead (Pb) concentration. It provides a lasting record of metal intakes over weeks or even months, and for most metals, their accumulation in hair reflects their accumulation in the whole body. This work was conducted to investigate the concentration of lead in male scalp hair of Cairo (residential-traffic and residential-industrial) and rural residents after twenty years of phasing out of leaded gasoline. Results indicated that the mean concentration of lead in hair of residential-traffic (9.7552 μg/g ±0.71) and residential-industrial (12.3288 μg/g ±1.13) residents was significantly higher than that in rural residents (4.7327 μg/g ±0.67). The mean concentration of lead in hair of residents of industrial areas was the highest among Cairo residents, and not that of the traffic areas as it was before the phasing out of leaded gasoline. Twenty years of phasing out of leaded gasoline in Cairo has greatly reduced lead pollution among residents of traffic areas, but residents of industrial areas were still suffering from lead pollution, which calls for more efforts to control the sources of lead pollution.
Keywords: Heavy metals, lead, hair, biological sample, urban pollution, rural pollution.
4151 Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption
Authors: Ching Kin Keong, Koh Tieng Wei, Abdul Azim Abd. Ghani, Khaironi Yatim Sharif
Abstract:
Maintaining the factory-default battery endurance over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While trying to meet customers' unlimited expectations, developers are barely aware of efficient use of energy within the application itself. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present a few software product metrics that can be used as indicators to measure the energy consumption of Android-based mobile applications in the early design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect the power consumption data of mobile applications, which was then analyzed against 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code and method lines have a direct relationship with the power consumption of a mobile application.
Keywords: Battery endurance, software metrics, mobile application, power consumption.