Search results for: average symbol error rate

3876 Hot-Spot Blob Merging for Real-Time Image Segmentation

Authors: K. Kraus, M. Uiberacker, O. Martikainen, R. Reda

Abstract:

One of the major and most difficult tasks in automated video surveillance is the segmentation of relevant objects in the scene. Current implementations often yield inconsistent results from frame to frame when trying to differentiate partly occluding objects. This paper presents an efficient block-based segmentation algorithm which is capable of separating partly occluding objects and detecting shadows. It has been shown to perform in real time, with a maximum duration of 47.48 ms per frame (for 8x8 blocks on a 720x576 image) and a true positive rate of 89.2%. The flexible structure of the algorithm enables adaptations and improvements with little effort. Most of the parameters correspond to relative differences between quantities extracted from the image and should therefore not depend on scene and lighting conditions. The result is a performance-oriented segmentation algorithm applicable to time-critical real-time scenarios.
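The blob-merging algorithm itself is not reproduced in the abstract, but the block-based idea it builds on can be illustrated with a minimal sketch (Python/NumPy; the function name, threshold, and synthetic frame are hypothetical, not from the paper): each 8x8 block is flagged as foreground when its deviation from a background model exceeds a relative threshold, mirroring the abstract's emphasis on relative rather than absolute parameters.

```python
import numpy as np

def block_foreground_mask(frame, background, block=8, rel_thresh=0.15):
    """Flag a block as foreground when its mean absolute deviation from the
    background model exceeds a fraction of the local background brightness.
    frame, background: 2-D float arrays of identical shape (grayscale)."""
    h, w = frame.shape
    hb, wb = h // block, w // block
    # Reshape into (hb, block, wb, block) tiles and average over each tile.
    f = frame[:hb * block, :wb * block].reshape(hb, block, wb, block)
    b = background[:hb * block, :wb * block].reshape(hb, block, wb, block)
    diff = np.abs(f - b).mean(axis=(1, 3))   # per-block difference
    ref = b.mean(axis=(1, 3)) + 1e-6         # local background brightness
    return diff / ref > rel_thresh           # boolean block mask

# Example: a synthetic 720x576-like frame with a bright "object".
bg = np.full((576, 720), 80.0)
fr = bg.copy()
fr[100:180, 200:300] += 60.0                 # simulated moving object
mask = block_foreground_mask(fr, bg)
print(mask.sum(), "of", mask.size, "blocks flagged as foreground")
```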

Keywords: Image segmentation, Model-based, Region growing, Blob Analysis, Occlusion, Shadow detection, Intelligent video surveillance.

3875 Effect of Dietary Supplementation of Different Levels of Black Seed (Nigella Sativa L.) on Growth Performance, Immunological, Hematological and Carcass Parameters of Broiler Chicks

Authors: R. S. Shewita, A. E. Taha

Abstract:

This experiment was conducted to investigate the effect of dietary supplementation of different levels of black seed (Nigella sativa L.) on the performance and immune response of broiler chicks. A total of 240 day-old broiler chicks were used and randomly allotted equally into six experimental groups designated 1, 2, 3, 4, 5 and 6, receiving black seed at the rate of 0, 2, 4, 6, 8 and 10 g/kg diet, respectively. The study lasted 42 days. Average body weight, weight gain, relative growth rate, feed conversion, antibody titer against Newcastle disease, phagocytic activity and phagocytic index, some blood parameters (GOT, GPT, glucose, cholesterol, triglyceride, total protein, albumen, WBCs, RBCs, Hb and PCV), dressing percentage, weights of different body organs, and abdominal fat weight were determined. It was found that N. sativa significantly improved the final body weight, total body gain and feed conversion ratio of groups 2 and 3 when compared with the control group. Higher levels of N. sativa did not improve growth performance of the chicks. Non-significant differences were observed for antibody titer against Newcastle virus, WBC count, serum GOT, glucose level, dressing percentage, and relative liver, spleen, heart and head percentages. Lymphoid organs (bursa and thymus) improved significantly with increasing N. sativa level in all supplemented groups. Serum cholesterol, triglyceride and visible fat percentage decreased significantly with N. sativa supplementation, while serum GPT level increased significantly with supplementation.

Keywords: Nigella Sativa, broiler, growth, carcass traits, serum, blood

3874 Analysis of Precipitation Time Series of Urban Centers of Northeastern Brazil using Wavelet Transform

Authors: Celso A. G. Santos, Paula K. M. M. Freire

Abstract:

The urban centers of northeastern Brazil are strongly influenced by intense rainfall, which can occur after long periods of drought and lead to flood events. Thus, this paper aims to study the rainfall frequencies in the region through the wavelet transform. Wavelet analysis is applied to long time series of total monthly rainfall at the capital cities of northeastern Brazil. The main frequency components in the time series are studied through the global wavelet spectrum, and the modulation in separate periodicity bands is examined in order to extract additional information; e.g., the 8-16 month band is averaged over all scales in the band, giving a measure of the average annual variance versus time from which periods of low or high variance can be identified. Important increases in the average variance were identified for some periods, e.g., 1947 to 1952 at Teresina city, which can be considered wet periods. Although the precipitation at these sites showed similar global wavelet spectra, the local wavelet spectra revealed particular features. The approach is an important tool for time series analysis and can support studies on flood control, especially when applied together with rainfall-runoff simulations.
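As a hedged illustration of the two quantities described here (global wavelet spectrum and scale-averaged variance in an 8-16 month band), the sketch below uses PyWavelets on a synthetic monthly series; the Morlet mother wavelet, scale range, and data are assumptions, since the abstract does not state them.

```python
import numpy as np
import pywt

# Illustrative sketch (not the paper's data): synthetic monthly rainfall totals.
rng = np.random.default_rng(0)
months = np.arange(600)                                   # 50 years of data
rain = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 20, 600)

scales = np.arange(2, 128)
coef, freqs = pywt.cwt(rain, scales, "morl", sampling_period=1.0)  # 1 month
periods = 1.0 / freqs                                     # periods in months

# Global wavelet spectrum: time-average of wavelet power at each scale.
global_spectrum = (np.abs(coef) ** 2).mean(axis=1)

# Scale-averaged variance in the 8-16 month band versus time.
band = (periods >= 8) & (periods <= 16)
band_variance = (np.abs(coef[band]) ** 2).mean(axis=0)
print("dominant period (months):", periods[np.argmax(global_spectrum)])
```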

Keywords: rainfall data, urban center, wavelet transform.

3873 Developing Damage Assessment Model for Bridge Surroundings: A Study of Disaster by Typhoon Morakot in Taiwan

Authors: Jieh-Haur Chen, Pei-Fen Huang

Abstract:

This paper presents an integrated model that automatically measures river changes, the damaged area surrounding a bridge, and changes in vegetation. The proposed model is based on a neurofuzzy mechanism enhanced by a SOM optimization algorithm and includes three functions to deal with river imagery. High-resolution FORMOSAT-2 satellite imagery taken before and after the typhoon is adopted. For a bridge randomly selected from the 129 destroyed bridges, the recognition results show that the average river width increased by 66%. The ruined segment of the bridge is located exactly at the most scoured region. The vegetation coverage was also reduced to nearly 90% of the original. The results yielded by the proposed model demonstrate a pinpoint accuracy rate of 99.94%. This study provides a successful tool not only for large-scale damage assessment but also for precise measurement of disaster impacts.

Keywords: remote sensing image, damage assessment, typhoon disaster, bridge, ANN, fuzzy, SOM, optimization.

3872 Achievements of Healthcare Services Vis-À-Vis the Millennium Development Goals Targets: Evidence from Pakistan

Authors: Saeeda Batool, Ather Maqsood Ahmed

Abstract:

This study investigates the impact of public healthcare facilities and socio-economic circumstances on the status of child health in Pakistan. The analysis is carried out in correspondence with the fourth and sixth Millennium Development Goals (MDGs), and the health variables chosen are drawn from the targeted indicators of those goals. Trends in the Human Opportunity Index (HOI) for both health inequalities and coverage are analyzed using the Pakistan Social and Living Standards Measurement (PSLM) data set for 2001-02 to 2012-13 at the national and provincial levels. To reveal the relative importance of each circumstance in achieving the targeted values for child health, a Shorrocks decomposition is applied to the HOI. The average annual growth rate of the HOI is used to project the time required to achieve the MDG targets as well as universal access. The results indicate an improvement in the HOI for the reduction of child mortality rates from 52.1% in 2001-02 to 67.3% in 2012-13, which confirms the availability of healthcare opportunities to a larger segment of society. Similarly, immunization coverage for measles and other vaccines such as diphtheria, polio, Bacillus Calmette-Guerin (BCG), and hepatitis also registered an improvement from 51.6% to 69.9% over the study period at the national level. On a positive note, no gender disparity was found in child health indicators; health outcomes are mostly affected by parental and geographical characteristics and by the availability of health infrastructure. However, the study finds that this achievement has been uneven across provinces. Pakistan is lagging behind in achieving its health goals, and at the current rate of healthcare provision it will, disappointingly, take many additional years to reach its targets.

Keywords: Socio-economic circumstances, unmet MDGs, public healthcare services, child and infant mortality.

3871 Utilization of Juice Wastes as Corn Replacement in the Broiler Diet

Authors: Yose Rizal, Maria Endo Mahata, Mira Andriani, Guoyao Wu

Abstract:

An experiment was conducted with 80 unsexed broilers of the Arbor Acres strain to determine the capability of a carrot and fruit juice waste mixture (carrot, apple, mango, avocado, orange, melon and Dutch eggplant in equal proportions) to replace corn in the broiler diet. This study employed a completely randomized design (CRD) with 5 treatments (0, 5, 10, 15, and 20% juice waste mixture in the diet) and 4 replicates per treatment. Diets were isonitrogenous (22% crude protein) and isocaloric (3000 kcal/kg diet). Measured variables were feed consumption, average daily gain, feed conversion, and the percentages of abdominal fat pad, carcass, digestive organs (liver, pancreas and gizzard), and heart. Data were analyzed by analysis of variance for the CRD. Increasing the juice waste mixture level in the diet increased feed consumption (P<0.05) and average daily gain (P<0.01), while improving feed utilization efficiency (P<0.05). The treatments also affected (P<0.05) abdominal fat pad percentage but had no effect (P>0.05) on carcass, liver, pancreas, gizzard or heart percentages. In conclusion, up to 20% juice waste mixture could be included in the broiler diet, effectively replacing up to 40% of the corn in the diet.

Keywords: average daily gain, feed consumption, feed conversion, juice waste mixture

3870 Heavy Metals in Marine Sediments of Gulf of Izmir

Authors: E. Kam, Z. U. Yümün, D. Kurt

Abstract:

In this study, sediment samples were collected from four sampling sites located on the shores of the Gulf of İzmir. In the samples, Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn concentrations were determined using inductively coupled plasma optical emission spectrometry (ICP-OES). The average heavy metal concentrations were: Cd < LOD (limit of detection); Co 14.145 ± 0.13 μg g−1; Cr 112.868 ± 0.89 μg g−1; Cu 34.045 ± 0.53 μg g−1; Mn 481.43 ± 7.65 μg g−1; Ni 76.538 ± 3.81 μg g−1; Pb 11.059 ± 0.53 μg g−1; and Zn 140.133 ± 1.37 μg g−1. The results were compared with the average abundances of these elements in the Earth's crust. The measured heavy metal concentrations can serve as reference values for further studies carried out on the shores of the Aegean Sea.

Keywords: Heavy metal, Aegean Sea, ICP-OES, sediment.

3869 Multi-Rate Exact Discretization based on Diagonalization of a Linear System - A Multiple-Real-Eigenvalue Case

Authors: T. Sakamoto, N. Hori

Abstract:

A multi-rate discrete-time model, whose response agrees exactly with that of the continuous-time original at all sampling instants for any sampling periods, is developed for a linear system that is assumed to have multiple real eigenvalues. The sampling rates can be chosen arbitrarily and individually, so that their ratios can even be irrational. The state-space model is obtained as a combination of a linear diagonal state equation and a nonlinear output equation. Unlike the usual lifted model, the order of the proposed model is the same as the number of sampling rates, which is less than or equal to the order of the original continuous-time system. The method is based on a nonlinear variable transformation, which can be considered a generalization of the linear similarity transformation and which remains applicable to systems with multiple eigenvalues, where ordinary diagonalization cannot be applied in general. An example and its simulation result show that the proposed multi-rate model gives exact responses at all sampling instants.
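The paper's multi-rate, nonlinear-output construction is not reproduced here; as a hedged sketch of the single-rate exact discretization it builds on, the snippet below shows that the matrix exponential gives a discrete model whose response matches the continuous solution at every sampling instant, for an arbitrary (even irrational) period; the example system and period are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time system x' = A x with a repeated real eigenvalue (-1, -1).
A = np.array([[-1.0,  1.0],
              [ 0.0, -1.0]])
x0 = np.array([1.0, 0.5])

T = np.pi / 7          # an arbitrary (irrational) sampling period
Ad = expm(A * T)       # exact discrete transition matrix over one period

# Stepping the discrete model reproduces the continuous solution at t = kT.
x_disc = np.linalg.matrix_power(Ad, 5) @ x0
x_cont = expm(A * (5 * T)) @ x0
print(np.allclose(x_disc, x_cont))   # True: responses agree at sampling instants
```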

Keywords: Multi-rate discretization, linear systems, triangularization, similarity transformation, diagonalization, exponential transformation, multiple eigenvalues

3868 Experimental Test of a Combined Machine that Evenly Distributes Fertilizer under the Soil on Slopes

Authors: Qurbanov Huseyn Nuraddin

Abstract:

The paper substantiates the results of scientific research on a machine that applies an equal amount of mineral fertilizer under the soil to increase grain productivity and quality in mountain farming. The average yield of the crop depends on the nature of the distribution of fertilizers in the soil. Therefore, the study of effective energy-saving methods for the application of mineral fertilizers is a pressing task of modern agriculture. Depending on the type and variety of the plants in mountain farming, there is an optimal norm of mineral fertilizers. Applying an equal amount of fertilizer across the soil is one of the conditions that increases the efficiency of the field. One of the main agrotechnical performance indicators of mineral fertilizing machines is the equal distribution of mineral fertilizer in the field. Taking these issues into account, a combined plough has been improved in our laboratory.

Keywords: Combined plough, mineral fertilizers, sprinkle fluently, fertilizer rate, cereals.

3867 A New Preconditioned AOR Method for Z-matrices

Authors: Guangbin Wang, Ning Zhang, Fuping Tan

Abstract:

In this paper, we present a preconditioned AOR-type iterative method for solving linear systems Ax = b, where A is a Z-matrix, and give comparison theorems showing that the rate of convergence of the preconditioned AOR-type iterative method is faster than that of the original AOR-type iterative method.
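As a hedged sketch of the technique named here (the paper's specific preconditioner and comparison theorems are not reproduced), the snippet below runs the standard AOR iteration on a small Z-matrix system and on the same system left-multiplied by a simple preconditioner of the form P = I + S; the matrix, parameters, and choice of S are illustrative assumptions.

```python
import numpy as np

def aor_solve(A, b, omega=0.9, r=0.7, tol=1e-10, maxit=500):
    """Accelerated Over-Relaxation (AOR) iteration for A x = b."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)           # strictly lower part, with A = D - L - U
    U = -np.triu(A, 1)
    M = D - r * L
    N = (1 - omega) * D + (omega - r) * L + omega * U
    x = np.zeros_like(b)
    for k in range(maxit):
        x_new = np.linalg.solve(M, N @ x + omega * b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

# A small Z-matrix (non-positive off-diagonal entries, positive diagonal).
A = np.array([[ 4.0, -1.0, -0.5],
              [-1.0,  4.0, -1.0],
              [-0.5, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

# A simple left preconditioner P = I + S that eliminates one off-diagonal
# entry (one common choice; the paper's preconditioner may differ).
S = np.zeros_like(A)
S[0, 1] = -A[0, 1] / A[1, 1]
P = np.eye(3) + S

x_plain, it_plain = aor_solve(A, b)
x_prec, it_prec = aor_solve(P @ A, P @ b)
print(it_plain, it_prec, np.allclose(x_plain, x_prec))
```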

Keywords: Z-matrix, AOR-type iterative method, precondition, comparison.

3866 Influence of Tool Geometry on Surface Roughness and Tool Wear When Turning AISI 304L Using Taguchi Optimisation Methodology

Authors: Salah Gariani, Taher Dao, Ahmed Lajili

Abstract:

This paper presents an experimental optimisation of surface roughness (Ra) and tool wear in the precision turning of AISI 304L alloy using a wiper and conventional cutting tools under wet cutting conditions. The machining trials were conducted based on Taguchi methodology employing an L9 orthogonal array design with four process parameters: feed rate, spindle speed, depth of cut, and cutting tool type. The experimental results were utilised to characterise the main factors affecting Ra and tool wear using the analyses of means (AOM) and variance (ANOVA). The results show that the wiper tools outperformed conventional tools in terms of surface quality and tool wear at optimal cutting conditions. The ANOVA results indicate that the main factors contributing to lower Ra are cutting tool type and feed rate, with percentage contribution ratios (PCRs) of 58.69% and 25.18%, respectively. This confirms that tool type is the most significant factor affecting surface quality when turning AISI 304L. Additionally, a substantial reduction in tool wear was observed when a wiper insert was used, whereas noticeable increases in tool wear occurred when higher cutting speeds were employed for both tool types. These trends confirm the ANOVA outcomes that cutting speed has a significant effect on tool wear, with a PCR value of 39.22%, followed by tool type with a PCR of 27.40%. All machining trials generated similar continuous spiral or curl-shaped chips. A noticeable difference was found in the radius of the produced curl-shaped chips at different cutting speeds when turning AISI 304L under wet cutting conditions.
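For readers unfamiliar with the percentage contribution ratio (PCR) reported here, the small sketch below shows how it falls out of the ANOVA sums of squares; the sums-of-squares values are made up for illustration and are not the paper's data.

```python
# Illustrative sketch: a factor's PCR is its ANOVA sum of squares divided by
# the total sum of squares (hypothetical values, not the paper's).
sums_of_squares = {"tool type": 1.82, "feed rate": 0.78,
                   "spindle speed": 0.31, "depth of cut": 0.19}
total = sum(sums_of_squares.values())
for factor, ss in sums_of_squares.items():
    print(f"{factor}: PCR = {100.0 * ss / total:.2f}%")
```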

Keywords: AISI 304L alloy, conventional and wiper carbide tools, wet turning, average surface roughness, tool wear.

3865 Q-Test of Undergraduate Epistemology and Scientific Thought: Development and Testing of an Assessment of Scientific Epistemology

Authors: Matthew J. Zagumny

Abstract:

The QUEST is an assessment of scientific epistemic beliefs, developed to measure students' intellectual development with regard to beliefs about knowledge and knowing. The QUEST uses Q-sort methodology, which requires participants to rate the degree to which statements describe them personally. As a measure of personal theories of knowledge, the QUEST instrument is described, and its Q-sort distribution and scoring are explained. A preliminary demonstration of the QUEST is reported in which two samples of undergraduate students (novice/lower-division versus advanced/upper-division) were assessed and their average QUEST scores compared. The usefulness of an assessment of epistemology is discussed in terms of the principle that assessment tends to drive educational practice and university mission. The critical need for universities and academic programs to focus on the development of students' scientific epistemology is briefly discussed.

Keywords: Scientific epistemology, critical thinking, Q-sort method, STEM undergraduates.

3864 Formal Analysis of a Public-Key Algorithm

Authors: Markus Kaiser, Johannes Buchmann

Abstract:

In this article, a formal specification and verification of the Rabin public-key scheme in a formal proof system is presented. The idea is to use the two views of cryptographic verification: the computational approach, relying on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. A major objective of this article is the presentation of the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Moreover, we explicate a computer-proven formalization of correctness as well as a computer verification of security properties using a straightforward computation model in Isabelle/HOL. The analysis uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness, as well as efficient computer proofs of security properties, is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. Consequently, we obtain reliable proofs with a minimal error rate that augment the underlying database, which provides a formal basis for further computer-proof constructions in this area.
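The Isabelle/HOL formalization is not reproducible in this listing, but the underlying Rabin scheme is standard; the toy sketch below (tiny primes, no padding or redundancy, Python 3.8+ for the modular inverse) shows encryption as squaring modulo n and decryption as recovering the four square roots via the Chinese Remainder Theorem.

```python
# Hedged toy sketch of the Rabin scheme (not the paper's formalization).
p, q = 7, 11                  # primes congruent to 3 mod 4
n = p * q                     # public key; (p, q) is the private key

def encrypt(m, n):
    return (m * m) % n        # c = m^2 mod n

def decrypt(c, p, q):
    """Return the four square roots of c modulo n = p*q."""
    n = p * q
    mp = pow(c, (p + 1) // 4, p)      # root mod p (valid since p ≡ 3 mod 4)
    mq = pow(c, (q + 1) // 4, q)      # root mod q
    yp = pow(p, -1, q)                # p^{-1} mod q
    yq = pow(q, -1, p)                # q^{-1} mod p
    r = (yp * p * mq + yq * q * mp) % n   # CRT combination
    s = (yp * p * mq - yq * q * mp) % n
    return {r, n - r, s, n - s}

m = 20
roots = decrypt(encrypt(m, n), p, q)
print(m in roots, roots)      # the plaintext is among the four roots
```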

Keywords: public-key encryption, Rabin public-key scheme, formal proof system, higher-order logic, formal verification.

3863 Efficiency Evaluation of E-Commerce Websites

Authors: A. K. Abd El-Aleem, W. F. Abd El-wahed, N. A. Ismail, F. A. Torkey

Abstract:

This study suggests a model based on a new set of evaluation criteria for measuring the efficiency of real-world e-commerce websites. The evaluation criteria include website design, usability, and performance, and the Data Envelopment Analysis (DEA) technique is used to measure website efficiency. An efficient website is defined as a site that generates the most outputs using the smallest amount of inputs. Inputs refer to measurements representing the amount of effort required to build, maintain and operate the site. Output is the amount of traffic the site generates, measured as the average number of daily hits and the average number of daily unique visitors.
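A hedged sketch of the DEA idea described here (input-oriented CCR model in multiplier form, solved as a linear program with SciPy) is given below; the website data, the single "effort" input, and the two traffic outputs are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form).
    X: (n_dmu, n_inputs) inputs, Y: (n_dmu, n_outputs) outputs."""
    n_dmu, m = X.shape
    s = Y.shape[1]
    # Variables z = [u (output weights), v (input weights)], all >= 0.
    c = np.concatenate([-Y[o], np.zeros(m)])        # maximize u . y_o
    A_ub = np.hstack([Y, -X])                       # u . y_j - v . x_j <= 0
    b_ub = np.zeros(n_dmu)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)   # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return -res.fun

# Hypothetical data: input = [build/maintenance effort], outputs = [daily hits,
# daily unique visitors] for four websites (illustrative numbers only).
X = np.array([[10.0], [14.0], [8.0], [12.0]])
Y = np.array([[500.0, 120.0], [520.0, 150.0], [460.0, 110.0], [700.0, 200.0]])
for o in range(4):
    print(f"site {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```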

Keywords: Data Envelopment Analysis, E-commerce, Efficiency.

3862 Application of Extreme Learning Machine Method for Time Series Analysis

Authors: Rampal Singh, S. Balasundaram

Abstract:

In this paper, we study the application of the Extreme Learning Machine (ELM) algorithm for single-hidden-layer feedforward neural networks to non-linear chaotic time series problems. In this algorithm, the input weights and the hidden layer biases are randomly chosen. The ELM formulation leads to solving a system of linear equations in terms of the unknown weights connecting the hidden layer to the output layer, and the solution of this system is obtained using the Moore-Penrose generalized pseudoinverse. To study the application of the method, we consider the time series generated by the Mackey-Glass delay differential equation with different time delays, the Santa Fe A series, and a UCR heart beat rate ECG time series. For sigmoid, sine and hardlim activation functions, the optimal values of the memory order and the number of hidden neurons which give the best prediction performance in terms of root mean square error are determined. The results obtained are in close agreement with the exact solutions of the problems considered, which clearly shows that ELM is a very promising alternative method for time series prediction.
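The ELM recipe summarized here (random input weights and biases, analytic output weights via the pseudoinverse) is compact enough to sketch directly; the snippet below uses a synthetic stand-in series, a memory order of 8, and 50 hidden neurons, all of which are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series standing in for Mackey-Glass: embed past `memory` values to
# predict the next sample.
t = np.arange(1200)
series = np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t)
memory, hidden = 8, 50

X = np.array([series[i:i + memory] for i in range(len(series) - memory)])
y = series[memory:]
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

# ELM: random input weights and biases, analytic output weights.
W = rng.normal(size=(memory, hidden))
b = rng.normal(size=hidden)
H_train = 1.0 / (1.0 + np.exp(-(X_train @ W + b)))    # sigmoid hidden layer
beta = np.linalg.pinv(H_train) @ y_train              # Moore-Penrose solution

H_test = 1.0 / (1.0 + np.exp(-(X_test @ W + b)))
pred = H_test @ beta
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"test RMSE: {rmse:.4f}")
```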

Keywords: Chaotic time series, Extreme learning machine, Generalization performance.

3861 DHT-LMS Algorithm for Sensorineural Loss Patients

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Hearing impairment is the number one chronic disability, affecting many people around the world. Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations of speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes a Discrete Hartley Transform power-normalized Least Mean Square algorithm (DHT-LMS) to improve the SNR and to reduce the convergence time of the Least Mean Square (LMS) algorithm for sensorineural loss patients. The DHT transforms n real numbers to n real numbers and has the convenient property of being its own inverse; it can be used effectively for noise cancellation with a short convergence time. The simulation results show superior characteristics, improving the SNR by at least 9 dB for an input SNR of zero dB and giving a faster convergence rate (eigenvalue ratio 12) compared to the time-domain method and DFT-LMS.
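A hedged sketch of the general transform-domain, power-normalized LMS structure described here is shown below; the DHT is computed from the FFT (DHT = Re(FFT) − Im(FFT)), and the filter length, step size, and toy noise-cancellation setup are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT: DHT = Re(FFT) - Im(FFT)."""
    X = np.fft.fft(x)
    return X.real - X.imag

def dht_lms(d, x, taps=16, mu=0.5, beta=0.99, eps=1e-6):
    """Transform-domain, power-normalized LMS sketch: the tap vector is rotated
    by the DHT and each coefficient step is normalized by a running power
    estimate, which helps convergence for correlated (colored) inputs."""
    w = np.zeros(taps)
    p = np.full(taps, eps)
    y = np.zeros(len(x))
    buf = np.zeros(taps)
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        z = dht(buf)
        p = beta * p + (1 - beta) * z * z          # per-bin power estimate
        y[n] = w @ z
        e = d[n] - y[n]
        w += mu * e * z / (p + eps)                # normalized update
    return y, w

# Toy noise-cancellation setup (illustrative only): the reference noise x is
# filtered by an unknown path and buried in a speech-like tone in d.
rng = np.random.default_rng(1)
x = rng.normal(size=4000)
noise_path = np.array([0.6, -0.3, 0.15, 0.05])
d = np.convolve(x, noise_path)[:4000] + 0.1 * np.sin(0.05 * np.arange(4000))
y, w = dht_lms(d, x)
print("residual power:", np.mean((d - y)[-500:] ** 2))
```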

Keywords: Hearing Impairment, DHT-LMS, Convergence rate, SNR improvement.

3860 Forecasting 24-Hour Ahead Electricity Load Using Time Series Models

Authors: Ramin Vafadary, Maryam Khanbaghi

Abstract:

Forecasting electricity load is important for purposes such as planning, operation and control. Accurate forecasts can save operating and maintenance costs, increase the reliability of power supply and delivery systems, and support correct decisions for future development. This paper compares various time series methods for forecasting electricity load 24 hours ahead. The methods considered are Holt-Winters smoothing, SARIMA modeling, LSTM networks, Fbprophet and TensorFlow Probability. The performance of each method is evaluated using the forecasting accuracy criteria of Mean Absolute Error and Root Mean Square Error. The National Renewable Energy Laboratory (NREL) residential energy consumption data are used to train the models. The results of this study show that the SARIMA model is superior to the others for 24-hour-ahead forecasts. Furthermore, a bagging technique is used to make the predictions more robust, and the results show that bagging multiple time series forecasts improves the robustness of the models for 24-hour-ahead electricity load forecasting.
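As a hedged sketch of the 24-hour-ahead SARIMA workflow named here, the snippet below fits a statsmodels SARIMAX model to synthetic hourly load and scores the held-out day with MAE and RMSE; the model orders and the synthetic data are assumptions, since the paper's NREL preprocessing and tuned orders are not given in the abstract.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Illustrative synthetic hourly load with a 24-hour cycle (not the NREL data).
rng = np.random.default_rng(0)
hours = pd.date_range("2021-01-01", periods=24 * 60, freq="h")
load = 50 + 10 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 1.5, len(hours))
y = pd.Series(load, index=hours)

train, test = y[:-24], y[-24:]                    # hold out the last 24 hours
model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
result = model.fit(disp=False)
forecast = result.forecast(steps=24)

mae = np.mean(np.abs(forecast.values - test.values))
rmse = np.sqrt(np.mean((forecast.values - test.values) ** 2))
print(f"MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```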

Keywords: Bagging, Fbprophet, Holt-Winters, LSTM, Load Forecast, SARIMA, tensorflow probability, time series.

3859 A New Approach to Design an Efficient CIC Decimator Using Signed Digit Arithmetic

Authors: Vishal Awasthi, Krishna Raj

Abstract:

Digital processing performed on a signal sampled at a high rate requires more computation than the same processing performed at a lower rate. Sampling rate alteration, however, generates unwanted effects in the system, such as spectral aliasing and spectral imaging, during signal processing. A multirate, multistage implementation of a digital filter can yield significant computational savings compared with a single-rate filter designed for sample rate conversion. In this paper, we present an efficient cascaded integrator-comb (CIC) decimation filter that performs fast downsampling using a signed-digit adder algorithm and compensates for the frequency droop that arises during the decimation process. The proposed compensated CIC decimation filter structure with a hybrid signed-digit (HSD) fast adder improves downsampling speed by 65.15% compared with a ripple-carry adder (RCA), and reduces area and power by 57.5% and 0.01%, respectively, compared with signed-digit (SD) adder algorithms.
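For readers unfamiliar with the CIC structure itself, the hedged sketch below implements a basic N-stage CIC decimator (integrators at the high rate, decimation by R, combs at the low rate); the signed-digit adders and droop compensation that are the paper's contribution are not modeled, and the stage counts and input are illustrative.

```python
import numpy as np

def cic_decimate(x, R=8, N=3, M=1):
    """Basic N-stage CIC decimator sketch: N integrators at the input rate,
    decimation by R, then N combs with differential delay M. The DC gain
    (R*M)**N is divided out at the end."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):                 # integrator stages (running sums)
        y = np.cumsum(y)
    y = y[::R]                         # rate reduction by R
    for _ in range(N):                 # comb stages: y[n] - y[n - M]
        delayed = np.concatenate([np.zeros(M, dtype=y.dtype), y[:-M]])
        y = y - delayed
    return y / float(R * M) ** N

# A slow ramp plus small noise, decimated by 8.
x = np.arange(400) // 10 + np.random.default_rng(2).integers(-2, 3, 400)
print(cic_decimate(x)[:10])
```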

Keywords: Sampling rate conversion, Multirate Filtering, Compensation Theory, Decimation filter, CIC filter, Redundant signed digit arithmetic, Fast adders.

3858 Study of Flow Behavior of Aqueous Solution of Rhodamine B in Annular Reactor Using Computational Fluid Dynamics

Authors: Jatinder Kumar, Ajay Bansal

Abstract:

The present study deals with the modeling and simulation of flow through an annular reactor at different hydrodynamic conditions using computational fluid dynamics (CFD) to investigate the flow behavior. CFD modeling was utilized to predict velocity distribution and average velocity in the annular geometry. The results of CFD simulations were compared with the mathematically derived equations and already developed correlations for validation purposes. CFD modeling was found suitable for predicting the flow characteristics in annular geometry under laminar flow conditions. It was observed that CFD also provides local values of the parameters of interest in addition to the average values for the simulated geometry.
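The abstract does not state which mathematically derived equations were used for validation; a commonly used reference point for laminar flow in an annulus (inner radius R1, outer radius R2, pressure drop ΔP over length L, viscosity μ) is the classical analytical axial velocity profile

$$u_z(r) = \frac{\Delta P}{4\mu L}\left[R_2^2 - r^2 + \frac{R_2^2 - R_1^2}{\ln(R_2/R_1)}\,\ln\frac{r}{R_2}\right], \qquad R_1 \le r \le R_2,$$

with the average velocity obtained by integrating over the annular cross-section, $\bar{u} = \dfrac{2}{R_2^2 - R_1^2}\displaystyle\int_{R_1}^{R_2} u_z(r)\, r\, dr$.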

Keywords: Annular reactor, computational fluid dynamics (CFD), hydrodynamics, Rhodamine B

3857 Construct the Fur Input Mixed Model with Activity-Based Benefit Assessment Approach of Leather Industry

Authors: M. F. Wu, F. T. Cheng

Abstract:

The leather industry is the most important traditional industry, having provided leather products to the world for thousands of years. The fiercely competitive global environment and the common awareness of global carbon reduction have caused livestock supply quantities to fall, so salted and wet blue leather material is reduced and its price rises significantly. Exchange rate fluctuations reduce sales revenue due to differences in export exchange rates and compress the overall profitability of the leather industry. This paper applies an activity-based benefit assessment approach to build a suitable fur input mix model, where the fur is wet blue, concerned with four key factors: the output rate of wet blue, the unit cost of wet blue, the yield rate, and the grade level of wet blue, in order to achieve a low-cost strategy under a given unit price of the company's leather product. The research findings indicate that applying this model may improve the input cost structure, decrease leather product inventories, and raise the competitive advantage of the enterprise in the future.

Keywords: Activity-Based Benefit Assessment Approach, Input mixed, Output Rate, Wet Blue.

3856 A Finite Precision Block Floating Point Treatment to Direct Form, Cascaded and Parallel FIR Digital Filters

Authors: Abhijit Mitra

Abstract:

This paper proposes an efficient finite precision block floating point (BFP) treatment of the fixed-coefficient finite impulse response (FIR) digital filter. The treatment includes an effective implementation of all three forms of the conventional FIR filter, namely direct form, cascaded and parallel, and a roundoff error analysis of them in the BFP format. An effective block formatting algorithm together with an adaptive scaling factor is proposed to make the realizations simpler from a hardware viewpoint. To this end, a generic relation between the tap weight vector length and the input block length is deduced. The implementation scheme also emphasises a simple block exponent update technique to prevent overflow, even during the block-to-block transition phase. The roundoff noise is investigated along analogous lines, taking these implementation issues into consideration. The simulation results show that the BFP roundoff errors depend on the signal level in almost the same way as floating point roundoff noise, resulting in an approximately constant signal-to-noise ratio over a relatively large dynamic range.
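A hedged sketch of the block floating point idea in its simplest direct form is given below: each input block shares one exponent taken from its largest sample, the mantissas are quantized, and the filtered blocks are recombined by overlap-add; the block length, mantissa width, filter, and quantization rule are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def block_float_fir(x, h, block_len=64, mant_bits=12):
    """Direct-form FIR filtering with block-floating-point input: each block
    shares one exponent (from its largest sample) and the mantissas are
    quantized to `mant_bits` bits before convolution (overlap-add recombination)."""
    y = np.zeros(len(x) + len(h) - 1)
    scale = 2 ** (mant_bits - 1)
    for start in range(0, len(x), block_len):
        blk = x[start:start + block_len]
        peak = np.max(np.abs(blk)) or 1.0
        exponent = np.ceil(np.log2(peak))                       # shared block exponent
        mant = np.round(blk / 2 ** exponent * scale) / scale    # quantized mantissas
        y[start:start + len(blk) + len(h) - 1] += np.convolve(mant, h) * 2 ** exponent
    return y

rng = np.random.default_rng(3)
x = rng.normal(scale=0.3, size=1024)
h = np.ones(8) / 8.0                                  # simple averaging FIR
err = block_float_fir(x, h) - np.convolve(x, h)
print("BFP roundoff RMS error:", np.sqrt(np.mean(err ** 2)))
```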

Keywords: Finite impulse response digital filters, Cascade structure, Parallel structure, Block floating point arithmetic, Roundoff error.

3855 Software Maintenance Severity Prediction for Object Oriented Systems

Authors: Parvinder S. Sandhu, Roma Jaswal, Sandeep Khimta, Shailendra Singh

Abstract:

Since the majority of faults are found in only a few modules, there is a need to identify the modules that are affected more severely than others so that proper maintenance can be carried out in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions; they are used when the exact relationship between inputs and outputs is not known, and a key feature is that they learn this relationship through training. In the present work, various neural-network-based techniques are explored and a comparative analysis is performed for predicting the level of maintenance needed, by predicting the severity of faults present in NASA's public domain defect dataset. The comparison of the different algorithms is made on the basis of Mean Absolute Error, Root Mean Square Error and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
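The Generalized Regression Neural Network singled out here reduces to a kernel-weighted average of training targets; the hedged sketch below shows that form with hypothetical module metrics and severity labels (the NASA dataset and the paper's feature set are not reproduced), scored with MAE and RMSE as in the abstract.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Minimal GRNN sketch: each prediction is the Gaussian-kernel-weighted
    average of the training targets (Nadaraya-Watson form)."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        preds.append(np.sum(w * y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

# Hypothetical module metrics (e.g., size, complexity) mapped to a fault
# severity level; illustrative data only.
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(200, 2))
severity = (X[:, 0] * 3 + X[:, 1] * 2 + rng.normal(0, 0.1, 200)).round()
pred = grnn_predict(X[:150], severity[:150], X[150:])
mae = np.mean(np.abs(pred - severity[150:]))
rmse = np.sqrt(np.mean((pred - severity[150:]) ** 2))
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```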

Keywords: Neural Network, Software faults, Software Metric.

3854 Comparison between Torsional Ultrasonic Assisted Drilling and Conventional Drilling of Bone: An in vitro Study

Authors: Nikoo Soleimani

Abstract:

Background: Reducing torque during bone drilling is one of the effective factors in reaching an optimal drilling process. Methods: 15 bovine femurs were drilled in vitro with a 4 mm diameter drill bit using two methods, torsional ultrasonic assisted drilling (T-UAD) and conventional drilling (CD), and the effects of changing the feed rate and rotational speed on the torque were compared for both methods. Results: There was no significant difference in the thrust force measured in the two methods, due to the direction of the vibrations. The results showed that using the T-UAD method for bone drilling at feed rates of 0.16, 0.24 and 0.32 mm/rev led, for all rotational speeds, to a decrease of at least 16.3% in torque compared to the CD method. Further, using T-UAD at rotational speeds of 355-1000 rpm with various feed rates resulted in a torque reduction of 16.3-50.5% compared to the CD method. Conclusions: Reducing the feed rate and increasing the rotational speed, except for the rotational speed of 500 rpm with a feed rate of 0.32 mm/rev, generally resulted in torque reduction in both methods. However, T-UAD is a more effective and desirable option for bone drilling considering its significant torque reduction.

Keywords: Torsional ultrasonic assisted drilling, torque, bone drilling, rotational speed, feed rate.

3853 Interstate Comparison of Environmental Performance using Stochastic Frontier Analysis: The United States Case Study

Authors: Alexander Y. Vaninsky

Abstract:

The environmental performance of the U.S. states is investigated for the period 1990-2007 using Stochastic Frontier Analysis (SFA). SFA accounts for both the efficiency measure and the stochastic noise affecting the frontier. The frontier is formed using indicators of GDP, energy consumption, population, and CO2 emissions. For comparability, all indicators are expressed as ratios to the national total. Statistical information from the Energy Information Administration of the United States is used. The results reveal a bell-shaped dynamic of the environmental efficiency scores: the average efficiency score rises from 97.6% in 1990 to 99.6% in 1999, and then falls to 98.4% in 2007. The main factor is an insufficient decrease in the rate of growth of CO2 emissions relative to the growth of GDP, population and energy consumption. Data for 2008, following the research period, suggest that the environmental performance of the U.S. states has improved in recent years.

Keywords: Stochastic frontier analysis, environmental performance, interstate comparisons.

3852 Evaluation of Aquifer Protective Capacity and Soil Corrosivity Using Geoelectrical Method

Authors: M. T. Tsepav, Y. Adamu, M. A. Umar

Abstract:

A geoelectric survey was carried out in parts of Angwan Gwari, an outskirt of Lapai Local Government Area in Niger State, which belongs to the Nigerian Basement Complex, with the aim of evaluating the soil corrosivity, aquifer transmissivity and protective capacity of the area, from which an aquifer characterisation was made. The G41 Resistivity Meter was employed to obtain fifteen Schlumberger Vertical Electrical Sounding (VES) data sets along profiles in a square grid network. The data were processed using Interpex 1-D sounding inversion software, which gives vertical electrical sounding curves with a layered model comprising the apparent resistivities, overburden thicknesses, and depths. This information was used to evaluate the longitudinal conductance and transmissivities of the layers. The results show generally low resistivities across the survey area, with the average longitudinal conductance varying from 0.0237 Siemens at VES 6 to 0.1261 Siemens at VES 15, and almost the entire area giving values less than 1.0 Siemens. The average transmissivity values range from 96.45 Ω·m² at VES 4 to 299070 Ω·m² at VES 1. All but VES 4 and VES 14 had average values greater than 400 Ω·m². These results suggest that the aquifers are highly permeable to fluid movement, leading to the possibility of enhanced migration and circulation of contaminants in the groundwater system, and that the area is generally corrosive.
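The abstract does not state its formulas; the layer-wise quantities it reports are conventionally the Dar Zarrouk parameters computed from a VES layered model,

$$S = \sum_{i=1}^{n}\frac{h_i}{\rho_i}\ \text{(Siemens)}, \qquad T = \sum_{i=1}^{n} h_i\,\rho_i\ (\Omega\cdot\mathrm{m}^2),$$

where $h_i$ and $\rho_i$ are the thickness and resistivity of the i-th layer; the quantity quoted in Ω·m² corresponds to the transverse resistance, which is often used as a proxy for aquifer transmissivity in such studies.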

Keywords: Geoelectric survey, corrosivity, protective capacity, transmissivity.

3851 The Net as a Living Experience of Distance Motherhood within Italian Culture

Authors: C. Papapicco

Abstract:

Motherhood is an existential human relationship that lasts a whole lifetime and is always interwoven with subjectivity and culture. As a result of the brain drain, motherhood becomes motherhood at a distance. Starting from the hypothesis that the re-signification of mothering-at-a-distance practices is culturally relevant, the research aims to understand the experience of mothers at a distance in order to identify strategies for managing the empty nest. Specifically, the research evaluates the experience of the mother of a brain-drain emigrant who created a blog intended to support other parents at a distance. The blog is currently the only artifact symbolic of the Italian culture of motherhood at a distance. In the research, a netnographic analysis of the blog mammedicervelliinfuga.com is offered, with the aim of understanding whether the online world becomes an opportunity to manage the role of mother at a distance. A narrative interview with the blog creator was conducted, and the texts were then analyzed by means of a diatextual analysis approach. It emerged that the migration projects of talented children take on different meanings and representations for parents, and that the blog becomes a new form of understanding and practicing motherhood at a distance.

Keywords: Brain drain, diatextual analysis, distance motherhood blog, online and offline narrations.

3850 Porous Particles Drying in a Vertical Upward Pneumatic Conveying Dryer

Authors: Samy M. El-Behery, W. A. El-Askary, K. A. Ibrahim, Mofreh H. Hamed

Abstract:

A steady two-phase flow model has been developed to simulate the drying process of porous particles in a pneumatic conveying dryer. The model takes into account the momentum, heat and mass transfer between the continuous phase and the dispersed phase. A single-particle model was employed to calculate the evaporation rate; in this model the pore structure is simplified to allow the dominant evaporation mechanism to be readily identified at all points within the duct. The predominant mechanism at any time depends upon the pressure, the temperature and the diameter of the pore from which evaporation is occurring. The model was validated against experimental studies of pneumatic transport at low and high speeds as well as pneumatic drying. The effects of operating conditions on the dryer parameters are studied numerically. The present results show that the drying rate is enhanced as the inlet gas temperature and the gas flow rate increase and as the solid mass flow rate decreases. The results also demonstrate the necessity of measuring the inlet gas velocity or the solid concentration in any experimental analysis.

Keywords: Two-phase, gas-solid, pneumatic drying, pneumatic conveying, heat and mass transfer

3849 On Bayesian Analysis of Failure Rate under Topp Leone Distribution using Complete and Censored Samples

Authors: N. Feroze, M. Aslam

Abstract:

This article is concerned with the analysis of the failure rate (shape parameter) under the Topp-Leone distribution using a Bayesian framework. Different loss functions and a couple of noninformative priors have been assumed for posterior estimation. The posterior predictive distributions have also been derived. A simulation study has been carried out to compare the performance of the different estimators, and a real-life example has been used to illustrate the applicability of the results obtained. The findings of the study suggest that the precautionary loss function based on the Jeffreys prior and singly type II censored samples can effectively be employed to obtain the Bayes estimate of the failure rate under the Topp-Leone distribution.
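As a hedged illustration of the recommended combination for the complete-sample case only (the paper also treats type II censoring, which is not reproduced here): with the Topp-Leone CDF F(x; v) = (2x − x²)^v on (0, 1) and the Jeffreys prior π(v) ∝ 1/v, the posterior of the shape parameter is Gamma(n, T) with T = −Σ log(2xᵢ − xᵢ²), and under the precautionary loss the Bayes estimate is √E[v²]. The sketch below checks this on simulated data; the sample size and true parameter are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
true_v = 2.0
u = rng.uniform(size=200)
x = 1.0 - np.sqrt(1.0 - u ** (1.0 / true_v))     # inverse-CDF sampling

n = len(x)
T = -np.sum(np.log(2 * x - x ** 2))
posterior_mean = n / T                            # Bayes estimate under squared error
precautionary = np.sqrt(n * (n + 1)) / T          # sqrt(E[v^2]) for a Gamma(n, T) posterior
print(posterior_mean, precautionary)
```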

Keywords: loss functions, type II censoring, posterior distribution, Bayes estimators.

3848 Mathematical Analysis of EEG of Patients with Non-fatal Nonspecific Diffuse Encephalitis

Authors: Mukesh Doble, Sunil K Narayan

Abstract:

Diffuse viral encephalitis may lack fever and other cardinal signs of infection, and hence its distinction from other acute encephalopathic illnesses is challenging. Often, the EEG changes seen routinely are nonspecific and reflect diffuse encephalopathic changes only. The aim of this study was to use nonlinear dynamic mathematical techniques to analyze the EEG data in order to look for characteristic diagnostic patterns in diffuse forms of encephalitis. The condition was diagnosed on clinical, imaging and cerebrospinal fluid criteria in three young male patients; metabolic and toxic encephalopathies were ruled out through appropriate investigations. Digital EEGs were recorded on the 3rd to 5th day of onset. The digital EEGs of 5 male and 5 female age- and sex-matched healthy volunteers served as controls. A two-sample t-test indicated no statistically significant difference in average amplitude between the two groups. However, the standard deviation (or variance) of the EEG signals at FP1-F7 and FP2-F8 is significantly higher for the patients than for the normal subjects. The regularisation dimension is significantly lower for the patients (average 1.24-1.43) than for the normal subjects (average 1.41-1.63) for the EEG signals from all locations except the Fz-Cz signal. Similarly, the wavelet dimension is significantly lower (P = 0.05) for the patients (1.122) than for the normal subjects (1.458). The patients' EEGs are subdued, showing more uniform patterns as manifested in the regularisation and wavelet dimension values, indicating a decrease in chaotic nature compared to the normal subjects.
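The regularisation and wavelet dimensions used in the study are specific estimators that are not reproduced here; as a rough illustration of how a fractal dimension separates a subdued, regular trace from a more irregular one, the sketch below uses the widely known Higuchi method on synthetic signals (a different estimator, shown only for intuition).

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D signal (illustrative alternative
    to the paper's regularisation/wavelet dimensions)."""
    N = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            dist = np.sum(np.abs(np.diff(x[idx])))
            norm = (N - 1) / ((len(idx) - 1) * k)   # Higuchi normalization
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
    return slope

rng = np.random.default_rng(6)
smooth = np.sin(0.05 * np.arange(2000))                 # regular, "subdued" trace
noisy = smooth + 0.5 * rng.normal(size=2000)            # more irregular trace
print(higuchi_fd(smooth), higuchi_fd(noisy))            # the noisier trace scores higher
```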

Keywords: Chaos, Diffuse encephalitis, Electroencephalogram, Fractal dimension, Fourier spectrum.

3847 ROC Analysis of PVC Detection Algorithm using ECG and Vector-ECG Characteristics

Authors: J. S. Nah, A. Y. Jeon, J. H. Ro, G. R. Jeon

Abstract:

An ECG analysis method was developed using ROC analysis of a PVC detection algorithm. ECG signals from the MIT-BIH arrhythmia database were analyzed in MATLAB. First, the baseline was removed by a median filter to preprocess the ECG signal. R peaks were detected for the ECG analysis method, and the normal VCG was extracted for the VCG analysis method. Four PVC detection parameters were analyzed by ROC curves: the maximum amplitude of the QRS complex, the width of the QRS complex, the R-R interval and the geometric mean of the VCG. To set the cut-off value of each parameter, the ROC curve was estimated from the true-positive rate (sensitivity) and the false-positive rate; the sensitivity and specificity were calculated, and the ECG was analyzed using the cut-off values estimated from the ROC curves. As a result, the PVC detection algorithm based on the VCG geometric mean showed high utility, and PVCs could be detected more accurately when the amplitude and width of the QRS complex were also used.
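The original work is in MATLAB on the MIT-BIH data; as a hedged sketch of the ROC-based cut-off selection described here, the Python snippet below scores one hypothetical parameter (QRS width) on synthetic labels and picks the threshold that maximizes Youden's J; the data, parameter values, and the Youden criterion are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Synthetic scores for one PVC detection parameter (QRS width in seconds).
rng = np.random.default_rng(7)
labels = np.concatenate([np.zeros(300), np.ones(100)])          # 1 = PVC beat
scores = np.concatenate([rng.normal(0.08, 0.02, 300),           # normal beats
                         rng.normal(0.14, 0.03, 100)])          # wider PVC QRS

fpr, tpr, thresholds = roc_curve(labels, scores)
youden = tpr - fpr                              # Youden's J statistic
best = np.argmax(youden)
print(f"AUC = {auc(fpr, tpr):.3f}, cut-off = {thresholds[best]:.3f} s, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```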

Keywords: Vectorcardiogram (VCG), Premature Ventricular contraction (PVC), ROC (receiver operating characteristic) curve, ECG
