Search results for: System performance
1437 The Key Challenges of the New Bank Regulations
Authors: Petr Teply
Abstract:
The New Basel Capital Accord (Basel II) influences how financial institutions around the world, and especially European Union institutions, determine the amount of capital to reserve. However, as the recent global crisis has shown, a revision of Basel II is needed to reflect current trends in the world financial markets, such as increased volatility and correlation. The overall objective of Basel II is to increase the safety and soundness of the international financial system. Basel II builds on three main pillars: Pillar I deals with the minimum capital requirements for credit, market and operational risk, Pillar II focuses on the supervisory review process, and Pillar III promotes market discipline through enhanced disclosure requirements for banks. The aim of this paper is to provide the historical background, key features and impact of Basel II on financial markets. Moreover, we discuss new proposals for international bank regulation (sometimes referred to as Basel III), which include requirements for higher quality, consistency and transparency of banks' capital and risk management, regulation of OTC markets and the introduction of new liquidity standards for internationally active banks.
Keywords: Basel II, Basel III, risk management, bank regulation
1436 Evaluation of Model Evaluation Criterion for Software Development Effort Estimation
Authors: S. K. Pillai, M. K. Jeyakumar
Abstract:
Estimating model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria, and most algorithms use historical data for this purpose: the known target (actual) values are compared with the output produced by the model, and the differences between the two form the basis for estimating the parameters. To compare different models developed from the same data, different criteria are used. Data obtained from small-scale projects are used here. We consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. We then propose a new criterion based on linear least squares for evaluation and compare the results for one and two predictors. We have also considered another data set and evaluated prediction accuracy using the new criterion. The new criterion is easier to comprehend than a single statistic. Although software effort estimation is considered here, the method is applicable to any modeling and prediction task.
Keywords: Software effort estimation, accuracy, Radial Basis Function, linear least squares.
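As an illustration of the kind of model discussed above (not the authors' code), the sketch below fits a small radial basis function network to effort data with one predictor and scores it with a least-squares-style figure; the data, the choice of centres, the kernel width and the scoring figure are hypothetical placeholders.

```python
import numpy as np

# Minimal RBF-network sketch for effort estimation (illustrative only; the
# paper's data, centre selection and evaluation criterion are not reproduced).
def rbf_design(X, centers, width):
    d2 = (X[:, None, :] - centers[None, :, :]) ** 2
    return np.exp(-d2.sum(axis=2) / (2.0 * width ** 2))

# Hypothetical data: project size (KLOC) -> effort (person-months).
X = np.array([[2.0], [5.0], [10.0], [20.0], [40.0]])
y = np.array([4.0, 9.0, 20.0, 45.0, 95.0])

centers = X[[0, 2, 4]]                          # crude centre choice: a data subset
Phi = rbf_design(X, centers, width=8.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output weights by linear least squares

pred = Phi @ w
sse = float(np.sum((y - pred) ** 2))            # a least-squares-based evaluation figure
print("predictions:", np.round(pred, 1), " SSE:", round(sse, 2))
```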
1435 Behavior of Generated Gas in Lost Foam Casting
Authors: M. Khodai, S. M. H. Mirbagheri
Abstract:
In the lost foam casting process, the melting point of the metal, as well as the volume and rate of foam degradation, have a significant effect on the mold filling pattern. Therefore, gas generation capacity and gas gap length are two important parameters for modeling the mold filling time of lost foam casting processes. In this paper, the gas gap length at the liquid-foam interface for a low melting point (aluminum) alloy and a high melting point (carbon steel) alloy is investigated by a photography technique. The results indicated that the gas gap length and the mold filling time increase with increasing coating thickness and foam density. The gas gap lengths measured in aluminum and carbon steel depend on the foam density and were approximately 4-5 mm and 25-60 mm, respectively. Using a new system, the gas generation capacity for aluminum and steel was measured. The measurements indicated that the gas generation in aluminum and carbon-steel lost foam casting was about 50 cc/g and 3200 cc/g of polystyrene, respectively.
Keywords: gas gap, lost foam casting, photography technique.
1434 An Improved Quality Adaptive Rate Filtering Technique Based on the Level Crossing Sampling
Authors: Saeed Mian Qaisar, Laurent Fesquet, Marc Renaudin
Abstract:
Most systems deal with time-varying signals. Power efficiency can be achieved by adapting the system activity to the input signal variations. In this context, an adaptive rate filtering technique based on level crossing sampling is devised. It adapts the sampling frequency and the filter order by following the local variations of the input signal, and thus correlates the processing activity with the signal variations. Interpolation is required in the proposed technique, and a drastic reduction in the interpolation error is achieved by exploiting symmetry during the interpolation process. The processing error of the proposed technique is calculated. The computational complexity of the proposed filtering technique is deduced and compared to that of the classical approach. The results promise a significant gain in computational efficiency and hence in power consumption.
Keywords: Level Crossing Sampling, Activity Selection, Rate Filtering, Computational Complexity, Interpolation Error.
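To make the idea concrete, here is a minimal sketch (not the authors' implementation) of level crossing sampling: a sample is retained only when the signal crosses one of a set of uniformly spaced amplitude levels, so the local sampling rate follows the signal activity. The test signal and level spacing are arbitrary examples.

```python
import numpy as np

def level_crossing_sample(t, x, q):
    """Keep (t, x) points where the signal crosses an amplitude level of spacing q."""
    kept_t, kept_x = [t[0]], [x[0]]
    last_level = np.round(x[0] / q)
    for ti, xi in zip(t[1:], x[1:]):
        level = np.round(xi / q)
        if level != last_level:          # a quantization level was crossed
            kept_t.append(ti)
            kept_x.append(xi)
            last_level = level
    return np.array(kept_t), np.array(kept_x)

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)     # decaying tone: activity drops over time
ts, xs = level_crossing_sample(t, x, q=0.05)
print(f"{len(ts)} samples kept out of {len(t)}")   # fewer samples where the signal is quiet
```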
1433 Awareness Level of Green Computing among Computer Users in Kebbi State, Nigeria
Authors: A. Mubarak, A. I. Augie
Abstract:
This study investigated the level of awareness of green computing possessed by computer users in Kebbi state. A survey method was employed to carry out the study, involving computer users from ICT business/training centers around the Argungu and Birnin Kebbi areas of Kebbi state. Purposive sampling was used to draw 156 respondents who volunteered to answer the questionnaire administered to gather the study data. Out of the 156 questionnaires distributed, 121 were used for data analysis; 79 respondents were from Argungu and 42 from Birnin Kebbi. The two research questions of the study were answered with descriptive statistics (percentages) and inferential statistics (ANOVA). The findings showed that most computer users do not possess adequate awareness of the conscious use of computing systems. The study also showed that there is no significant difference in green computing awareness between computer users in Argungu and Birnin Kebbi. Based on these findings, the study suggested, among other measures, an aggressive campaign on green computing practice among computer users in Kebbi state.
Keywords: Green computing, awareness, information technology, Energy Star.
1432 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems
Authors: Vijaya K. Srivastava, Davide Spinello
Abstract:
This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and on a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, the first phase finds a local minimum of the function using the gradient approach, coupled with hemstitching moves when a constraint is violated in order to return the search to the feasible region. In the second phase, the first procedure examines the 3^n integer combinations on the boundary and within the hypercube volume encompassing the result of the first phase, while the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum recommences in both procedures using this newly found point as the starting vector. The search continues and is repeated for various step sizes along the function gradient, as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results of both proposed optimization methods are presented and compared with those obtained using the MS Excel Solver provided within the MS Office suite and with other published results.
Keywords: Constrained integer problems, enumerative search algorithm, Heuristic algorithm, tunneling algorithm.
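The second-phase enumeration can be pictured as follows (an illustrative sketch under assumptions, not the authors' code): around the continuous local minimum, each of the n variables is allowed to take its floor, rounded and ceiling values, and the best feasible combination among the resulting 3^n candidates is kept. The objective, constraint and local minimum below are hypothetical examples.

```python
import itertools
import math

# Sketch of the 3^n neighbourhood enumeration around a continuous local minimum.
def objective(x):
    return (x[0] - 2.3) ** 2 + (x[1] - 3.7) ** 2

def feasible(x):
    return x[0] + x[1] <= 7                  # example linear constraint

x_local = (2.3, 3.7)                         # continuous local minimum from phase one
candidates = itertools.product(*[
    {math.floor(v), round(v), math.ceil(v)} for v in x_local
])
best = min((c for c in candidates if feasible(c)), key=objective)
print("best integer point:", best, "objective:", objective(best))
```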
1431 Probabilistic Electrical Power Generation Modeling Using Decimal to Binary Conversion
Authors: Ahmed S. Al-Abdulwahab
Abstract:
Generation system reliability assessment is an important task which can be performed using deterministic or probabilistic techniques. The probabilistic approaches have significant advantages over the deterministic methods, but they require more complicated modeling. A power generation model is a basic requirement for this assessment, and one form of such a model is the well-known capacity outage probability table (COPT). Different analytical techniques have been used to construct the COPT. These approaches require considerable mathematical modeling of the generating units, and the units' models are then combined to build the COPT, which adds further burden to the process of creating it. The Decimal to Binary Conversion (DBC) technique is widely and commonly applied in electronic systems and computing. This paper proposes a novel utilization of the DBC to create the COPT without engaging in analytical modeling or time-consuming simulations. The simple binary representation, "0" and "1", is used to model the states of the generating units. The proposed technique is proven to be an effective approach to building the generation model.
Keywords: Decimal to Binary, generation, reliability.
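A small sketch of the idea (illustrative only; the unit data are hypothetical and the paper's exact procedure is not reproduced): treat each generating unit as one bit of a counter, so converting every decimal counter value to binary enumerates all unit on/off states, and accumulating the probability of each outage capacity yields the COPT.

```python
from collections import defaultdict

# Sketch of building a capacity outage probability table (COPT) by decimal-to-
# binary conversion: bit i of the counter encodes the state of unit i
# (1 = unit on forced outage).  The unit list is a hypothetical example.
units = [  # (capacity in MW, forced outage rate)
    (50, 0.02),
    (50, 0.02),
    (100, 0.04),
]

copt = defaultdict(float)
for state in range(2 ** len(units)):           # every decimal state number ...
    bits = format(state, f"0{len(units)}b")    # ... converted to binary
    out_cap, prob = 0, 1.0
    for bit, (cap, outage_rate) in zip(bits, units):
        if bit == "1":
            out_cap += cap
            prob *= outage_rate                # unit on forced outage
        else:
            prob *= 1.0 - outage_rate          # unit available
    copt[out_cap] += prob

for cap_out in sorted(copt):
    print(f"capacity out {cap_out:4d} MW  probability {copt[cap_out]:.6f}")
```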
1430 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical applications. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets, and the best configuration yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.
Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record
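For illustration only (the real LSV construction and the SDG mechanism are not reproduced here), the sketch below shows the feature-level step the abstract describes: each token's word2vec-style embedding is concatenated with a lexical semantic vector before being fed to the CNN. The vocabulary and both lookup tables are random stand-ins.

```python
import numpy as np

# Illustrative concatenation of word embeddings with lexical semantic vectors (LSV).
vocab = ["fever", "cough", "rash"]
emb_dim, lsv_dim = 8, 4
word2vec = {w: np.random.randn(emb_dim) for w in vocab}   # stand-in trained embeddings
lsv = {w: np.random.randn(lsv_dim) for w in vocab}        # stand-in semantic vectors

def encode(tokens):
    """Build the CNN input matrix: one row per token, embedding concatenated with LSV."""
    return np.stack([np.concatenate([word2vec[t], lsv[t]]) for t in tokens])

x = encode(["fever", "cough"])
print(x.shape)   # (2, 12): sequence length x enriched feature dimension
```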
1429 Overview Studies of High Strength Self-Consolidating Concrete
Authors: Raya Harkouss, Bilal Hamad
Abstract:
Self-Consolidating Concrete (SCC) is considered a relatively new technology, created as an effective solution to problems associated with low-quality consolidation. An SCC mix is defined as successful if it flows freely and cohesively without the intervention of mechanical compaction. The construction industry shows a strong tendency to use SCC in many contemporary projects to benefit from the various advantages offered by this technology.
At this point, a main question is raised regarding the effect of the enhanced fluidity of SCC on the structural behavior of high-strength self-consolidating reinforced concrete.
A three-phase research program was conducted at the American University of Beirut (AUB) to address this concern. The first two phases consisted of comparative studies conducted on concrete and mortar mixes prepared with a second-generation sulphonated naphthalene-based superplasticizer (SNF) or a third-generation polycarboxylate ether-based superplasticizer (PCE). The third phase of the research program investigates and compares the structural performance of high-strength reinforced concrete beam specimens prepared with the two different generations of superplasticizers, which formed the only variable between the concrete mixes. The beams were designed to exhibit flexure, shear, or bond splitting failure.
The outcomes of the experimental work revealed comparable resistance of beam specimens cast using self-compacting concrete and conventional vibrated concrete. The dissimilarities in the experimental values between the SCC and the control VC beams were minimal, leading to the conclusion that the high consistency of SCC has little effect on the flexural, shear and bond strengths of concrete members.
Keywords: Self-consolidating concrete (SCC), high-strength concrete, concrete admixtures, mechanical properties of hardened SCC, structural behavior of reinforced concrete beams.
1428 Reliability Modeling and Data Analysis of Vacuum Circuit Breaker Subject to Random Shocks
Authors: Rafik Medjoudj, Rabah Medjoudj, D. Aissani
Abstract:
Electrical substation components are often subject to degradation due to over-voltage or over-current caused by a short circuit or lightning. Particular interest is given to the circuit breaker, owing to the importance of its function and the severity of its failure. This component degrades gradually with use and is also subject to a shock process resulting from the stress of isolating the fault when a short circuit occurs in the system. In this paper, based on failure mechanism developments, the wear-out of the circuit breaker contacts is modeled. The aim of this work is to evaluate its reliability and, consequently, its residual lifetime. The shock process is characterized by two random variables: the arrival of shocks and their magnitudes. The arrival of shocks was modeled using a homogeneous Poisson process (HPP). By simulation, the dates of short-circuit arrivals were generated together with their magnitudes. The same simulation principle is applied to the cumulative wear-out of the contacts. The objective is to formulate the wear function as a function of the number of solicitations of the circuit breaker.
Keywords: reliability, short-circuit, models of shocks.
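A compact sketch of the simulation principle described above (shock arrivals as a homogeneous Poisson process, random magnitudes, cumulated wear); the rate, magnitude distribution, wear law and horizon are hypothetical values, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous Poisson process for short-circuit arrivals: exponential
# inter-arrival times with rate lam (events per year), over a 40-year horizon.
lam, horizon = 0.8, 40.0
arrivals = []
t = rng.exponential(1.0 / lam)
while t < horizon:
    arrivals.append(t)
    t += rng.exponential(1.0 / lam)

# Random shock magnitudes and cumulative contact wear (illustrative wear law).
magnitudes = rng.lognormal(mean=0.0, sigma=0.5, size=len(arrivals))
wear = np.cumsum(0.01 * magnitudes)      # wear increment proportional to shock magnitude

print(f"{len(arrivals)} shocks over {horizon:.0f} years")
print("cumulative wear after each shock:", np.round(wear, 3))
```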
1427 Lagrange and Multilevel Wavelet-Galerkin with Polynomial Time Basis for Heat Equation
Authors: Watcharakorn Thongchuay, Puntip Toghaw, Montri Maleewong
Abstract:
The Wavelet-Galerkin finite element method for solving the one-dimensional heat equation is presented in this work. Two types of basis functions, the Lagrange and multilevel wavelet bases, are employed to derive the full matrix system, and both linear and quadratic bases are considered in the Galerkin method. The time derivative is approximated by a polynomial time basis, which makes it easy to extend the order of approximation in time. Our numerical results show that the rates of convergence for the linear Lagrange and the linear wavelet bases are the same, of order 2, while the rates of convergence for the quadratic Lagrange and the quadratic wavelet bases are approximately of order 4. The results also reveal that the wavelet basis provides an easy way to improve numerical resolution, simply by increasing the desired number of levels in the multilevel construction process.
Keywords: Galerkin finite element method, heat equation, Lagrange basis function, wavelet basis function.
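As a point of reference for the Galerkin formulation (a minimal sketch only: linear Lagrange elements and backward-Euler time stepping, not the wavelet bases or polynomial time bases studied in the paper), a 1D heat equation solve can be assembled as follows.

```python
import numpy as np

# Sketch of a Galerkin FEM solve of u_t = u_xx on (0,1), u(0)=u(1)=0,
# with linear Lagrange ("hat") elements and backward-Euler time stepping.
n_elem, dt, n_steps = 32, 1e-3, 100
n_nodes = n_elem + 1
h = 1.0 / n_elem
x = np.linspace(0.0, 1.0, n_nodes)

Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])    # element mass matrix
Ke = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness matrix
M = np.zeros((n_nodes, n_nodes))
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):                              # assemble global matrices
    idx = [e, e + 1]
    M[np.ix_(idx, idx)] += Me
    K[np.ix_(idx, idx)] += Ke

u = np.sin(np.pi * x)                                # initial condition
A = M + dt * K                                       # backward Euler system matrix
for i in (0, n_nodes - 1):                           # Dirichlet boundary rows
    A[i, :] = 0.0
    A[i, i] = 1.0

for _ in range(n_steps):
    b = M @ u
    b[0] = b[-1] = 0.0
    u = np.linalg.solve(A, b)

exact = np.exp(-np.pi**2 * dt * n_steps) * np.sin(np.pi * x)
print("max nodal error:", float(np.max(np.abs(u - exact))))
```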
1426 A Previously Underappreciated Impact on Global Warming Caused by the Geometrical and Physical Properties of Desert Sand
Authors: Y. F. Yang, B. T. Wang, J. J. Fan, J. Yin
Abstract:
Previous research focused on the influence of anthropogenic greenhouse gases on global warming but did not consider whether desert sand may warm the planet; this can be addressed by accounting for sand's physical and geometric properties. Here we show that, because of their geometry, sand particles at the desert surface form an extended surface of up to 1 + π/4 times the planar area of the desert that can contact sunlight, and at shallow depths of the desert form another extended surface of at least 1 + π times the planar area that can contact air. Based on this feature, an enhanced heat exchange system between sunlight, desert sand, and the air in the spaces between sand particles builds up automatically, which increases the capture of solar energy and leads to rapid heating of the sand particles; the heated particles, in turn, dramatically heat the air between them. The thermodynamics of deserts may thus have contributed to global warming, and will be especially significant to future global warming if the current desertification continues to expand.
Keywords: global warming, desert sand, extended surface, heat exchange, thermodynamics
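One plausible way to arrive at the quoted factors (an illustrative reconstruction under a square-packing assumption, not taken from the paper): a surface grain of diameter d, modelled as a sphere resting on a planar cell of area d^2, exposes the cell floor around it plus its upper hemisphere to sunlight, while a loosely packed grain at shallow depth exposes its whole spherical surface to the air.

```latex
% Illustrative reconstruction (not taken from the paper): a spherical grain of
% diameter d occupying a square planar cell of area d^2.
% Sunlit surface of a surface grain = exposed cell floor + upper hemisphere:
\[
A_{\mathrm{sunlit}} = \left(d^{2}-\frac{\pi d^{2}}{4}\right) + \frac{\pi d^{2}}{2}
                    = \left(1+\frac{\pi}{4}\right)d^{2}.
\]
% A loosely packed grain at shallow depth exposes its whole spherical surface
% to the air in the pore space, giving at least
\[
A_{\mathrm{air}} \ge d^{2} + \pi d^{2} = (1+\pi)\,d^{2}
\]
% per planar cell, consistent with the factors quoted in the abstract.
```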
1425 Comparing Sumerograms in Akkadian and Arameograms in Middle Persian
Authors: Behzad Moeini Sam, Sara Mohammadi Avandi
Abstract:
Ancient scribes usually wrote an Akkadian word in Akkadian, spelling it out syllable by syllable. Sometimes, however, they wrote down the equivalent word in Sumerian, for the Akkadians held Sumerian culture, from which they had inherited the cuneiform script, in high esteem. 'Syllabic' and 'Sumerographic' are thus the two forms of cuneiform writing. The Assyrian language was a branch of Akkadian, and the script and language of Aramaic came to be used throughout the whole of the empire, which caused Aramaic to serve as an interlingua in the following periods. This paper aims to compare Sumerograms in Akkadian texts and Arameograms in Middle Persian texts in order to find a continuous writing convention that persisted from Akkadian to Middle Persian. It first introduces the Sumerograms of the earliest Akkadian texts, and finally explains the Aramaic language, whose use was continued by the Parthians and Sasanians in the form of Arameograms. The main conclusion to be drawn is that, just as the Akkadians applied Sumerograms, Parthian and Pahlavi (including the inscriptions and the Psalter) likewise employed a large number of more or less faithfully rendered Aramaic words, also called Arameograms.
Keywords: Sumerogram, Mesopotamian, Akkadian, Aramaic, Middle Persian.
1424 Durability of LDPE Geomembrane within Sealing System of MSW (Landfill)
Authors: L. Menaa, A. Cherifi, K. Tigouirat, M. Choura
Abstract:
The durability of locally manufactured Low Density Polyethylene (LDPE), used within lining systems at the bottom of municipal solid waste landfills, is analysed in the present work. To this end, short- and medium-term creep tests under tension were carried out on the analysed material. The locally manufactured material was tested and compared to its European counterpart (LDPE-CE). Both materials were tested in three mediums, one ambient and two aggressive (salty water and foam water), using three specimens in each case. The testing campaign was carried out on a specially designed and built testing bench. Moreover, characterisation tests were carried out to evaluate the effect of the medium on the mechanical properties of the tested material (LDPE). Furthermore, the experimental results were used to establish a regression law that can be used to predict the creep behaviour of the analysed material. As a result, the analysed LDPE material showed good stability in the different ambient and aggressive mediums; in addition, the locally manufactured LDPE appears more flexible than the European one, which makes it more suitable for the intended application.
Keywords: LDPE membrane, solid waste, aggressive mediums, durability
1423 Hydrodynamic Modeling of Infinite Reservoir using Finite Element Method
Authors: M. A. Ghorbani, M. Pasbani Khiavi
Abstract:
In this paper, the dam-reservoir interaction is analyzed using a finite element approach. The fluid is assumed to be incompressible, irrotational and inviscid. The assumed boundary conditions are that the interface of the dam and reservoir is vertical and that the bottom of the reservoir is rigid and horizontal. The governing equation for these boundary conditions is implemented in the developed finite element code considering the horizontal and vertical earthquake components. The weighted residual standard Galerkin finite element technique with 8-node elements is used to discretize the equation, which produces a symmetric matrix equation for the dam-reservoir system. A new boundary condition is proposed for the truncating surface of the unbounded fluid domain to represent the energy dissipation in the reservoir through radiation in the infinite upstream direction. The Sommerfeld and perfect damping boundary conditions are also implemented at a truncated boundary for comparison with the proposed far-end boundary. The results are compared with an analytical solution to demonstrate the accuracy of the proposed formulation and of the other truncated boundary conditions in modeling the hydrodynamic response of an infinite reservoir.
Keywords: Reservoir, finite element, truncated boundary, hydrodynamic pressure
1422 Measuring Banks’ Antifragility via Fuzzy Logic
Authors: Danielle Sandler dos Passos, Helder Coelho, Flávia Mori Sarti
Abstract:
Analysing the world banking sector, we realize that traditional risk measurement methodologies no longer reflect the actual scenario of uncertainty and leave out events that can change the dynamics of markets. Considering this, regulators and financial institutions have begun to search for more realistic models. The aim is to include external influences and interdependencies between agents, in order to describe and measure the operationalization of these complex systems and their risks in a more coherent and credible way. Within this context, X-events are more frequent than assumed and, with uncertainties and constant changes, the concept of antifragility gains great prominence in comparison to other risk management methodologies. It is very useful to analyse whether a system succumbs to (fragile), resists (robust) or benefits from (antifragile) disorder and stress. Thus, this work proposes the creation of the Banking Antifragility Index (BAI), which is based on the calculation of a triangular fuzzy number to "quantify" qualitative criteria linked to antifragility.
Keywords: Complex adaptive systems, X-events, risk management, antifragility, banking antifragility index, triangular fuzzy number.
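To illustrate the building block of the proposed index (a sketch only; the BAI's actual criteria and aggregation are not reproduced), a triangular fuzzy number (a, b, c) can be represented by its membership function and summarized by a simple defuzzified value. The example score below is hypothetical.

```python
def triangular_membership(x, a, b, c):
    """Membership degree of x in the triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def centroid(a, b, c):
    """Centroid defuzzification of a triangular fuzzy number."""
    return (a + b + c) / 3.0

# Hypothetical antifragility score for one qualitative criterion,
# expressed as "around 0.6, between 0.4 and 0.9".
a, b, c = 0.4, 0.6, 0.9
print(triangular_membership(0.5, a, b, c))   # membership of 0.5 in the fuzzy score
print(centroid(a, b, c))                     # crisp summary of the fuzzy score
```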
1421 Exploiting Global Self Similarity for Head-Shoulder Detection
Authors: Lae-Jeong Park, Jung-Ho Moon
Abstract:
People detection from images has a variety of applications, such as video surveillance and driver assistance systems, but it is still a challenging task, and more difficult in crowded environments such as shopping malls, in which occlusion of the lower parts of the human body often occurs. The lack of full-body information requires features more effective than common ones such as HOG. In this paper, new features are introduced that exploit the global self-symmetry (GSS) characteristic of head-shoulder patterns. The features encode the similarity or difference of the color histograms and oriented gradient histograms of two vertically symmetric blocks. The domain-specific features are fast to compute from integral images in the Viola-Jones cascade-of-rejecters framework. The proposed features are evaluated on our own head-shoulder dataset that, in part, consists of the well-known INRIA pedestrian dataset. Experimental results show that the GSS features are marginally effective in reducing false alarms, and that the gradient GSS features are selected more often than the color GSS ones during feature selection.
Keywords: Pedestrian detection, cascade of rejecters, feature extraction, self-symmetry, HOG.
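A minimal sketch of the symmetry idea (illustrative only; the block layout, histogram settings and similarity measure below are assumptions, not the paper's exact features): compare the histograms of two blocks placed symmetrically about the vertical axis of a head-shoulder window.

```python
import numpy as np

def block_hist(block, bins=16):
    """Normalized gray-level histogram of an image block."""
    h, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def gss_feature(window, x, y, w, h):
    """Similarity of two blocks mirrored about the window's vertical centre line.

    (x, y, w, h) locates the left block; its partner sits at the same height
    on the opposite side of the window.
    """
    W = window.shape[1]
    left = window[y:y + h, x:x + w]
    right = window[y:y + h, W - x - w:W - x]
    hist_l, hist_r = block_hist(left), block_hist(right)
    return np.minimum(hist_l, hist_r).sum()   # histogram intersection in [0, 1]

rng = np.random.default_rng(1)
window = rng.random((64, 48))                 # stand-in head-shoulder window
print(round(gss_feature(window, x=4, y=8, w=12, h=16), 3))
```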
1420 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center, and different machine learning models were applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidity, clinical procedures and laboratory tests is analyzed, and a temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, owing to the sparsity of the temporal information it requires, but FSM together with an ensemble machine learning technique further improves the model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
1419 Structural Modelling of the LiCl Aqueous Solution: Using the Hybrid Reverse Monte Carlo (HRMC) Simulation
Authors: M. Habchi, S.M. Mesli, M. Kotbi
Abstract:
The Reverse Monte Carlo (RMC) simulation is applied to the study of the aqueous electrolyte LiCl·6H2O. On the basis of the available experimental neutron scattering data, RMC computes pair radial distribution functions in order to explore the structural features of the system. The obtained results include some unrealistic features. To overcome this problem, we use the Hybrid Reverse Monte Carlo (HRMC) method, which incorporates an energy constraint in addition to the commonly used constraints derived from experimental data. Our results show good agreement between experimental and computed partial distribution functions (PDFs) as well as a significant improvement in the pair partial distribution curves. This kind of study can be considered a useful test of a given interaction model for conventional simulation techniques.
Keywords: RMC simulation, HRMC simulation, energy constraint, screened potential, glassy state, liquid state, partial distribution function, pair partial distribution function.
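The extra energy constraint can be pictured with a toy acceptance rule (a sketch under assumptions; the real HRMC move set, experimental data and screened potential are not reproduced): a proposed configuration is accepted according to a combined cost mixing the chi-squared misfit to experimental data and a weighted potential-energy term.

```python
import math
import random

# Toy HRMC-style acceptance step: cost = chi^2 misfit to "experimental" data
# plus a weighted energy term.  All quantities below are stand-ins.
def chi2(g_model, g_exp, sigma=0.05):
    return sum((m - e) ** 2 for m, e in zip(g_model, g_exp)) / sigma ** 2

def accept(cost_old, cost_new, temperature=1.0):
    """Metropolis-like rule on the combined cost."""
    if cost_new <= cost_old:
        return True
    return random.random() < math.exp(-(cost_new - cost_old) / temperature)

w_energy = 0.1                                   # weight of the energy constraint
g_exp = [0.0, 0.4, 1.6, 1.2, 1.0]                # stand-in experimental g(r) points

def combined_cost(g_model, energy):
    return chi2(g_model, g_exp) + w_energy * energy

old = combined_cost([0.0, 0.5, 1.5, 1.1, 1.0], energy=-3.0)
new = combined_cost([0.0, 0.45, 1.55, 1.15, 1.0], energy=-3.4)
print("accept move:", accept(old, new))
```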
1418 AINA: Disney Animation Information as Educational Resources
Authors: Piedad Garrido, Fernando Repulles, Andy Bloor, Julio A. Sanguesa, Jesus Gallardo, Vicente Torres, Jesus Tramullas
Abstract:
With the emergence and development of Information and Communications Technologies (ICTs), Higher Education is experiencing rapid changes, not only in its teaching strategies but also in students' learning skills. However, we have noticed that students often have difficulty finding innovative, useful, and interesting learning resources for their work, owing to the lack of supervision in the selection of good query tools. This paper presents AINA, an Information Retrieval (IR) computer system aimed at providing motivating and stimulating content to both students and teachers working in different areas and at different educational levels. In particular, our proposal consists of an open virtual resource environment oriented to the vast universe of Disney comics and cartoons. Our test suite includes Disney's feature-length and short films, and we have performed some activities based on the Just-in-Time Teaching (JiTT) methodology. More specifically, it has been tested by groups of university and secondary school students.
Keywords: Information retrieval, animation, educational resources, JiTT.
1417 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations
Authors: Satyanadh Gundimada, Vijayan K Asari
Abstract:
A novel feature selection strategy to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions and varying expressions is proposed in this paper. The technique is especially applicable in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration, a property that makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are arranged in order of increasing distance between the sub-regions involved in the merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a substantial improvement in classification performance compared to baseline algorithms.
Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.
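The ordering criterion mentioned above can be written down directly (a sketch; the PCA projection and the actual region-merging strategy are omitted): for a candidate feature, compute the ratio of between-class variance to within-class variance over the training samples. The feature values and labels below are hypothetical.

```python
import numpy as np

def discriminability(features, labels):
    """Ratio of between-class to within-class variance for a 1-D feature.

    Higher values indicate better class separation; the paper applies such a
    criterion in the PCA domain (not shown here).
    """
    classes = np.unique(labels)
    overall_mean = features.mean()
    between = sum(
        features[labels == c].size * (features[labels == c].mean() - overall_mean) ** 2
        for c in classes
    )
    within = sum(
        ((features[labels == c] - features[labels == c].mean()) ** 2).sum()
        for c in classes
    )
    return between / within

# Hypothetical projected feature values for two subjects (classes 0 and 1).
f = np.array([1.0, 1.2, 0.9, 3.1, 2.8, 3.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(round(discriminability(f, y), 2))
```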
1416 VLSI Design of 2-D Discrete Wavelet Transform for Area-Efficient and High-Speed Image Computing
Authors: Mountassar Maamoun, Mehdi Neggazi, Abdelhamid Meraghni, Daoud Berkani
Abstract:
This paper presents a VLSI design approach for high-speed, real-time 2-D Discrete Wavelet Transform computing. The proposed architecture, based on a new and fast convolution approach, reduces the hardware complexity in addition to reducing the critical path to the multiplier delay. Furthermore, an advanced two-dimensional (2-D) discrete wavelet transform (DWT) implementation, with an efficient memory area, is designed to produce one output in every clock cycle, so that a very high speed is attained. The system is verified, using the JPEG2000 coefficient filters, on a Xilinx Virtex-II Field Programmable Gate Array (FPGA) device without accessing any external memory. The resulting computing rate is up to 270 M samples/s, and the (9,7) 2-D wavelet filter uses only 18 kb of memory (16 kb of first-in-first-out memory) for a 256×256 image. In this way, the developed design requires reduced memory and provides very high-speed processing as well as high PSNR quality.
Keywords: Discrete Wavelet Transform (DWT), Fast Convolution, FPGA, VLSI.
1415 Ageing and Partial Discharge Patterns in Oil-Impregnated Paper and Pressboard Insulation at High Temperature
Authors: R. H. Khawaja, T. R. Blackburn, M. Rehan Arif
Abstract:
The power transformer is the most expensive, indispensable and arguably most important item of equipment in a power system. Insulation failure in transformers can cause long-term interruption to supply and loss of revenue, and condition assessment of the insulation is thus an important maintenance procedure. Oil-impregnated transformer insulation consists mainly of organic materials, including mineral oil and cellulose-based paper and pressboard. The operating life of cellulose-based insulation, as with most organic insulation, depends heavily on its operating temperature rise above ambient. This paper reports results of a laboratory-based experimental investigation of partial discharge (PD) activity at high temperature in oil-impregnated insulation. The experiments reported here are part of an on-going programme aimed at investigating the way in which insulation deterioration can be monitored and quantified by means of partial discharge diagnostics. Partial discharge patterns were recorded and analysed during increasing and decreasing phases of the temperature. The effect of ageing of the insulation on the PD patterns in oil and oil-impregnated insulation is also considered.
Keywords: Ageing, high temperature, PD, oil-impregnated insulation.
1414 Signing the First Packet in Amortization Scheme for Multicast Stream Authentication
Authors: Mohammed Shatnawi, Qusai Abuein, Susumu Shibusawa
Abstract:
Signature amortization schemes have been introduced for authenticating multicast streams, in which a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each block being a number of packets; the signature packet in these schemes is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments. The main effective factor of an amortization scheme is its hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks, while other studies show that signing the last packet reduces the sender's delay. To our knowledge, there are no studies that show which is better, signing the first or the last packet, in terms of authentication probability and resistance to packet loss. In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF), appends the hash values of specific packets to other packets, then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of the receiver's delay. We discuss and evaluate the performance of our proposed scheme against schemes that sign the last packet of the block.
Keywords: multicast stream authentication, hash chain construction, signature amortization, authentication probability.
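A minimal sketch of the signing-the-first-packet idea (the single-chain layout below is a simplified placeholder, not the MCF construction itself, and the "signature" is a stand-in for a real digital signature): each packet carries the hash of the packet that follows it, so one signature on the first packet authenticates the whole block.

```python
import hashlib

# Simplified hash-chain block: packet i carries H(packet i+1 with its appended
# hash), and only the first packet is signed.
def build_block(payloads):
    packets = [None] * len(payloads)
    next_hash = b""
    for i in range(len(payloads) - 1, -1, -1):      # build from last to first
        packets[i] = payloads[i] + next_hash
        next_hash = hashlib.sha256(packets[i]).digest()
    signature = b"SIGN(" + next_hash + b")"         # sign the hash of the first packet only
    return packets, signature

def verify_block(packets, signature):
    next_hash = b""
    for i in range(len(packets) - 1, -1, -1):
        next_hash = hashlib.sha256(packets[i]).digest()
        if i > 0 and not packets[i - 1].endswith(next_hash):
            return False                             # chain broken
    return signature == b"SIGN(" + next_hash + b")"

pkts, sig = build_block([b"p1", b"p2", b"p3"])
print(verify_block(pkts, sig))        # True for an intact block
```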
1413 An Algorithm for Secure Visible Logo Embedding and Removing in Compression Domain
Authors: Hongyuan Li, Guang Liu, Yuewei Dai, Zhiquan Wang
Abstract:
Digital watermarking is the process of embedding information into a digital signal, and it can be used in digital rights management (DRM) systems. The visible watermark (often called a logo), which can often be seen in TV programs, indicates the owner of the copyright and protects the copyright in an active way. However, most schemes do not consider the visible watermark removal process. To solve this problem, a visible watermarking scheme with both embedding and removal processes is proposed, under the control of a secure template. The template generates different versions of the watermark that appear visually identical to different users. Users with the right key can completely remove the watermark and recover the original image, while unauthorized users are prevented from removing it. Experimental results show that our watermarking algorithm achieves good visual quality and is hard for unauthorized users to remove. Additionally, authorized users can completely remove the visible watermark and recover the original image with good quality.
Keywords: digital watermarking, visible and removable watermark, secure template, JPEG compression
1412 Design of Reconfigurable Parasitic Antenna for Single RF Chain MIMO Systems
Authors: C. Arunachalaperumal, B. Chandru, J. M. Mathana
Abstract:
In recent years, parasitic antennas have played a major role in MIMO systems because of their gain and spectral efficiency. In this paper, a single RF chain MIMO transmitter is designed using a reconfigurable parasitic antenna. Spatial Modulation (SM) is a recently proposed MIMO scheme that activates only one antenna at a time. SM entirely avoids ICI and IAS, and only requires a single RF chain at the transmitter: a single transmit antenna is switched on for data transmission while all the other antennas are kept silent. The purpose of the parasitic elements is to change the radiation pattern of the radio waves emitted from the driven element, directing them in one direction and hence introducing transmit diversity. A diode is connected between the patch and the ground; by changing its state (ON and OFF), the parasitic element acts as a reflector or a director and is also capable of steering the azimuth and elevation angles. This can be achieved by changing the input impedance of each parasitic element through the single RF chain. The switching of the diodes selects a single parasitic antenna for spatial modulation. This antenna is expected to achieve maximum gain with the desired efficiency.
Keywords: MIMO system, single RF chain, Parasitic Antenna.
1411 The Effect of Cow Reproductive Traits on Lifetime Productivity and Longevity
Authors: Lāsma Cielava, Daina Jonkus, Līga Paura
Abstract:
The age at first calving (AFC) is one of the most important factors with a significant impact on cow productivity in different lactations and over the whole lifetime. A belated AFC leads to reduced reproductive performance and is one of the main reasons for reduced longevity. Cows that calved in the period 2001-2007 and completed at least four lactations in this time were included in the database. Data were obtained from 68841 crossbred Holstein Black and White (HM), crossbred Latvian Brown (LB), and Latvian Brown genetic resources (LBGR) cows. Cows were distributed into four groups depending on age at first calving. The longest lifespan was found for LBGR cows, but they were also characterized by the lowest lifetime milk yield and milk yield per day of life. HM breed cows had the shortest lifespan, but in a lifespan of 2862.2 days they produced on average 37916.4 kg of milk, corresponding to 13.2 kg of milk per day of life. HM breed cows were also characterized by longer calving intervals (CI) in the first four lactations, while LBGR cows had the shortest CI in the study group. Age at first calving significantly affected the length of CI in different lactations (p<0.05). HM cows that first calved at >30 months of age had the longest fourth-lactation CI of all study groups (421.4 days). The LBGR cows were characterized by the shortest CI, with a slight increase in the second and third lactations. Age at first calving had a significant impact on cow age at each calving: cows with an age at first calving of <24 months (on average 580.5 days) were 2156.7 days (5.9 years) old at the time of the fifth calving, whereas cows with an age at first calving of >30 months (932.6 days) were 2560.9 days (7.3 years) old at the fifth calving.
Keywords: Age at first calving, calving interval, longevity, milk yield.
1410 Using FEM for Prediction of Thermal Post-Buckling Behavior of Thin Plates During Welding Process
Authors: Amin Esmaeilzadeh, Mohammad Sadeghi, Farhad Kolahan
Abstract:
Arc welding is an important joining process widely used in many industrial applications, including the production of automobiles, ship structures and metal tanks. In the welding process, the moving electrode causes a highly non-uniform temperature distribution that leads to residual stresses and various deviations, especially buckling distortions in thin plates. In order to control these deviations and increase the quality of welded plates, a fixture can be used as a practical, low-cost and highly efficient method. In this study, a coupled thermo-mechanical finite element model is coded in the software ANSYS to simulate the behavior of thin plates located by a 3-2-1 positioning system during the welding process. Computational results are compared with recent similar works to validate the finite element models. The agreement between the results of the proposed model and other reported data shows that finite element modelling can accurately predict the behavior of welded thin plates.
Keywords: Welding, thin plate, buckling distortion, fixture locators, finite element modelling.
1409 Numerical Study of Vertical Wall Jets: Influence of the Prandtl Number
Authors: Amèni Mokni, Hatem Mhiri, Georges Le Palec, Philippe Bournot
Abstract:
This paper presents a numerical investigation of a laminar isothermal plane two-dimensional wall jet. Special attention is paid to the effect of the inlet conditions at the nozzle exit on the hydrodynamic and thermal characteristics of the flow. The behaviour of various fluids evolving in both forced and mixed convection regimes near a vertical plane plate is studied. The system of governing equations is solved with an implicit finite difference scheme, using a staggered non-uniform grid for numerical stability. The obtained results show that the effect of the Prandtl number is significant in the plume region, in which the jet flow is governed by buoyancy forces. Further, for increasing X values, the buoyancy forces become dominant and a certain agreement between the temperature profiles is observed, which shows that the velocity profile no longer influences the wall temperature evolution in this region. Fluids with a low Prandtl number warm up more markedly, because for such fluids the effect of heat diffusion is greater.
Keywords: Forced convection, Mixed convection, Prandtl number, Wall jet.
1408 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique
Authors: S. Jalaja, A. M. Vijaya Prakash
Abstract:
A recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transposed and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry-save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication implemented up to n bits. As a result, we design a modified N-tap transposed and parallel symmetric FIR filter structure using the Karatsuba algorithm, and the mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly lower area-delay product than the existing block implementation, and by adopting the retiming technique the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and implemented using the Cadence EDA tool. The synthesized results show better performance for different word lengths and block sizes. The design achieves a reduction in switching activity and low power consumption, evaluated with and without retiming for different circuit combinations. The proposed structure achieves a power reduction of more than half, both with and without retiming, compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power with the retiming technique, and the CSA method achieves 57% and 77% less power with the retiming technique, compared to the previously proposed design.
Keywords: Carry save adder Karatsuba multiplication, mid-range Karatsuba multiplication, modified FFA, transposed filter, retiming.
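To recall the multiplication algorithm at the heart of the architecture (a plain software sketch of the classic Karatsuba recursion, not the mid-range variant or its hardware mapping), an n-bit product is formed from three half-size products per recursion level.

```python
def karatsuba(x, y):
    """Classic recursive Karatsuba multiplication of non-negative integers."""
    if x < 16 or y < 16:                 # small operands: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    xh, xl = x >> half, x & ((1 << half) - 1)      # split operands into halves
    yh, yl = y >> half, y & ((1 << half) - 1)
    p_hi = karatsuba(xh, yh)                       # product of high halves
    p_lo = karatsuba(xl, yl)                       # product of low halves
    p_mid = karatsuba(xh + xl, yh + yl) - p_hi - p_lo   # cross terms, one multiply
    return (p_hi << (2 * half)) + (p_mid << half) + p_lo

a, b = 0xDEADBEEF, 0x12345678
print(karatsuba(a, b) == a * b)   # True: three half-size multiplies per level
```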