Search results for: Original KNN
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 646

136 Low Resolution Single Neural Network Based Face Recognition

Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum

Abstract:

This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts, preprocessing and face classification. The preprocessing part converts original images into blurred images using an average filter and equalizes their histograms (lighting normalization). A bi-cubic interpolation function is applied to the equalized image to obtain the resized image. The resized image is a low-resolution image that allows faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is its use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images per subject. The empirical results give accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 sec per image.
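
A minimal sketch of the preprocessing chain described above (average filter, histogram equalization, bicubic down-sampling, vectorization), written with OpenCV; the 3×3 kernel and the 28×28 target size are illustrative assumptions, as the abstract does not state them:

```python
import cv2
import numpy as np

def preprocess(gray_face, size=(28, 28)):
    """Blur, normalize lighting, and shrink a grayscale (uint8) face to a feature vector."""
    blurred = cv2.blur(gray_face, (3, 3))                # average filter
    equalized = cv2.equalizeHist(blurred)                # lighting normalization
    small = cv2.resize(equalized, size, interpolation=cv2.INTER_CUBIC)
    return small.astype(np.float32).ravel() / 255.0      # vectorized network input
```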

Keywords: Average filtering, Bicubic Interpolation, Neurons, vectorization.

135 Embedded Semi-Fragile Signature Based Scheme for Ownership Identification and Color Image Authentication with Recovery

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a novel scheme is proposed for ownership identification and color image authentication by deploying cryptography and digital watermarking. The color image is first transformed from RGB to the YST color space, exclusively designed for watermarking. Following the color space transformation, each channel is divided into 4×4 non-overlapping blocks and the central 2×2 sub-block of each is selected. Depending upon the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication and recovery information. The size and position of the sub-block are important for correct localization, enhanced security and fast computation. As YS ⊥ T, the T channel is suitable for embedding the recovery information apart from the ownership and authentication information; therefore each 4×4 block of the T channel, along with the ownership information, is processed by SHA160 to compute a content-based hash that is unique and invulnerable to birthday attacks or hash collisions, instead of using MD5, which may raise the condition H(m)=H(m'). For recovery, the intensity mean of each 4×4 block of each channel is computed and encoded into eight bits. For watermark embedding, key-based mapping of blocks is performed using the 2D Torus Automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and has the ability to recover the original work with probability near one.
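
As an illustration of the per-block bookkeeping, the following sketch computes the SHA160 (SHA-1) content hash and the 8-bit intensity mean for one 4×4 block; clearing the LSBs of the central 2×2 sub-block before hashing mirrors the embedding step, while the owner-ID handling and the two-bit depth are assumptions:

```python
import hashlib
import numpy as np

def block_payload(block, owner_id: bytes):
    """Hash and recovery byte for one 4x4 channel block (uint8 ndarray)."""
    content = block.copy()
    content[1:3, 1:3] &= 0b11111100        # clear LSBs of the central 2x2 first
    digest = hashlib.sha1(content.tobytes() + owner_id).digest()
    mean_byte = int(round(float(block.mean()))) & 0xFF   # 8-bit intensity mean
    return digest, mean_byte
```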

Keywords: Hash Collision, LSB, MD5, PSNR, SHA160

134 Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines

Authors: Mrs. K. Kavitha, S. Arivazhagan

Abstract:

A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. The classification accuracy can be improved only if both the feature extraction and the classifier selection are appropriate. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run Length feature extraction is employed along with Principal Components and Independent Components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the first forty of the selected bands, and from the GLRLMs the Run Length features for individual pixels are calculated. Principal Components are calculated for the next forty bands, and Independent Components for the remaining forty. As Principal and Independent Components have the ability to represent the textural content of pixels, they are treated as features. The combination of the Run Length features, Principal Components and Independent Components forms the Combined Features used for classification. An SVM with a Binary Hierarchical Tree is used to classify the hyperspectral image. Results are validated against ground truth and accuracies are calculated.
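
A rough scikit-learn sketch of the feature combination: GLRLM extraction has no standard library call and is passed in precomputed, the component counts are assumptions, and a plain one-vs-one SVC stands in for the paper's Binary Hierarchical Tree SVM:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC

def combined_features(bands, run_length_feats):
    """bands: (n_pixels, 120) selected subset; run_length_feats: precomputed GLRLM stats."""
    pcs = PCA(n_components=10).fit_transform(bands[:, 40:80])
    ics = FastICA(n_components=10, random_state=0).fit_transform(bands[:, 80:120])
    return np.hstack([run_length_feats, pcs, ics])

# clf = SVC(kernel="rbf").fit(combined_features(X_train, rl_train), y_train)
```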

Keywords: Multi-class, Run Length features, PCA, ICA, classification and Support Vector Machines.

133 An Efficient Biometric Cryptosystem using Autocorrelators

Authors: R. Bremananth, A. Chitra

Abstract:

Cryptography provides a secure means of information transmission over an insecure channel. It authenticates messages based on a key, not on the user, and it requires a lengthy key to encrypt and decrypt the messages being sent and received. But these keys can be guessed or cracked. Moreover, maintaining and sharing lengthy random keys in the enciphering and deciphering process is a critical problem in cryptography systems. A new approach is described for generating a crypto key from a person's iris pattern. In the biometric field, a template created by a biometric algorithm can only be authenticated by the same person. Among biometric templates, iris features can efficiently distinguish individuals and produce fewer false positives in larger populations. This type of iris code distribution exhibits little intra-class variability, which aids the cryptosystem in confidently decrypting messages with an exact matching of the iris pattern. In the proposed approach, the iris features are extracted using multi-resolution wavelets, producing a 135-bit iris code for each subject that is used for encrypting/decrypting the messages. Autocorrelators are used to recall original messages from the partially corrupted data produced by the decryption process. The approach is intended to resolve the repudiation and key-management problems. Results were analyzed for both a conventional iris cryptography system (CIC) and a non-repudiation iris cryptography system (NRIC). They show that this new approach provides considerably high authentication in the enciphering and deciphering processes.
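
The autocorrelator can be read as a one-layer autoassociative memory. A minimal sketch, assuming bipolar 135-bit codes and standard outer-product (Hebbian) storage with one-step recall, which is one conventional realization rather than the authors' exact construction:

```python
import numpy as np

def train_autocorrelator(patterns):
    """patterns: iterable of (135,) vectors in {-1, +1}."""
    W = sum(np.outer(p, p).astype(float) for p in patterns)
    np.fill_diagonal(W, 0.0)               # no self-coupling
    return W

def recall(W, corrupted):
    """One synchronous update recovers the nearest stored code."""
    return np.where(W @ corrupted >= 0, 1, -1)
```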

Keywords: Autocorrelators, biometric cryptography, iris patterns, wavelets.

132 Effects of Signaling on the Performance of Directed Diffusion Routing Protocol

Authors: Apidet Booranawong

Abstract:

In the original directed diffusion routing protocol, a sink requests sensing data from a source node by flooding interest messages through the network. The source then finds the sink by sending exploratory data messages to all nodes from which interest messages arrived. This protocol signaling can cause heavy traffic in the network, radio interference, collisions, high energy consumption at the sensor nodes, and so on. Motivated by this problem, this paper investigates the effect of sending interest and exploratory data messages on the performance of the directed diffusion routing protocol. We demonstrate the problems that arise from employing the directed diffusion protocol in mobile wireless environments. For this purpose, we perform a set of experiments using NS2 (network simulator 2). Two radio propagation models, two-ray ground reflection with and without shadow fading, are included to investigate the effect of signaling. The simulation results show that the amount of protocol signaling sent and received for interest and exploratory data messages is larger than for other protocol signals, especially under the shadowing model. Additionally, the number of exploratory data messages is the largest within one round of the protocol procedure.

Keywords: Directed diffusion, Flooding, Interest message, Exploratory data message, Radio propagation model.

131 Surrogate based Evolutionary Algorithm for Design Optimization

Authors: Maumita Bhattacharya

Abstract:

Optimization is often a critical issue in system design problems. Evolutionary Algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, finding optimal solutions to complex, high-dimensional, multimodal problems often requires computationally expensive function evaluations and hence can be practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time by the controlled use of meta-models to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations such as model formation involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also avoids the high computational expense of the additional clustering required by the original DAFHEA framework. The proposed framework has been tested on several benchmark functions, and the empirical results illustrate the advantages of the proposed technique.
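
The surrogate idea can be sketched in a few lines: evaluate the expensive objective only on part of each generation and score the rest with an SVM regressor trained on all true evaluations so far. The ratios, the Gaussian mutation, and the selection rule below are illustrative assumptions, not the DAFHEA-II control rules:

```python
import numpy as np
from sklearn.svm import SVR

def surrogate_ea(f, dim=5, pop=30, gens=40, true_frac=0.3, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    ax, ay = list(X), [f(x) for x in X]                  # archive of true evaluations
    for _ in range(gens):
        cand = np.vstack([X, X + rng.normal(0, 0.3, X.shape)])  # parents + mutants
        model = SVR(kernel="rbf").fit(np.array(ax), np.array(ay))
        score = model.predict(cand)                      # cheap approximate fitness
        for i in rng.choice(len(cand), max(1, int(true_frac * pop)), replace=False):
            score[i] = f(cand[i])                        # controlled true evaluations
            ax.append(cand[i]); ay.append(score[i])
        X = cand[np.argsort(score)[:pop]]                # keep the best (minimization)
    best = int(np.argmin(ay))
    return ax[best], ay[best]

# x_best, f_best = surrogate_ea(lambda x: float(np.sum(x**2)))   # sphere benchmark
```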

Keywords: Evolutionary algorithm, Fitness function, Optimization, Meta-model, Stochastic method.

130 A Combined Conventional and Differential Evolution Method for Model Order Reduction

Authors: J. S. Yadav, N. P. Patidar, J. Singhai, S. Panda, C. Ardil

Abstract:

In this paper a mixed method combining an evolutionary and a conventional technique is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). In the conventional technique, the advantages of the Mihailov stability criterion and the Continued Fraction Expansions (CFE) technique are combined: the reduced denominator polynomial is derived using the Mihailov stability criterion and the numerator is obtained by matching the quotients of the Cauer second form of the continued fraction expansion. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique. For the evolutionary method, the recently proposed Differential Evolution (DE) optimization technique is employed. The DE method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced order model for a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM whose numerator and denominator polynomials are both obtained by the conventional method, to show its superiority.
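
The evolutionary half of the method can be sketched with scipy's differential evolution: the CFE-derived numerator is held fixed while DE searches the reduced denominator coefficients that minimize the ISE between the unit step responses. The transfer functions below are placeholders, not the paper's example:

```python
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

# Placeholder models: a 3rd-order "original" system and a fixed numerator
# assumed to come from the CFE step.
num_full, den_full = [2.0], [1.0, 6.0, 11.0, 6.0]
num_reduced = [0.4]
t = np.linspace(0.0, 10.0, 500)
_, y_full = signal.step((num_full, den_full), T=t)

def ise(den_coeffs):
    """Integral squared error between unit step responses."""
    _, y_red = signal.step((num_reduced, [1.0, *den_coeffs]), T=t)
    return np.trapz((y_full - y_red) ** 2, t)

result = differential_evolution(ise, bounds=[(0.01, 10.0), (0.01, 10.0)])
# result.x holds the reduced denominator coefficients minimizing the ISE
```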

Keywords: Reduced Order Modeling, Stability, Mihailov Stability Criterion, Continued Fraction Expansions, Differential Evolution, Integral Squared Error.

129 Feasibility Study on Designing a Flat Loop Heat Pipe (LHP) to Recover the Heat from Exhaust of a Gas Turbine

Authors: M. H. Ghaffari

Abstract:

A theoretical study is conducted to design a flat Loop Heat Pipe (LHP) and explore the effect of different parameters, such as heat loads, the tube size of the piping system, wick thickness, porosity and hole size, on its performance and capability. This paper presents a steady state model that describes the different phenomena inside an LHP. Loop Heat Pipes (LHPs) are two-phase heat transfer devices with capillary pumping of a working fluid. Owing to their design, compared with conventional heat pipes, and the special properties of their capillary structure, they are capable of transferring heat efficiently over distances of up to several meters at any orientation in the gravity field, or several meters in a horizontal position. The theoretical model is described by different relations that satisfy important limits such as the capillary and nucleate boiling limits. An algorithm is developed to predict the size of an LHP satisfying the limitations mentioned above for a wide range of applied loads. Finally, to assess and evaluate the algorithm and all the relations considered, we used it to design a new kind of LHP to recover the heat from the exhaust of an actual gas turbine. The results showed that the LHP can be used as a very efficient device to recover heat even at high loads (the exhaust of a gas turbine). The sizes of all parts of the LHP were obtained using the developed algorithm.
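
At the core of any such sizing algorithm is the capillary limit check: the maximum capillary pressure the wick can sustain must exceed the total loop pressure drop. A one-line sketch of that test (the pressure-drop terms are assumed to be computed elsewhere in the design loop):

```python
def capillary_limit_ok(sigma, r_eff, dp_vapor, dp_liquid, dp_gravity=0.0):
    """2*sigma/r_eff is the maximum capillary head of a wick with effective
    pore radius r_eff and surface tension sigma; it must cover all loop losses."""
    return 2.0 * sigma / r_eff >= dp_vapor + dp_liquid + dp_gravity
```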

Keywords: Loop Heat Pipe, Heat Load, Liquid-Vapor Interface, Heat Transfer, Design Algorithm

128 Material Density Mapping on Deformable 3D Models of Human Organs

Authors: Petru Manescu, Joseph Azencot, Michael Beuve, Hamid Ladjal, Jacques Saade, Jean-Michel Morreau, Philippe Giraud, Behzad Shariat

Abstract:

Organ motion, especially respiratory motion, is a technical challenge for radiation therapy planning and dosimetry. This motion induces displacements and deformation of the organ tissues within the irradiated region, which need to be taken into account when simulating dose distribution during treatment. Finite element modeling (FEM) can provide great insight into the mechanical behavior of the organs, since it is based on the biomechanical material properties, the complex geometry of the organs, and the anatomical boundary conditions. In this paper we present an original approach that offers the possibility to combine image-based biomechanical models with particle transport simulations. We propose a new method to map material density information extracted from CT images onto deformable tetrahedral meshes. Based on the principle of mass conservation, our method can correlate the density variation of organ tissues with geometrical deformations during the different phases of the respiratory cycle. The first results are particularly encouraging, as local error quantification of the density mapping on the organ geometry and of the density variation with organ motion was performed to evaluate and validate our approach.

Keywords: Biomechanical simulation, dose distribution, image guided radiation therapy, organ motion, tetrahedral mesh, 4D-CT.

127 Locating Center Points for Radial Basis Function Networks Using Instance Reduction Techniques

Authors: Rana Yousef, Khalil el Hindi

Abstract:

The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the sets of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two reference networks: RBF networks that use all instances of the training set as center points (RBF-ALL), which achieve high classification accuracy, and Probabilistic Neural Networks (PNN), which require less training time. The results showed that RBF networks trained using sets of centers located by the noise-filtering techniques (ALLKNN and ENN), rather than by pure reduction techniques, produce the best classification accuracy. These networks require less training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
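
A small sketch of the ENN idea applied to center selection, assuming scikit-learn for the neighbour queries; the Gaussian width heuristic is an assumption, and the paper's full training procedure is not reproduced:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def enn_filter(X, y, k=3):
    """Edited Nearest Neighbour: keep instances their k neighbours agree with."""
    keep = []
    for i in range(len(X)):
        knn = KNeighborsClassifier(n_neighbors=k)
        knn.fit(np.delete(X, i, axis=0), np.delete(y, i))
        if knn.predict(X[i:i + 1])[0] == y[i]:
            keep.append(i)
    return X[keep], y[keep]

def rbf_activations(X, centers, width=1.0):
    """Gaussian basis activations for centers chosen by the filter."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))
```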

Keywords: Radial basis function networks, Instance-based reduction, PNN.

126 A Detailed Experimental Study of the Springback Anisotropy of Three Metals using the Stretching-Bending Process

Authors: A. Soualem

Abstract:

Springback is a significant problem in the sheet metal forming process. When the tools are released after the forming stage, the product springs back because of the action of internal stresses. In many cases the deviation of form is too large and compensation of the springback is necessary. Precise prediction of the springback of a product is increasingly important for tool design and compensation, because of the higher ratio of yield stress to elastic modulus in modern sheet materials. The main objective of this paper was to study the effect of anisotropy on springback for three rolling directions: 0°, 45° and 90°. At the same time, we highlight the influence of three different metallic materials: aluminum, steel and galvanized steel. The originality of our approach lies in tests performed by adapting a U-type stretching-bending device to a tensile testing machine, with which we studied and quantified the variation of springback according to the rolling direction. We also show the role of lubrication in reducing springback. Moreover, we present the defects that appear in the deep drawing process and the many parameters that influence springback. Finally, our results lead us to understand the influence of grain orientation, for different metallic materials, on springback, and to draw some conclusions on how to design deep drawing tools. In addition, the work conducted represents a fundamental contribution to the discussion of industrial applications.

Keywords: Deep drawing, grain orientation, laminate tool, springback.

125 Hacking's 'Between Goffman and Foucault': A Theoretical Frame for Criminology

Authors: Tomás Speziale

Abstract:

This paper aims to analyse how Ian Hacking states the theoretical basis of his research on the classification of people. Although all his early philosophical education had been based on Foucault, it is also true that Erving Goffman’s perspective provided him with epistemological and methodological tools for understanding face-to-face relationships. Hence, all his works must be thought of as social science texts that combine the research on how individuals are constituted ‘top-down’ (as in Foucault) with the inquiry into how people renegotiate ‘bottom-up’ the classifications about them. Thus, Hacking's proposal constitutes a middle ground between the French philosopher and the American sociologist. Placing himself between both authors allows Hacking to build a frame that is expected to adjust to the social sciences’ main particularity: the fact that they study interactive kinds. These are kinds of people, which implies that those who are classified can change in certain ways that prompt the need for changing the previous classifications themselves. It is all about the interaction between the labelling of people and the people who are classified. Consequently, understanding the way in which Hacking uses Foucault’s and Goffman’s theories is essential to fully comprehend the social dynamic between individuals and concepts, what Bert Hansen had called dialectical realism. His theoretical proposal, therefore, is not only valuable because it combines diverse perspectives, but also because it constitutes an utterly original and relevant framework for sociological theory and particularly for criminology.

Keywords: Classification of people, Foucault's archaeology, Goffman's interpersonal sociology, interactive kinds.

124 Power Ultrasound Application on Convective Drying of Banana (Musa paradisiaca), Mango (Mangifera indica L.) and Guava (Psidium guajava L.)

Authors: Erika K. Méndez, Carlos E. Orrego, Diana L. Manrique, Juan D. Gonzalez, Doménica Vallejo

Abstract:

High moisture content in fruits generates post-harvest problems such as mechanical, biochemical, microbial and physical losses. Dehydration, which is based on reducing the water activity of the fruit, is a common option for overcoming such losses. However, regular hot air drying can negatively affect the quality properties of the fruit due to the long residence time at high temperature. Power ultrasound (US) application during convective drying has been used as a novel method to enhance the drying rate and, consequently, decrease the drying time. In the present study, a new approach was tested to evaluate the effect of US on the drying time, the final antioxidant activity (AA) and the total polyphenol content (TPC) of banana slices (BS), mango slices (MS) and guava slices (GS). The drying kinetics were also studied with nine different models, from which water effective diffusivities (Deff), with or without shrinkage corrections, were calculated. Compared with the corresponding control tests, US-assisted drying of the fruit slices reduced drying time by between 16.23 and 30.19%, 11.34 and 32.73%, and 19.25 and 47.51% for the MS, BS and GS respectively. Considering shrinkage effects, the calculated Deff values ranged from 1.67×10⁻¹⁰ to 3.18×10⁻¹⁰ m²/s, from 3.96×10⁻¹⁰ to 5.57×10⁻¹⁰ m²/s, and from 4.61×10⁻¹⁰ to 8.16×10⁻¹⁰ m²/s for the BS, MS and GS samples respectively. Reductions of TPC and AA (as DPPH) relative to the original content in fresh fruit were observed in all drying assays.
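
For reference, the standard Fickian slab route from thin-layer drying data to Deff: the slope of ln(MR) versus time equals −π²·Deff/(4L²) for a slab of half-thickness L. A sketch of that extraction step (the paper's nine kinetic models and shrinkage corrections are not reproduced):

```python
import numpy as np

def effective_diffusivity(t, moisture_ratio, half_thickness):
    """Deff from the slope of ln(MR) vs t: slope = -pi^2 * Deff / (4 L^2)."""
    slope = np.polyfit(t, np.log(moisture_ratio), 1)[0]
    return -slope * 4.0 * half_thickness ** 2 / np.pi ** 2
```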

Keywords: Banana, drying, effective diffusivity, guava, mango, ultrasound.

123 Optimization of a New Three-Phase High Voltage Power Supply for Industrial Microwaves Generators with N Magnetrons by Phase (Treated Case N=1)

Authors: M. Bassoui, M. Ferfra, M. Chraygane, M. Ould Ahmedou, N. Elghazal, A. Belhaiba

Abstract:

Currently, high voltage power supplies for microwave generators with one magnetron use a single-phase transformer with a magnetic shunt. To contribute to technological innovation in the manufacture of power supplies for magnetrons in microwave ovens for domestic or industrial use, this original work treats the optimization of a new three-phase high voltage power supply for industrial microwave generators with N magnetrons per phase (treated case N=1), based on its modeling in Matlab-Simulink. The design of this power supply uses three equivalent π-quadripole models of the new three-phase transformer with a magnetic shunt on each phase. Each model supplies at its output a voltage doubler cell, composed of a capacitor and a diode, which in turn supplies a single magnetron. In this work we define a strategy that aims to reduce the volume of the transformer and the weight and cost of the entire high voltage power supply system, while respecting the conditions recommended by the manufacturer concerning the current flowing in each magnetron (Imax < 1.2 A, IAv ≈ 300 mA).

Keywords: Optimization, three-phase transformer, modeling, power supply, magnetrons, Matlab-Simulink, high voltage.

122 BeamGA Median: A Hybrid Heuristic Search Approach

Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte

Abstract:

The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations. Genomes with an equal number of genes but different gene order can be represented as permutations. In this paper, an algorithm, namely BeamGA median, is proposed that combines a heuristic search approach (local beam search) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. Any genome rearrangement distance can be used in this approach; in this paper, we use the reversal distance. To the best of our knowledge, the proposed approach has not been applied before to the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and enhance the time convergence of the genetic approach. We were able to handle permutations with a large number of genes within acceptable time performance and with the same or better accuracy compared to existing algorithms.
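
A skeleton of the two-stage idea, hedged: beam-search candidates seed the GA population, and a signed-reversal mutation drives the search toward a lower median score. The distance function is left abstract (exact signed reversal distance needs its own machinery), and the paper's common-interval operators are not reproduced:

```python
import random

def median_score(candidate, triple, dist):
    return sum(dist(candidate, p) for p in triple)

def beam_ga(triple, dist, beam_seeds, gens=200, pop=30, seed=0):
    """beam_seeds: candidate signed permutations from the local beam search.
    dist: a genome rearrangement distance, e.g. signed reversal distance."""
    rng = random.Random(seed)
    population = [list(c) for c in beam_seeds]
    for _ in range(gens):
        k = min(3, len(population))
        parent = min(rng.sample(population, k),
                     key=lambda c: median_score(c, triple, dist))
        child = parent[:]
        i, j = sorted(rng.sample(range(len(child)), 2))
        child[i:j + 1] = [-g for g in reversed(child[i:j + 1])]  # signed reversal
        population.append(child)
        population.sort(key=lambda c: median_score(c, triple, dist))
        population = population[:pop]                            # elitist truncation
    return population[0]
```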

Keywords: Median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance.

121 A Detailed Experimental Study and Evaluation of Springback under Stretch Bending Process

Authors: A. Soualem

Abstract:

The design of multi-stage deep drawing processes requires the evaluation of many process parameters, such as the intermediate die geometry, the blank shape, the sheet thickness, the blank holder force, friction, lubrication, etc. These process parameters have to be determined for the optimum forming conditions before the process design. In general, sheet metal forming may involve stretching, drawing or various combinations of these basic modes of deformation. It is important to determine the influence of the process variables in the design of sheet metal working processes. In particular, the punch and die corner radii in deep drawing affect formability. At the same time, predicting the springback of sheet metals after deep drawing is an important issue for the control of manufacturing processes. Nowadays, the importance of this problem increases because of the use of steel sheets with high yield stress and of aluminum alloys.

The aim of this paper is to give a better understanding of springback and its effect in various sheet metal forming processes, such as expansion and restrained deep drawing in the cup drawing process, by varying the die radius and the lubricant for two commercially available materials, galvanized steel and aluminum sheet. To achieve these goals, experiments were carried out and compared with other results. The originality of our approach lies in tests performed by adapting a U-type stretching-bending device to a tensile testing machine, with which we studied and quantified the variation of the springback.

Keywords: Deep drawing, expansion, restrained deep drawing, springback.

120 Maximizer of the Posterior Marginal Estimate of Phase Unwrapping Based On Statistical Mechanics of the Q-Ising Model

Authors: Yohei Saika, Tatsuya Uezu

Abstract:

We constructed a phase unwrapping method for typical wave-fronts by utilizing the maximizer of the posterior marginal (MPM) estimate corresponding to the equilibrium statistical mechanics of the three-state Ising model on a square lattice, on the basis of an analogy between statistical mechanics and Bayesian inference. We investigated the static properties of the MPM estimate from a phase diagram using Monte Carlo simulation for a typical wave-front in synthetic aperture radar (SAR) interferometry. The simulations clarified that the surface-consistency conditions are useful for extending the range of phases over which the MPM estimate unwraps with a high degree of accuracy, and that introducing prior information into the MPM estimate also makes it possible to extend this range, under the constraint of the surface-consistency conditions, with a high degree of accuracy. We also found that the MPM estimate could reconstruct the original wave-fronts more smoothly if the hyper-parameters corresponding to temperature were tuned appropriately to exploit fluctuations around the MAP solution. Also, from the viewpoint of the statistical mechanics of the Q-Ising model, the MPM estimate can be regarded as a method for searching for the ground state by utilizing thermal fluctuations under the constraint of the surface-consistency condition.

Keywords: Bayesian inference, maximizer of the posterior marginal estimate, phase unwrapping, Monte Carlo simulation, statistical mechanics

119 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical/statistical applications are developed with ever more complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to execute faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communications, data locality, memory sizes (cache and RAM), synchronizations, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method, developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner, with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, in which a considerable improvement in execution time was achieved: the time was reduced by around 96% in the best case tested, between the original serial version and the automatic parallel version.

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.

118 A Novel Machining Signal Filtering Technique: Z-notch Filter

Authors: Nuawi M. Z., Lamin F., Ismail A. R., Abdullah S., Wahid Z.

Abstract:

A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise nuisance from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor and the machine environment. By correlating the noise components with the measured machining signal, the components of interest in the measured signal, which were less affected by the noise, could be extracted. Thus, in terms of noise content, the filtered signal is more reliable to analyse than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal. A larger scatter space and a higher value of Z∞ indicate that the signal is highly corrupted by noise. This method can be utilised as a proactive tool for evaluating the noise content in a signal. Evaluating, and especially eliminating, the noise content is very important for machining-operation fault diagnosis. The Z-notch filtering technique was reliable in extracting the noise component from the measured machining signal with high efficiency. Even though the measured signal was exposed to high noise disruption, the signal generated from the interaction between the cutting tool and the workpiece could still be acquired. Therefore, the noise interruption that could change the original signal features and consequently deteriorate the useful sensory information can be eliminated.
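
A generic sketch of notch-filtering a machining signal at an identified noise frequency, using scipy; the Z-notch design itself is the paper's contribution and is not public API, so scipy's iirnotch stands in to illustrate the idea, and the sampling rate and noise frequency below are assumptions:

```python
import numpy as np
from scipy import signal

fs = 10_000.0                       # sampling rate, Hz (assumed)
f_noise = 50.0                      # noise component identified from ambient sound
b, a = signal.iirnotch(w0=f_noise, Q=30.0, fs=fs)
# filtered = signal.filtfilt(b, a, machining_signal)   # zero-phase filtering
```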

Keywords: Digital signal filtering, I-kaz method, machining monitoring, noise cancelling, sound.

117 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops

Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding

Abstract:

Background: The facility layout problem (FLP) is an NP-complete (non-deterministic polynomial-time) problem for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows. For example, cafeterias for troops, with many types of equipment, suffer chaotic processes during dining. Objective: This article tried to optimize the layout of a troops' cafeteria and to improve the overall efficiency of the dining process. Methods: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and the density of troops between the schemes. Last, an experiment on the dining process, with video observation and analysis, verified the simulation results. Results: In simulation, the dining time under the second new layout was shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference were both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. Conclusion: Our two new layout schemes were shown to be optimal by a series of simulations and space experiments. In future research, similar approaches could be applied while taking layout-design algorithm calculation into consideration.

Keywords: Troops’ cafeteria, layout optimization, dining efficiency, AnyLogic simulation, field experiment

116 Nanostructure of Gamma-Alumina Prepared by a Modified Sol-Gel Technique

Authors: Débora N. Zambrano, Marina O. Gosatti, Leandro M. Dufou, Daniel A. Serrano, M. Mónica Guraya, Soledad Perez-Catán

Abstract:

Nanoporous γ-Al2O3 samples were synthesized via a sol-gel technique, introducing changes to Yoldas' method. The aim of the work was to achieve effective control of the nanostructure properties and morphology of the final γ-Al2O3. The influence of the reagent temperature during hydrolysis was evaluated for water at 5 ºC and 98 ºC, and for the alkoxide at -18 ºC and room temperature. Sol-gel transitions were performed at 120 ºC and at room temperature. All γ-Al2O3 samples were characterized by X-ray diffraction, nitrogen adsorption and thermal analysis. Our results showed that the temperature of both the water and the alkoxide has little influence on the nanostructure of the final γ-Al2O3, giving a structure very similar to that of samples obtained by the reference method, as long as a reaction temperature above 75 ºC is reached soon enough. XRD characterization showed diffraction patterns corresponding to γ-Al2O3 for all samples. BET specific area values (253-280 m²/g) were also similar to those obtained by Yoldas' original method. The temperature of the sol-gel transition does not affect the resulting sample structure, and crystalline boehmite particles were identified in all dried gels. We analyzed the reproducibility of the samples' structure by preparing different samples under identical conditions; we found that performing the sol-gel transition at 120 ºC favors the production of more reproducible samples and also significantly reduces the time of the sol-gel reaction.

Keywords: Nanostructure alumina, boehmite, sol-gel technique, N2 adsorption/desorption isotherm, pore size distribution, BET area.

115 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform

Authors: Vijaya Prakash A. M., K. S. Gurumurthy

Abstract:

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a hardware architecture of a low-complexity Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is the method used for implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is first verified using MATLAB code. The VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180nm standard cells. Simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed analysis of power and area was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
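
The row-column (separable) structure the hardware uses, two 1-D DCT stages with a transposition in between, can be mirrored in a few lines; scipy's dct stands in here for the 1-D hardware blocks:

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block8x8):
    """Separable 2-D DCT: 1-D DCT along columns, then along rows."""
    return dct(dct(block8x8, norm="ortho", axis=0), norm="ortho", axis=1)

def idct2(coeffs):
    return idct(idct(coeffs, norm="ortho", axis=0), norm="ortho", axis=1)

block = np.random.rand(8, 8)
assert np.allclose(idct2(dct2(block)), block)     # reconstruction check
```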

Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Expert Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).

114 Gender Discrimination in Education in Croatia

Authors: Ivana Šalinović

Abstract:

The term gender emerged in the second half of the last century, and since then a growing body of research dealing with the topic has demonstrated its importance. Early research and theories addressing the topic focused on the differences between the terms sex and gender, where sex refers to the biological aspect of a person while gender refers to socially ascribed roles, attitudes, behaviors, etc. They also focused on gender discrimination, whose visible and invisible repercussions harm society. Educators at all educational levels should be among the agents of change, since they emotionally shape their students; considerable effort should therefore be put into implementing education about this topic in the standard curriculum. Beyond educators, it is also necessary to change the mindset of the younger generations, because they will be important agents in the further elimination of gender discrimination and thus in bringing about societal change. It is therefore very important to hear their voices and their experiences, and for these reasons this research was done: to see what second-year students at the private college Aspira in Croatia have gone through on their educational ladder. The hypothesis was that the findings would show a large difference between female and male students' experiences of gender discrimination and its effects, but the results actually show a very mixed picture, and the original hypothesis was somewhat disproved. Instead of finding that girls experienced a lot of gender discrimination, it turned out that it was the boys who believed that in their previous and current education time was not distributed equally between genders, who noticed that the language was not gender-sensitive and that teaching aids were not adapted to the genders. They were also the ones who pointed out that the discipline path was not the same for everyone, who were more influenced by the teacher's gender, and who experienced more gender discrimination.

Keywords: Gender, discrimination, elementary school, high school.

113 Enhancement of Rice Straw Composting Using UV Induced Mutants of Penicillium Strain

Authors: T. N. M. El Sebai, A. A. Khattab, Wafaa M. Abd-El Rahim, H. Moawad

Abstract:

Fungal mutant strains producing cellulase and xylanase enzymes induced enhanced hydrolysis of rice straw. The mutants were obtained by exposing a Penicillium strain to UV-light treatments. Screening and selection after the UV-light treatment were carried out using the cellulolytic and xylanolytic clear zone method to select hypercellulolytic and hyperxylanolytic mutants. These mutants were evaluated for their cellulase and xylanase enzyme production as well as their ability to biodegrade rice straw. The mutant 12 UV/1 produced 306.21% and 209.91% of the cellulase and xylanase, respectively, of the original wild-type strain, and showed a high capacity for rice straw degradation. The effectiveness of the tested mutant strain and that of the wild strain was compared with respect to enhancing the composting of a mixture of rice straw and animal manure. The results obtained showed that the compost product of the mixture inoculated with the mutant strain (12 UV/1) was the best, compared to the wild strain and the un-inoculated mixture. Analysis of the composted materials showed that the characteristics of the produced compost were close to those of high-quality standard compost. The results obtained in the present work suggest that the combination of rice straw and animal manure could be used to enhance the composting of rice straw, particularly when applied with a fungal decomposer that accelerates the composting process.

Keywords: Rice straw, composting, UV mutants, Penicillium.

112 A Modular On-line Profit Sharing Approach in Multiagent Domains

Authors: Pucheng Zhou, Bingrong Hong

Abstract:

How to coordinate the behaviors of agents through learning is a challenging problem in multi-agent domains. Because of its complexity, recent work has focused on how coordinated strategies can be learned. Here we are interested in using reinforcement learning techniques to learn the coordinated actions of a group of agents without requiring explicit communication among them. However, traditional reinforcement learning methods are based on the assumption that the environment can be modeled as a Markov Decision Process, which usually cannot be satisfied when multiple agents coexist in the same environment. Moreover, to effectively coordinate each agent's behavior so as to achieve the goal, it is necessary to augment the state of each agent with information about the other existing agents. But as the number of agents in a multi-agent environment increases, the state space of each agent grows exponentially, causing a combinatorial explosion. Profit sharing is one of the reinforcement learning methods that allows agents to learn effective behaviors from their experiences even in non-Markovian environments. In this paper, to remedy the drawback of the original profit sharing approach, namely that it needs much memory to store each state-action pair during the learning process, we first present an on-line rational profit sharing algorithm. We then integrate the advantages of a modular learning architecture with the on-line rational profit sharing algorithm and propose a new modular reinforcement learning model. The effectiveness of the technique is demonstrated on the pursuit problem.
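
The core of profit sharing in a short sketch: when an episode ends with reward R, every state-action pair on the episode is reinforced with geometrically decaying credit. The geometric credit is one standard choice satisfying the rationality condition for profit sharing; the paper's modular, on-line bookkeeping is not reproduced here:

```python
def profit_sharing_update(q, episode, reward, decay=0.5):
    """Reinforce every (state, action) on a rewarded episode.

    episode: list of (state, action) pairs, oldest first.
    """
    credit = reward
    for state, action in reversed(episode):          # most recent pair gets most credit
        q[(state, action)] = q.get((state, action), 0.0) + credit
        credit *= decay
    return q

# q = profit_sharing_update({}, [("s0", "a1"), ("s1", "a0")], reward=1.0)
```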

Keywords: Multi-agent learning, reinforcement learning, rational profit sharing, modular architecture.

111 Producing Outdoor Design Conditions Based on the Dependency between Meteorological Elements: Copula Approach

Authors: Zhichao Jiao, Craig Farnham, Jihui Yuan, Kazuo Emura

Abstract:

It is common to use outdoor design weather data to select the air-conditioning capacity in the building design stage. The meteorological elements of outdoor design weather data are usually selected based on their excess frequencies separately, while the dependency between the elements is not well considered. This means that the simultaneous occurrence probability of these elements is smaller than the original excess frequency, which may cause an overestimation when selecting air-conditioning capacity. Therefore, the copula approach, which can capture the dependency between multivariate data, was used to model the joint distributions of meteorological elements such as air temperature and global solar radiation. We suggest a method of selecting more credible outdoor design conditions based on the specific simultaneous occurrence probability of these two elements. Hourly weather data at 12 noon from 2001 to 2010 in Tokyo, Japan are used to analyze the dependency structure and the joint distribution; the Gaussian copula represents the dependence in the data best. Calculating the air temperature and global solar radiation at a specific simultaneous occurrence probability and at the common exceedance level shows that both values based on the simultaneous occurrence probability are lower than those based on the conventional method at the same probability.
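
A sketch of the key computation under a Gaussian copula: the probability that two elements exceed their design values simultaneously, from their marginal exceedance probabilities and the copula correlation (standard survival-probability algebra, not the paper's exact code):

```python
from scipy import stats

def joint_exceedance(p_t, p_r, rho):
    """p_t, p_r: marginal exceedance probabilities; rho: Gaussian copula correlation."""
    z_t, z_r = stats.norm.ppf(1 - p_t), stats.norm.ppf(1 - p_r)
    mvn = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
    # P(Z1 > a, Z2 > b) = 1 - F(a) - F(b) + F2(a, b)  (inclusion-exclusion)
    return 1 - stats.norm.cdf(z_t) - stats.norm.cdf(z_r) + mvn.cdf([z_t, z_r])
```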

Keywords: Copula approach, Design weather database, energy conservation, HVAC.

110 Energy Conscious Builder Design Pattern with C# and Intermediate Language

Authors: Kayun Chantarasathaporn, Chonawat Srisa-an

Abstract:

Design Patterns have gained more and more acceptance since their emergence in the software development world in the last decade, and have become another de facto standard of essential knowledge for Object-Oriented Programming developers. Their target usage, from the beginning, was regular computers, so minimizing power consumption was never a concern. In this decade, however, demand for more complicated software running on mobile devices has grown rapidly, as much higher-performance portable gadgets are continuously supplied to the market. To keep up with time to market, software development for power-conscious, battery-powered devices has shifted from specific low-level languages to higher-level ones. Currently, complicated software running on mobile devices is often developed in high-level languages that support OOP concepts. This drives the trend of embracing Design Patterns in the mobile world. However, using Design Patterns directly in software development for power-conscious systems is not recommended, because they were not originally designed for such environments. This paper demonstrates a Design Pattern adapted for power-limited systems. Because there are numerous original design patterns, it is not possible to cover them all at once, so this paper focuses on creating an energy-conscious version of the existing regular "Builder Pattern", appropriate for developing low-power-consumption software.
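
Since the paper's C#/Intermediate Language implementation is not reproduced here, the following Python sketch only shows the baseline Builder structure that the energy-conscious variant adapts; the product and its parts are hypothetical:

```python
class ReportBuilder:
    """Baseline Builder: assemble a complex product step by step through a
    fluent interface. The paper's energy-conscious variant adapts this
    structure, e.g. by deferring and batching work to save power."""

    def __init__(self):
        self._parts = []

    def add_header(self, title):
        self._parts.append(f"HEADER: {title}")
        return self

    def add_body(self, text):
        self._parts.append(f"BODY: {text}")
        return self

    def build(self):
        return "\n".join(self._parts)

# report = ReportBuilder().add_header("Usage").add_body("details").build()
```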

Keywords: Design Patterns, Builder Pattern, Low Power Consumption, Object Oriented Programming, Power Conscious System, Software.

109 A Case Study on the Value of Corporate Social Responsibility Systems

Authors: José M. Brotons, Manuel E. Sansalvador

Abstract:

The relationship between Corporate Social Responsibility (CSR) and financial performance (FP) is a subject of great interest that has not yet been resolved. In this work, we have developed a new and original tool to measure this relationship. The tool quantifies the value contributed to companies that are committed to CSR. The theoretical model used is the fuzzy discounted cash flow method. Two scenarios have been considered: in the first, the company has implemented the IQNet SR10 certification, and in the second, it has not. For the first scenario, the growth rate used for the time horizon is the rate maintained by the company after obtaining the IQNet SR10 certificate. For the second, both the company's growth rate prior to the implementation of the certification and the evolution of the sector are taken into account. By using triangular fuzzy numbers, it is possible to deal adequately with each company's forecasts as well as with the information corresponding to the sector. Once the annual growth rate of sales is obtained, the profit and loss accounts are generated from the annual sales estimates; the remaining elements of the account are obtained from their regression with net sales. The difference between these two valuations, made in a fuzzy environment, gives the value of the IQNet SR10 certification. Although this study presents an innovative methodology for quantifying the relation between CSR and FP, the authors are aware that only one company has been analyzed. This is precisely the main limitation of this study, which in turn opens up an interesting line for future research: broadening the sample of companies.
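
A minimal triangular-fuzzy discounted cash flow sketch: each yearly cash flow is a triangular fuzzy number (low, mode, high) and discounting is applied component-wise, which preserves the triangular shape for a positive crisp rate. The numbers in the usage line are illustrative, not the case-study data:

```python
def fuzzy_dcf(cash_flows, rate):
    """cash_flows: list of (low, mode, high) tuples per year; returns a TFN NPV."""
    npv = [0.0, 0.0, 0.0]
    for t, tfn in enumerate(cash_flows, start=1):
        for i in range(3):                      # discount each TFN component
            npv[i] += tfn[i] / (1 + rate) ** t
    return tuple(npv)

# fuzzy_dcf([(90, 100, 115), (95, 110, 130)], rate=0.08)
```

The value attributed to the certification is then the difference between the fuzzy NPV computed with the post-certification growth rate and the one computed with the pre-certification/sector rate.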

Keywords: Corporate social responsibility, case study, financial performance, company valuation.

108 Speech Enhancement Using Wavelet Coefficients Masking with Local Binary Patterns

Authors: Christian Arcos, Marley Vellasco, Abraham Alcaim

Abstract:

In this paper, we present a wavelet-coefficient masking approach based on Local Binary Patterns (WLBP) that enhances the temporal spectra of the wavelet coefficients for speech enhancement. This technique exploits the wavelet denoising scheme, which splits the degraded speech into pyramidal subband components and extracts frequency information without losing temporal information. Speech enhancement in each high-frequency subband is performed with binary labels, through local binary pattern masking that encodes the ratio between the original value of each coefficient and the values of its neighbour coefficients. This approach enhances the high-frequency spectra of the wavelet transform instead of eliminating them with a threshold. A comparative analysis is carried out against conventional speech enhancement algorithms, demonstrating that the proposed technique achieves significant improvements in terms of PESQ, an international recommendation for objectively estimating subjective speech quality. Informal listening tests also show that the proposed method improves the quality of speech in acoustic contexts, avoiding the annoying musical noise present in other speech enhancement techniques. Experimental results obtained with a DNN-based speech recognizer in noisy environments corroborate the superiority of the proposed scheme in the robust speech recognition scenario.
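
A skeleton of the wavelet-masking pipeline using pywt: decompose into subbands, build a binary label per detail coefficient from its ratio to the neighbour average, and reconstruct. The simple two-neighbour ratio test stands in for the paper's full LBP encoding, and the wavelet, depth and threshold are assumptions:

```python
import numpy as np
import pywt

def wavelet_ratio_mask(x, wavelet="db8", level=4, thr=1.5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    for k in range(1, len(coeffs)):                  # detail subbands only
        c = coeffs[k]
        neigh = (np.abs(np.roll(c, 1)) + np.abs(np.roll(c, -1))) / 2 + 1e-12
        keep = (np.abs(c) / neigh) < thr             # binary label from local ratio
        coeffs[k] = c * keep
    return pywt.waverec(coeffs, wavelet)
```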

Keywords: Binary labels, local binary patterns, mask, wavelet coefficients, speech enhancement, speech recognition.

107 Entrepreneurship Skills Acquisition through Education: Impact of the Nurturance of Knowledge, Skills, and Attitude on New Venture Creation

Authors: Satya Ranjan Acharya, Yamini Chandra

Abstract:

Entrepreneurship education in higher education has undergone a paradigm shift from the traditional classroom lecture method to a modern approach that emphasizes nurturing competencies and enhancing knowledge, skills and attitudes/abilities (KSA), which has a positive impact on the development of core capabilities. The present paper analyzes entrepreneurship education as a pedagogical intervention in the post-graduate program offered at the Entrepreneurship Development Institute of India, Gujarat, India. The study focuses on a model with special emphasis on developing KSA and its effect on nurturing the entrepreneurial spirit in students. The findings present a demographic and thematic assessment of the implemented pedagogical model, with the outcome of students choosing a career in new venture creation or in the growth/diversification of family-owned businesses. This research will be helpful for academicians, research scholars, potential entrepreneurs, ecosystem enablers and students in inferring the effectiveness of nurturing entrepreneurial skills and bringing about changes in personal attitudes by enhancing the knowledge and skills required for an entrepreneurial career. This research is original in nature, as it provides an in-depth insight into an implemented curriculum model focused on the development and nurturance of basic skills and its impact on the career choice of students.

Keywords: Attitude, entrepreneurship education, knowledge, new venture creation, pedagogical intervention, skills.
