Search results for: Original KNN
170 A Two-Step Approach for Tree-structured XPath Query Reduction
Authors: Minsoo Lee, Yun-mi Kim, Yoon-kyung Lee
Abstract:
XML data consists of a very flexible tree structure, which makes it difficult to store and retrieve XML data. The node numbering scheme is one of the most popular approaches to storing XML in relational databases. Together with the node numbering storage scheme, structural joins can be used to efficiently process the hierarchical relationships in XML. However, processing a tree-structured XPath query containing several hierarchical relationships and conditional expressions on XML data requires many structural joins, which results in a high query execution cost. This paper introduces mechanisms to reduce XPath queries containing branch nodes into a much more efficient form with fewer structural joins. A two-step approach is proposed. The first step merges duplicate nodes in the tree-structured query, and the second step divides the query into sub-queries, shortens the paths, and then merges the sub-queries back together. The proposed approach contributes substantially to the efficient execution of XML queries. Experimental results show that the proposed scheme can reduce the query execution cost by up to an order of magnitude relative to the original execution cost.
Keywords: XML, XPath, tree-structured query, query reduction.
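To make the first reduction step concrete, here is a minimal Python sketch of merging duplicate nodes in a tree-structured query. The node representation (a label plus a child list) and the merge rule are illustrative assumptions, not the paper's actual algorithm or data structures.

```python
# A minimal sketch of the first reduction step (merging duplicate nodes in a
# tree-structured query). The node representation and merge rule are
# illustrative assumptions, not the paper's exact algorithm.
from collections import defaultdict

def merge_duplicates(node):
    """Merge children of `node` that share the same label, recursively.

    A query node is a (label, children) tuple; merging two duplicate
    branches removes one structural join from the translated query plan.
    """
    label, children = node
    grouped = defaultdict(list)
    for child_label, grandchildren in children:
        grouped[child_label].extend(grandchildren)
    merged = [merge_duplicates((l, kids)) for l, kids in grouped.items()]
    return (label, merged)

# //book/author/name and //book/author/email share the book->author edge,
# so the two `author` branches collapse into one.
query = ("book", [("author", [("name", [])]),
                  ("author", [("email", [])])])
print(merge_duplicates(query))
# ('book', [('author', [('name', []), ('email', [])])])
```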
169 The Management in Large Emergency Situations – A Best Practice Case Study Based on GIS for Management of Evacuation
Authors: Ion Baş, Claudiu Zoicaş, Angela Ioniţâ
Abstract:
In most cases, natural disasters lead to the necessity of evacuating people. The quality of evacuation management is dramatically improved by the use of information provided by decision support systems, which become indispensable in large-scale evacuation operations. This paper presents a best-practice case study. In November 2007, officers from the Emergency Situations Inspectorate "Crisana" of Bihor County, Romania, participated in a cross-border evacuation exercise in which 700 people were evacuated from the Netherlands to Belgium. One of the main objectives of the exercise was to test four different decision support systems. Afterwards, based on that experience, a software system called TEVAC (Trans-Border Evacuation) was developed in-house by the experts of this institution. This original software system was successfully tested in September 2008 during the international exercise EU-HUROMEX 2008, whose scenario involved the real evacuation of 200 persons from Hungary to Romania. Based on the lessons learned and the results, since April 2009 the TEVAC software has been used by all Emergency Situations Inspectorates across Romania.
Keywords: Emergency evacuation, searching features, TEVAC (Trans-Border Evacuation) software system, user interface design.
168 The Robust Clustering with Reduction Dimension
Authors: Dyah E. Herwindiati
Abstract:
Clustering is the process of identifying homogeneous groups of objects, called clusters, and is an interesting topic in data mining. Objects in a group or class share similar characteristics. This paper discusses a robust clustering process for image data with two dimension reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to overcoming the dimensionality problem is dimension reduction, which transforms high-dimensional data into a lower-dimensional space with limited loss of information. One of the most common forms of dimensionality reduction is principal component analysis (PCA). 2DPCA is often called a variant of PCA: the image matrices are treated directly as 2D matrices and do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. The classical covariance matrix decomposition is very sensitive to outlying observations. The objective of this paper is to compare the performance of the robust minimizing vector variance (MVV) approach in the two-dimensional projection (2DPCA) and in PCA for clustering arbitrary image data when outliers are hidden in the data set. The simulation aspects of robustness and an illustration of image clustering are discussed at the end of the paper.
Keywords: Breakdown point, consistency, 2DPCA, PCA, outlier, vector variance.
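The abstract's description of 2DPCA (building the image covariance directly from 2D image matrices, with no vectorization) can be sketched in a few lines of numpy; the array shapes and the number of retained components below are assumptions.

```python
# A minimal numpy sketch of 2DPCA as described above: the image covariance
# matrix is built directly from 2D image matrices, with no vectorization.
import numpy as np

def two_d_pca(images, k):
    """images: (n, h, w) stack of images; returns a (w, k) projection basis."""
    mean = images.mean(axis=0)                      # (h, w) mean image
    centered = images - mean
    # Image covariance: average of (A_i - mean)^T (A_i - mean), shape (w, w)
    cov = sum(a.T @ a for a in centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    return eigvecs[:, ::-1][:, :k]                  # top-k eigenvectors

rng = np.random.default_rng(0)
imgs = rng.normal(size=(50, 32, 24))
X = two_d_pca(imgs, k=4)
features = imgs @ X          # (50, 32, 4): each image projected row-wise
print(features.shape)
```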
167 An Application for Risk of Crime Prediction Using Machine Learning
Authors: Luis Fonseca, Filipe Cabral Pinto, Susana Sargento
Abstract:
The growth of the world population, especially in large urban centers, has created new challenges, particularly in the control and optimization of public safety. In the present work, a solution is proposed for predicting criminal occurrences in a city based on historical incident data and demographic information. The entire research and implementation is presented, starting with data collection from its original source, the treatment and transformations applied to the data, and the choice, evaluation and implementation of the Machine Learning models, up to the application layer. Classification models are implemented to predict criminal risk for a given time interval and location. Machine Learning algorithms such as Random Forest, Neural Networks, K-Nearest Neighbors and Logistic Regression are used to predict occurrences, and their performance is compared according to the data processing and transformation used. The results show that the use of Machine Learning techniques helps to anticipate criminal occurrences, which contributes to the reinforcement of public security. Finally, the models were deployed on a platform that provides an API enabling other entities to request predictions in real time. An application is also presented in which criminal predictions can be shown visually.
Keywords: Crime prediction, machine learning, public safety, smart city.
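A hedged sketch of the model comparison described above, using scikit-learn on synthetic stand-in data, since the paper's incident and demographic dataset is not reproduced here:

```python
# Comparison of the four classifiers named in the abstract on synthetic
# data; feature names, grid cells and the target encoding are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Neural Network": MLPClassifier(max_iter=500, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=15),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    score = model.fit(X_tr, y_tr).score(X_te, y_te)  # held-out accuracy
    print(f"{name}: {score:.3f}")
```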
166 Computer Aided Drug Design and Studies of Antiviral Drug against H3N2 Influenza Virus
Authors: Aditi Shukla, Ambarish S. Vidyarthi, Subir Samanta
Abstract:
The worldwide prevalence of the H3N2 influenza virus and its increasing resistance to existing drugs necessitate the development of an improved, better-targeting anti-influenza drug. H3N2 influenza neuraminidase is one of the two membrane-bound proteins belonging to the group-2 neuraminidases. It is a key player in viral pathogenicity and hence an important target of anti-influenza drugs. Oseltamivir is one of the potent drugs targeting this neuraminidase. In the present work, we take the subtype N2 neuraminidase as the receptor and probable analogs of oseltamivir as drug molecules to study protein-drug interactions, in anticipation of finding an efficient modified candidate compound. Oseltamivir analogs were constructed by modifying the functional groups using the Marvin Sketch software and were docked using Schrodinger's Glide. Oseltamivir analog 10 was found to have a significant energy value (16% lower than Oseltamivir) and could be the probable lead molecule. This suggests that some of the modified compounds can interact in a novel manner, with increased hydrogen bonding at the active site of neuraminidase, and might be better than the original drug. Further work can be carried out, such as enzymatic inhibition studies and synthesizing and crystallizing the drug-target complex to analyze the interactions biologically.
Keywords: H3N2 influenza, neuraminidase, Oseltamivir analogs, structure-based drug design.
165 Aircraft Automatic Collision Avoidance Using Spiral Geometric Approach
Authors: M. Orefice, V. Di Vito
Abstract:
This paper describes a collision avoidance algorithm developed from the mathematical modeling of insect flight in terms of spiral and conchospiral geometric paths. It calculates a proper avoidance manoeuvre aimed at preventing the infringement of a predefined distance threshold between the ownship and the considered intruder, while minimizing the ownship's trajectory deviation from the original path and complying with the aircraft performance limitations and dynamic constraints. The algorithm is designed to be suitable for real-time applications, so that it can be considered for implementation in the most recent airborne automatic collision avoidance systems using traffic data received through an ADS-B IN device. The presented approach is able to take into account the rules of the air, thanks to a specifically designed decision-making logic, based on the encounter geometry, that selects the direction of the calculated collision avoidance manoeuvre so as to comply with the rules of the air, for instance the fundamental right-of-way rule. The paper presents the proposed collision avoidance algorithm and describes its preliminary design and software implementation. The applicability of this method has been demonstrated through preliminary simulation tests performed in a 2D environment considering single-intruder encounter geometries, as reported and discussed in the paper.
Keywords: collision avoidance, RPAS, spiral geometry, ADS-B based application
164 Comparison of Different Discontinuous PWM Techniques for Switching Losses Reduction in Modular Multilevel Converters
Authors: Kaumil B. Shah, Hina Chandwani
Abstract:
The modular multilevel converter (MMC) is one of the advanced topologies for medium- and high-voltage applications. In a high-power, high-voltage MMC, a large number of switching power devices are required, and these switching power devices (IGBTs) incur considerable switching losses. This paper analyzes the performance of different discontinuous pulse width modulation (DPWM) techniques and compares the results against a conventional carrier-based pulse width modulation method, in order to reduce the switching losses of an MMC. The DPWM reference wave can be generated by adding a zero-sequence component to the original (sine) reference modulation signal; the result of the addition gives the reference signal of the DPWM techniques. To minimize the switching losses of the MMC, the clamping period is controlled according to the absolute value of the output load current. No switching is generated during the clamping period, so the overall switching of the power devices is reduced. The simulation results of the different DPWM techniques are compared with the conventional carrier-based pulse width modulation technique.
Keywords: Modular multilevel converter, discontinuous pulse width modulation, switching losses, zero-sequence voltage.
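The zero-sequence injection step can be illustrated with a short numpy sketch. The clamping rule below (clamp whichever phase has the largest instantaneous absolute reference) corresponds to one common DPWM variant; the paper compares several variants, so this is an assumption rather than the exact scheme studied.

```python
# Generating a DPWM reference by zero-sequence injection: one illustrative
# clamping rule, not the paper's full set of compared variants.
import numpy as np

t = np.linspace(0, 0.02, 1000)           # one 50 Hz fundamental period
m = 0.9                                  # modulation index
phases = 2 * np.pi * np.array([0, -1/3, -2/3])[:, None]
v = m * np.sin(2 * np.pi * 50 * t + phases)   # three sine references, (3, N)

# Clamp the phase with the largest absolute value to +1 or -1 by adding
# the same zero-sequence offset to all three phases.
idx = np.argmax(np.abs(v), axis=0)
vmax = v[idx, np.arange(v.shape[1])]
zero_seq = np.sign(vmax) - vmax          # drives the dominant phase to +/-1
v_dpwm = v + zero_seq                    # DPWM reference, still within [-1, 1]

# No switching occurs while a phase sits clamped at +/-1, which is where
# the switching-loss reduction comes from.
print(np.max(np.abs(v_dpwm)))            # -> 1.0
```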
163 Criminal Law Instruments to Counter Corporate Crimes in Poland
Authors: Dorota Habrat
Abstract:
The aim of this study was to analyze the functioning of the new model of criminal corporate responsibility in Poland. The need to introduce liability of corporate bodies (collective entities) into the Polish legal system resulted, among other things, from the Republic of Poland's international commitments, in particular those related to membership in the European Union. The study showed that the responsibility of collective entities under the Act has a criminal nature. The main question concerns the ability of a collective entity to bear guilt in the criminal law sense. Polish criminal law has traditionally recognized only the responsibility of individual persons: guilt as a personal feature of an action, based on the offender's psychological capacity, could previously be considered only in relation to an individual person, a conviction that the said Act overturned. The guilt of a collective entity must be proven in at least one of three possible forms: guilt in selection, guilt in supervision, or so-called organizational guilt. In addition, the article resolves the issue of how the principle of proportionality should be applied to criminal measures imposed on collective entities. It should be remembered that the legal subjectivity of collective entities, including their rights and freedoms, is an emanation of the rights and freedoms of the individual persons who create collective entities and implement their rights and freedoms through them. The study demonstrated that the adopted Act largely reflects international legal regulations but also contains novel and original legislative solutions.
Keywords: Criminal corporate responsibility, Polish criminal law.
162 MPSO Based Model Order Formulation Technique for SISO Continuous Systems
Authors: S. N. Deepa, G. Sugumaran
Abstract:
This paper proposes a new version of Particle Swarm Optimization (PSO), namely Modified PSO (MPSO), for model order formulation of Single Input Single Output (SISO) linear time-invariant continuous systems. In general PSO, the movement of a particle is governed by three behaviors: inertia, cognitive and social. The cognitive behavior helps the particle to remember its previously visited best position. The Modified PSO technique splits the cognitive behavior into two parts: the previously visited best position and the previously visited worst position. This modification helps the particle to search for the target very effectively. The MPSO approach is used to formulate the reduced-order model; the method is based on minimizing the error between the transient responses of the original higher-order model and the reduced-order model for a unit step input. The results obtained are compared with earlier techniques to validate its ease of computation. The proposed method is illustrated through a numerical example from the literature.
Keywords: Continuous system, model order formulation, Modified Particle Swarm Optimization, Single Input Single Output, transfer function approach.
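A minimal sketch of the modified velocity update: the cognitive term is split into attraction toward the particle's best position and repulsion from its worst position. The coefficient values and test function are assumptions; this illustrates the update rule, not the paper's full model order formulation.

```python
# Modified PSO velocity update: best-position attraction plus
# worst-position repulsion. Coefficients are illustrative assumptions.
import numpy as np

def mpso(f, dim=2, n=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pworst = x.copy(), x.copy()
    fb, fw = f(x), f(x)
    gbest = x[np.argmin(fb)].copy()
    w, c1, c1w, c2 = 0.7, 1.0, 0.5, 1.5
    for _ in range(iters):
        r1, r1w, r2 = rng.random((3, n, dim))
        v = (w * v
             + c1 * r1 * (pbest - x)        # pull toward personal best
             + c1w * r1w * (x - pworst)     # push away from personal worst
             + c2 * r2 * (gbest - x))       # pull toward global best
        x = x + v
        fx = f(x)
        better, worse = fx < fb, fx > fw
        pbest[better], fb[better] = x[better], fx[better]
        pworst[worse], fw[worse] = x[worse], fx[worse]
        gbest = pbest[np.argmin(fb)].copy()
    return gbest, fb.min()

sphere = lambda x: (x ** 2).sum(axis=1)
print(mpso(sphere))   # converges near the origin
```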
161 Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis
Authors: Gaoyong Luo
Abstract:
The approach based on the wavelet transform has been widely used for image denoising due to its multi-resolution nature, its ability to produce high levels of noise reduction, and the low level of distortion introduced. However, in removing noise, high-frequency components belonging to edges are also removed, which blurs the signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 x 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting wavelet spatial-frequency decomposition and reconstruction is developed, with the advantages of being computationally efficient and minimizing boundary effects. Adaptive thresholding by local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image corresponding to the boundaries of objects. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise and is suitable for adaptation to different classes of images and types of noise. The proposed algorithm offers a potential solution, with parallel computation, for real-time or embedded system applications.
Keywords: Edge strength, fast lifting wavelet, image denoising, local variance.
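A PyWavelets sketch of the wavelet-shrinkage skeleton the method builds on (decompose, threshold detail coefficients, reconstruct). The paper's per-block local-variance and edge-strength adaptation is replaced here by a single global threshold, so this is only a simplified stand-in.

```python
# Wavelet shrinkage skeleton: decompose, soft-threshold details, reconstruct.
# A single universal threshold stands in for the paper's adaptive scheme.
import numpy as np
import pywt

def denoise(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband (MAD)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))      # universal threshold
    new_coeffs = [coeffs[0]]                         # keep approximation
    for details in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thr, mode="soft")
                                for d in details))
    return pywt.waverec2(new_coeffs, wavelet)

rng = np.random.default_rng(1)
clean = np.outer(np.hanning(64), np.hanning(64))
noisy = clean + 0.05 * rng.normal(size=clean.shape)
print(np.abs(denoise(noisy) - clean).mean() < np.abs(noisy - clean).mean())
```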
160 A New Image Psychovisual Coding Quality Measurement Based on Region of Interest
Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf
Abstract:
To model the human visual system (HVS) in the region of interest, we propose a new objective metric adapted to wavelet foveation-based image compression quality measurement, which exploits a foveation setup filter implementation technique in the DWT domain based on the point and region of fixation of the human eye. This model is then used to predict the visible differences between an original and a compressed image with respect to this region, and yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.
Keywords: Human Visual System, Image Quality, Image Compression, foveation wavelet, region of interest (ROI).
159 The Concept of an Agile Enterprise Research Model
Authors: Maja Sajdak
Abstract:
The aim of this paper is to present the concept of an agile enterprise model and to initiate discussion on the research assumptions of the model presented. The implementation of the research project "The agility of enterprises in the process of adapting to the environment and its changes" began in August 2014 and is planned to last three years. The article takes the form of a work-in-progress paper that aims to verify and initiate a debate over the proposed research model. In the literature there are very few publications relating to research into agility, and it can be concluded that the most controversial issue in this regard is the method of measuring agility. In previous studies the operationalization of agility was often fragmentary, focusing only on selected areas of agility, for example manufacturing, or analysing only selected sectors. As a result, the measures created to date can only be treated as contributory to the development of precise measurement tools. This research project aims to fill a cognitive gap in the literature with regard to the conceptualization and operationalization of an agile company. Thus, the original contribution of the author of this project is the construction of a theoretical model that integrates manufacturing agility (consisting mainly of adaptation to the environment) and strategic agility (based on proactive measures). The author of this research project is primarily interested in the attributes of an agile enterprise which indicate that the company is able to adapt rapidly to changing circumstances and behave proactively.
Keywords: Agile company, acuity, entrepreneurship, flexibility, research model, strategic leadership.
158 A Study on Explicitation Strategies Employed in Persian Subtitling of English Crime Movies
Authors: Hossein Heidari Tabrizi, Azizeh Chalak, Hossein Enayat
Abstract:
The present study investigates the application of the expansion strategy in Persian subtitles of English crime movies. More precisely, it aims at classifying the different types of expansion used in subtitles as well as investigating the appropriateness or inappropriateness of the application of each type. To this end, three movies, namely The Net (1995), Contact (1997) and Mission Impossible 2 (2000), available with Persian subtitles, were selected for the study. To collect the data, the above-mentioned movies were watched and those parts of the Persian subtitles in which expansion had been used were identified and extracted along with their English dialogs. The extracted Persian subtitles were then classified based on the reason that led to expansion in each case. Next, the appropriateness or inappropriateness of using expansion in the extracted Persian subtitles was descriptively investigated. Finally, an equivalent not containing any expansion was proposed for those cases in which the meaning could be fully transferred without this strategy. The findings of the study indicated that the reasons range from explicitation (of visual, co-textual and contextual information), mistranslation and paraphrasing to the preferences of subtitlers. Furthermore, it was found that the employment of the expansion strategy was inappropriate in all cases except those motivated by explicitation of contextual information, since correct and shorter equivalents, equally capable of conveying the intended meaning, could be posited for the original dialogs.
Keywords: Audiovisual translation, English crime movies, expansion strategies, Persian subtitles.
157 High Accuracy ESPRIT-TLS Technique for Wind Turbine Fault Discrimination
Authors: Saad Chakkor, Mostafa Baghouri, Abderrahmane Hajraoui
Abstract:
The ESPRIT-TLS method appears to be a good choice for high-resolution fault detection in induction machines, as it is highly effective in frequency and amplitude identification. However, it presents a high computational complexity, which hampers its implementation in real-time fault diagnosis. To avoid this problem, a Fast-ESPRIT algorithm combining an IIR band-pass filtering technique, a decimation technique and the original ESPRIT-TLS method is employed to accurately extract frequencies and their magnitudes from the wind turbine stator current at a lower computational cost. The proposed algorithm addresses the wind turbine machine's need for online, fast and proactive condition monitoring. This type of remote and periodic maintenance provides an acceptable machine lifetime, minimizes downtime and maximizes productivity. The developed technique has been evaluated by computer simulations under many fault scenarios. The results demonstrate the performance of Fast-ESPRIT, offering rapid, high-resolution harmonic recognition with minimum computation time and memory cost.
Keywords: Spectral Estimation, ESPRIT-TLS, Real Time, Diagnosis, Wind Turbine Faults, Band-Pass Filtering, Decimation.
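The preprocessing stage that makes Fast-ESPRIT fast (IIR band-pass filtering followed by decimation) can be sketched with scipy; the filter order, band edges and rates below are illustrative assumptions.

```python
# Fast-ESPRIT preprocessing sketch: isolate the fault-frequency band with
# an IIR band-pass filter, then decimate so the subspace (ESPRIT) step
# runs on far fewer samples. All numeric choices are assumptions.
import numpy as np
from scipy import signal

fs = 10_000                                   # original sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Stator current: 50 Hz fundamental plus a small fault harmonic at 120 Hz
i_s = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 120 * t)
i_s += 0.01 * np.random.default_rng(0).normal(size=t.size)

# IIR band-pass around the band where fault signatures are expected
sos = signal.butter(4, [80, 160], btype="bandpass", fs=fs, output="sos")
filtered = signal.sosfiltfilt(sos, i_s)       # zero-phase filtering

reduced = signal.decimate(filtered, 10)       # anti-aliased downsampling
print(i_s.size, "->", reduced.size)           # 10000 -> 1000 samples for ESPRIT
```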
156 The Ecological Footprint of Tourism in Jalapão/TO/Brazil
Authors: Mary L. G. S. Senna, Afonso R. Aquino
Abstract:
The development of tourism causes negative impacts on the environment. In this context, using the Ecological Footprint (EF) method, this study aimed to characterize the impacts of ecotourism on the community of Mateiros, Jalapão, Brazil. The EF was originally conceived as a method for constructing a land-use matrix, considering major categories of human consumption such as food, housing, transportation, consumer goods and services, together with six further categories of land use: land use, degraded environment, gardens, fertile land, pasture, and forests protected by the government. The main objective of this index is to calculate the land area required for the production and maintenance of the goods and services consumed by a community. The field research was conducted throughout 2014 and until July 2015. After the calculations for each category, the components were added according to the presented method in order to determine the annual EF of the tourism sector in Mateiros. The results show that the EF resulting from tourism in Mateiros is 2,194.22 hectares of land required annually for tourism activities in the region, the area needed to absorb the CO2 emissions generated directly by the tourism sector. This EF of tourism was considered high.
Keywords: Sustainable tourism, tourism ecological footprint, Jalapão/TO/Brazil.
155 Creative Art Practice in Response to Climate Change: How Art Transforms and Frames New Approaches to Speculative Ecological and Sustainable Futures
Authors: Wenwen Liu, Robert Burton, Simon McKeown
Abstract:
Climate change is seriously threatening human security and development, leading to global warming and economic, political, and social chaos. Many artists have created visual responses that challenge perceptions of climate change, actively guiding people to think about climate issues and the potential crises that follow urban industrialization, and to explore positive solutions. This project is an interdisciplinary and intertextual study in which art practice is informed by culture, philosophy, psychology, ecology, and science. By correlating theory and artistic practice, it studies how art practice creates a visual way of understanding climate issues and uses art as a way of exploring speculative futures. In the context of practice-based research, arts-based practice as research and creative practice as interdisciplinary research are applied alternately to seek original solutions and new knowledge. Through creative art practice, this project has established visual ways of looking at climate change and has developed them into a model for generating more possibilities, an alternative social imagination. It not only encourages people to think about and find a sustainable speculative future conducive to all species, but also shows that people have the ability to realize positive futures.
Keywords: Climate change, creative practice as interdisciplinary research, arts-based practice as research, creative art practice, speculative future.
154 Intensifier as Changed from the Impolite Word in Thai
Authors: Methawee Yuttapongtada
Abstract:
An intensifier is a linguistic device, found in many languages, that enhances or adds quantity, quality or emotion to words. Each language in the world has intensifying devices that are both similar to and different from those of other languages. Thai, in particular, uses a wide variety of intensifying devices, one of which is the use of impolite words, or words with negative meanings, as intensifiers. The data in this study were collected from spoken language, gathering intensifiers regarded as impolite words because, in other contexts, these words are held to be rude words, swear words or words with negative meanings. A diachronic study was then carried out to trace the historical change, explaining the original meanings and the contexts of use of these words from the past to the present by means of textual documents and dictionaries available in different periods. It was found that, in semantic and pragmatic terms, subjectification is a significant motivation for the change of impolite words into intensifiers, and this accounts for the pathway of the semantic change of these words. Moreover, the tendency to use impolite words, or words with negative meanings, as intensifiers is increasing. This phenomenon is common in many languages of the world, and the results of this research may support the belief that human language is universal and reflect that humans fundamentally share the same ways of thinking.
Keywords: Impolite word, intensifier, Thai, semantic change.
153 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman
Abstract:
With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis, due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues, improving the semantic interpretation of HSI. In order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In a second step, knowledge is extracted based on the adjacency graph to describe the different pixels. From the transformation matrix obtained with TLPP, a weighted matrix is constructed to rank the different spectral bands according to their contribution score, and the relevant bands are adaptively selected based on this weighted matrix. The performance of the presented approach has been validated in several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. The experimental results also show that this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
Keywords: Band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation.
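A hedged numpy sketch of ranking spectral bands by a contribution score derived from a projection matrix. TLPP itself is not reproduced here; PCA loadings stand in for the TLPP transformation matrix, so this only illustrates the weighted-matrix ranking idea.

```python
# Band ranking from a projection matrix; PCA loadings are an illustrative
# stand-in for the TLPP transformation matrix used in the paper.
import numpy as np

def rank_bands(cube, n_components=5):
    """cube: (rows, cols, bands) hyperspectral image."""
    X = cube.reshape(-1, cube.shape[-1])          # pixels x bands
    Xc = X - X.mean(axis=0)
    # Projection matrix from the top right-singular vectors (band loadings)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T                       # (bands, n_components)
    score = (W ** 2).sum(axis=1)                  # per-band contribution
    return np.argsort(score)[::-1]                # best bands first

cube = np.random.default_rng(2).normal(size=(20, 20, 50))
print(rank_bands(cube)[:10])                      # indices of 10 top-ranked bands
```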
152 Optimizing the Fuzzy C-Means Clustering Algorithm Using GA
Authors: Mohanad Alata, Mohammad Molhim, Abdullah Ramini
Abstract:
The Fuzzy C-Means clustering algorithm (FCM) is a method frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it is not capable of specifying the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) to a conventional value of 2, which might not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. To obtain an optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm that minimize the least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm: the iterative search approach and genetic algorithms. The above-mentioned approach is tested on data generated from the original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models.
Keywords: Fuzzy clustering, Fuzzy C-Means, genetic algorithm, Sugeno fuzzy systems.
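A compact numpy implementation of FCM with an explicit weighting exponent m, plus a simple grid search over m as an illustrative stand-in for the iterative-search and genetic-algorithm optimizers compared in the paper:

```python
# FCM with an explicit weighting exponent m; a grid search over m stands
# in for the paper's iterative-search and GA optimizers.
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                            # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = Um @ X / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None] - centers[:, None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=0)                        # membership update
    return centers, U

def objective(X, centers, U, m):
    d2 = ((X[None] - centers[:, None]) ** 2).sum(axis=2)
    return ((U ** m) * d2).sum()                  # fuzzy objective J_m

X = np.vstack([np.random.default_rng(i).normal(i * 4, 1, (100, 2))
               for i in range(3)])
for m in (1.5, 2.0, 2.5, 3.0):                    # search the exponent
    centers, U = fcm(X, c=3, m=m)
    print(m, round(objective(X, centers, U, m), 1))
```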
151 ANN Based Currency Recognition System Using Compressed Gray Scale and Application for Sri Lankan Currency Notes - SLCRec
Authors: D. A. K. S. Gunaratna, N. D. Kodikara, H. L. Premaratne
Abstract:
Automatic currency note recognition invariably depends on the currency note characteristics of a particular country, and the extraction of features directly affects the recognition ability. Sri Lanka has not previously been involved in any research or implementation of this kind. The proposed system, "SLCRec", offers a solution focused on minimizing the false rejection of notes. Sri Lankan currency notes undergo severe changes in image quality during usage. Hence, a special linear transformation function is adopted to wipe out noise patterns from backgrounds without affecting the notes' characteristic images, and to recover the images of interest. The transformation maps the original gray-scale range into a smaller range of 0 to 125. Applying edge detection after the transformation provides better robustness to noise and a fair representation of edges for both new and old damaged notes. A three-layer back-propagation neural network is presented, taking as input the number of edges detected in row order of the notes, and classification is performed over the four classes of interest: 100, 500, 1000 and 2000 rupee notes. The experiments showed good classification results and proved that the proposed methodology is capable of separating classes properly under varying image conditions.
Keywords: Artificial intelligence, linear transformation, pattern recognition.
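The two preprocessing steps (compressing the gray range to 0-125, then counting detected edges per row as the network input) can be sketched as follows; the exact transformation and edge detector used in SLCRec are not fully specified above, so both are assumptions.

```python
# Gray-range compression and per-row edge counts as ANN input features.
# The simple gradient edge detector is an illustrative stand-in.
import numpy as np

def compress_gray(img):
    return img.astype(float) * 125.0 / 255.0      # map [0, 255] -> [0, 125]

def row_edge_counts(img, thresh=10.0):
    g = compress_gray(img)
    # Horizontal-gradient edge detector (stand-in for e.g. Sobel)
    grad = np.abs(np.diff(g, axis=1))
    return (grad > thresh).sum(axis=1)            # edges per row -> ANN input

note = np.random.default_rng(3).integers(0, 256, (64, 128))
features = row_edge_counts(note)
print(features.shape)                             # (64,) one count per row
```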
150 Transform-Domain Rate-Distortion Optimization Accelerator for H.264/AVC Video Encoding
Authors: Mohammed Golam Sarwer, Lai Man Po, Kai Guo, Q.M. Jonathan Wu
Abstract:
In H.264/AVC video encoding, rate-distortion optimization for mode selection plays a significant role in achieving outstanding compression efficiency and video quality. However, this mode selection process also makes encoding extremely complex, especially the computation of the rate-distortion cost function, which includes computing the sum of squared differences (SSD) between the original and reconstructed image blocks and context-based entropy coding of the block. In this paper, a transform-domain rate-distortion optimization accelerator based on a fast SSD (FSSD) and a VLC-based rate estimation algorithm is proposed. This algorithm significantly simplifies the hardware architecture for the rate-distortion cost computation with only negligible performance degradation. An efficient hardware structure for implementing the proposed transform-domain rate-distortion optimization accelerator is also proposed. Simulation results demonstrate that the proposed algorithm reduces total encoding time by about 47% with negligible degradation of coding performance. The proposed method can easily be applied to many mobile video applications such as digital cameras and DMB (Digital Multimedia Broadcasting) phones.
Keywords: Context-adaptive variable length coding (CAVLC), H.264/AVC, rate-distortion optimization (RDO), sum of squared differences (SSD).
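The reason SSD can be evaluated in the transform domain is Parseval's relation: for an orthonormal transform, the sum of squared coefficient differences equals the pixel-domain SSD, so the inverse transform can be skipped. The sketch below demonstrates the principle with an orthonormal DCT; it is not the paper's specific FSSD approximation for the H.264 integer transform.

```python
# Parseval check: transform-domain SSD equals pixel-domain SSD for an
# orthonormal transform, here a 2D orthonormal DCT.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(4)
orig = rng.integers(0, 256, (4, 4)).astype(float)
recon = orig + rng.normal(0, 2, (4, 4))           # "reconstructed" block

ssd_pixel = ((orig - recon) ** 2).sum()
C1 = dctn(orig, norm="ortho")                     # orthonormal 2D DCT
C2 = dctn(recon, norm="ortho")
ssd_transform = ((C1 - C2) ** 2).sum()
print(np.isclose(ssd_pixel, ssd_transform))       # True (Parseval)
```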
149 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves
Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira
Abstract:
Foliage diseases can cause a reduction in both the quality and the quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help in monitoring large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease through the analysis of digital images of tomato plants, collected directly from the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for identifying healthy regions of the tomato leaf, while the other identifies injured regions. The outputs of both networks are combined to generate the final classification of each pixel, and the pixel classes are used to repaint the original tomato images using a color representation that highlights the injuries on the plant. The new images contain only green, red or black pixels, according to whether they came from healthy portions of the leaf, injured portions, or the background of the image, respectively. The system presented an accuracy of 97% in detecting and estimating the level of damage caused to tomato leaves by late blight.
Keywords: Artificial neural networks, digital image processing, pattern recognition.
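The repainting step described above is easy to illustrate: each pixel's class (background, healthy, injured) indexes a three-color palette. In the sketch below a trivial channel-dominance rule stands in for the two trained MLPs, which are not reproduced here.

```python
# Class-to-color repaint; a toy channel-dominance rule replaces the MLPs.
import numpy as np

def repaint(classes):
    """classes: (h, w) array of 0=background, 1=healthy, 2=injured."""
    palette = np.array([[0, 0, 0],        # background -> black
                        [0, 255, 0],      # healthy    -> green
                        [255, 0, 0]],     # injured    -> red
                       dtype=np.uint8)
    return palette[classes]

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (8, 8, 3), dtype=np.uint8)
# Toy classifier: "healthy" if green dominates, "injured" if red dominates.
g_dom = img[..., 1] > img[..., [0, 2]].max(axis=-1)
r_dom = img[..., 0] > img[..., [1, 2]].max(axis=-1)
classes = np.where(g_dom, 1, np.where(r_dom, 2, 0))
print(repaint(classes).shape)                     # (8, 8, 3)
```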
148 Choosing Local Organic Food: Consumer Motivations and Ethical Spaces
Authors: Artur Saraiva, Moritz von Schwedler, Emília Fernandes
Abstract:
In recent years, the organic sector has grown significantly. However, with the ‘conventionalization’ of these products, it has been questioned whether they are losing their original vision. Accordingly, this research, based on 31 phenomenological interviews with committed organic consumers in urban and rural areas of Portugal, aims to analyse how ethical motivations and ecological awareness are related to organic food consumption. The thematic content analysis highlights aspects related to societal and environmental concerns. On an individual level, the interviews stressed the importance of the internal coherence, peace of mind and balance that these consumers find in the consumption of local organic products. For these consumers, local organic consumption brought significant changes to their lives, aided the establishment of a green identity, and involves a certain philosophy of life. This vision of an organic lifestyle is grounded in a political and ecological perspective that goes beyond the usual definition of organic, as a ‘post-organic era’. The paper contributes to a better understanding of how an ideological environmental discourse highlights the relationship between consumers' environmental concerns and the politics of food, suggesting a possible transition to new sustainable consumption practices.
Keywords: Organic consumption, localism, content thematic analysis, pro-environmental discourse, political consumption, Portugal.
147 Approach for Demonstrating Reliability Targets for Rail Transport during Low Mileage Accumulation in the Field: Methodology and Case Study
Authors: Nipun Manirajan, Heeralal Gargama, Sushil Guhe, Manoj Prabhakaran
Abstract:
In the railway industry, train sets are designed based on contractual requirements (the mission profile), where reliability targets are measured in terms of mean distance between failures (MDBF). However, at the beginning of revenue service, trains do not achieve the designed mission profile distance (mileage) within the expected timeframe, due to infrastructure constraints, scarcity of commuters or other operational challenges, and thereby do not respect the original design inputs. Since the trains do not run sufficiently and do not accumulate the designed mileage within the specified time, the car builder runs the risk of not achieving the contractual MDBF target. This paper proposes a constant-failure-rate model for situations where the designed mileage accumulation is not achieved. The model provides an appropriate MDBF target to be demonstrated based on the actual accumulated mileage. A case study of rolling stock running in the field is undertaken to analyze the failure data and the MDBF target demonstration under low mileage accumulation. The results of the case study show that, with the proposed method, reliability targets are achieved under low mileage accumulation.
Keywords: Mean distance between failures, mileage-based reliability, reliability target normalization, rolling stock reliability.
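Under a constant-failure-rate (exponential) assumption, a standard way to demonstrate an MDBF value from limited accumulated mileage is a one-sided chi-square lower confidence bound. The sketch below shows that textbook calculation, which may differ in detail from the paper's normalization model; all numbers are illustrative.

```python
# Standard constant-failure-rate demonstration: with exponential failures,
# a one-sided lower confidence bound on MDBF follows from the chi-square
# distribution (time-terminated test). Numbers are invented for illustration.
from scipy.stats import chi2

def mdbf_lower_bound(mileage_km, failures, confidence=0.9):
    """Lower confidence bound on MDBF from accumulated fleet mileage."""
    return 2 * mileage_km / chi2.ppf(confidence, 2 * failures + 2)

# Example: 250,000 km accumulated (well below the design mission profile)
# with 2 service failures observed.
print(round(mdbf_lower_bound(250_000, 2)))        # demonstrated MDBF (km)
```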
146 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic steps of the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) establishing the interior camera orientation parameters; (3) determining the relative orientation parameters of each video frame with respect to the others; (4) finding the absolute orientation parameters using a self-calibrating bundle adjustment with the aid of a mathematical model. The ortho-rectified video frames are then mosaicked together to produce a mosaic image of the test area, which is merged with a well-referenced existing digital map for the purposes of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to evaluate our method. Video and telemetry data were collected for about fifteen minutes and processed using the four-step ortho-rectification procedure. The results demonstrate that geometric measurement of the control field from ortho-images is more accurate than from the original perspective images when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, compared with six control points measured by a GPS receiver, is between 3 and 5 metres.
Keywords: Geo-referencing, ortho-rectification, video frame, self-calibration, UAV, target tracking.
145 Model Order Reduction of Linear Time Variant High Speed VLSI Interconnects Using Frequency Shift Technique
Authors: J. V. R. Ravindra, M. B. Srinivas
Abstract:
Accurate modeling of high-speed RLC interconnects has become a necessity for addressing signal integrity issues in current VLSI design. To accurately model a dispersive system of interconnects at higher frequencies, a full-wave analysis is required. However, conventional circuit simulation of interconnects with full-wave models is extremely CPU-expensive. We present an algorithm for reducing large VLSI circuits to much smaller ones with similar input-output behavior. A key feature of our method, called the Frequency Shift Technique, is that it is capable of reducing linear time-varying systems. This enables it to capture frequency-translation and sampling behavior, which is important in communication subsystems such as mixers, RF components and switched-capacitor filters. Reduction is obtained by projecting the original system, described by linear differential equations, into a lower dimension. Experiments carried out using the Cadence Design Simulator indicate that the proposed technique achieves a greater percentage reduction with less CPU time than other model order reduction techniques in the literature. We also present applications to RF circuit subsystems, obtaining size reductions and evaluation speedups of orders of magnitude with insignificant loss of accuracy.
Keywords: Model order reduction, RLC, crosstalk.
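Projection-based reduction itself is easy to sketch: the state vector is approximated as x ≈ Vz for an orthonormal basis V, giving a small projected system. The Krylov basis below is a generic illustration; the paper's Frequency Shift Technique for linear time-varying systems is not reproduced.

```python
# Generic projection-based model order reduction: project (A, B, C) onto
# an orthonormalized Krylov basis. Illustrative only; not the paper's
# Frequency Shift Technique for time-varying systems.
import numpy as np

def reduce_model(A, B, C, k):
    """Project the state-space system onto a k-dimensional Krylov subspace."""
    n = A.shape[0]
    K = np.column_stack([np.linalg.matrix_power(A, i) @ B for i in range(k)])
    V, _ = np.linalg.qr(K)                        # orthonormal basis (n, k)
    return V.T @ A @ V, V.T @ B, C @ V            # reduced (k, k) system

rng = np.random.default_rng(6)
n = 50
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))    # stable-ish test system
B, C = rng.normal(size=(n, 1)), rng.normal(size=(1, n))
Ar, Br, Cr = reduce_model(A, B, C, k=8)
print(Ar.shape)                                   # (8, 8)
```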
144 A Test Methodology to Measure the Open-Loop Voltage Gain of an Operational Amplifier
Authors: Maninder Kaur Gill, Alpana Agarwal
Abstract:
It is not practically feasible to measure the open-loop voltage gain of an operational amplifier in the open-loop configuration, because the open-loop voltage gain is very large: to avoid saturating the output, the operational amplifier would have to be driven with an input too small to be measured by a digital multimeter. A test circuit for measuring the open-loop voltage gain of an operational amplifier is proposed and verified using simulation tools as well as experimentally on a breadboard. The main advantage of this test circuit is that it is simple, fast, accurate, cost-effective, and easy to handle even on a breadboard, requiring only the device under test (DUT) and resistors. The circuit has been used to measure the open-loop voltage gain of different operational amplifiers. The underlying goal is to design testable circuits for various analog devices that are simple to realize in VLSI systems, give accurate results, and do not change the characteristics of the original system. The DUTs used are the LM741CN and the UA741CP. For the LM741CN, the simulated gain and the experimentally measured gain (average) are 89.71 dB and 87.71 dB, respectively. For the UA741CP, the simulated gain and the experimentally measured gain (average) are 101.15 dB and 105.15 dB, respectively. These values are close to the datasheet values.
Keywords: Device under test, open-loop voltage gain, operational amplifier, test circuit.
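Numerically, such test circuits work by attenuating a measurable drive voltage down to the microvolt-to-millivolt level that the open loop can tolerate; the divider values and measured voltages below are invented for illustration and are not taken from the paper.

```python
# Recovering the open-loop gain from an attenuated drive: the divider
# values and measured voltages are hypothetical example numbers.
import math

R1, R2 = 100_000.0, 100.0        # assumed divider: attenuation ~ 1001
attenuation = (R1 + R2) / R2

v_drive = 0.3                    # volts, easily measured at the divider input
v_in = v_drive / attenuation     # ~0.3 mV effective input at the op-amp
v_out = 9.09                     # volts, measured output (example value)

a_ol = v_out / v_in
print(f"A_OL = {a_ol:.0f} ({20 * math.log10(a_ol):.2f} dB)")  # ~89.6 dB
```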
143 An Experimental Study to Mitigate Swelling Pressure of Expansive Tabuk Shale, Saudi Arabia
Authors: A. A. Embaby, A. Abu Halawa, M. Ramadan
Abstract:
In the Kingdom of Saudi Arabia, there are several developed areas where expansive soil exists in the form of layers of variable thickness. Severe distress to infrastructure can be caused by the heave and swelling pressure that develop in this kind of expansive shale. Among the various techniques for expansive soil mitigation, the removal-and-replacement technique is very popular for lightly loaded structures and shallow foundations. This paper presents the results of an experimental study conducted to evaluate the effect of the type and thickness of cushion soils on the mitigation of the swelling characteristics of expansive shale. Seven undisturbed shale samples, collected from the Al Qadsiyah district in the town of Tabuk, north of the Kingdom of Saudi Arabia, were treated with two types of cushion coarse-grained sediments (CCS): sand and gravel. Each type was tested at three thicknesses: 22%, 33% and 44% of the depth of the active zone. The test results indicated that replacing expansive shale with CCS reduces the swelling potential and pressure, and that the reduction depends on the type and thickness of the CCS. Removing the original expansive shale and replacing it with a cushion of sand at 44% thickness reduced the swelling potential and pressure by about 53.29% and 62.78%, respectively.
Keywords: Cushion coarse-grained sediments, expansive soil, Saudi Arabia, swelling pressure, Tabuk Shale.
142 Effect of Cooling Rate on Base Metals Recovery from Copper Matte Smelting Slags
Authors: N. Tshiongo, R. K. K. Mbaya, K. Maweja, L. C. Tshabalala
Abstract:
A slag sample from a copper smelting operation in a water-jacket furnace at a DRC plant was used. The study set out to determine the effect of cooling on the extraction of base metals. The cooling methods investigated were water quenching, air cooling and furnace cooling, and these were compared to the as-received slag. It was observed that the cooling rate of the slag affected the leaching of base metals, as it changed the phase distribution in the slag and the distribution of base metals within the phases. It was also found that fast cooling of the slag prevented crystallization and produced an amorphous phase that encloses the base metals. The amorphous slags from the slag dumps were more leachable in acidic medium (HNO3), which leached 46% Cu, 95% Co, 85% Zn, 92% Pb and 79% Fe with no selectivity at pH 0, than in basic medium (NH4OH). The leachability was reversed for the slags modified by quenching in water, which leached 89% Cu with high selectivity, metal extractions being less than 1% for Co, Zn, Pb and Fe at ambient temperature and pH 12. For the crystallized slags, the leaching of base metals increased as the temperature rose from ambient to 60°C and decreased at the higher temperature of 80°C, due to evaporation of the ammonia solution used for basic leaching; the total amounts of base metals leached from slow-cooled slags were very low compared to the quenched slag samples.
Keywords: Copper slag, leaching, amorphous, cooling rate.
141 Automatic Detection of Defects in Ornamental Limestone Using Wavelets
Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas
Abstract:
A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark-colored spots, crystal zones trapped in the stone, areas of abnormal contrast or color, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is a wavelet decomposition executed on two instances of the original image, to detect both hypotheses: dark and clear defects. The existence and/or size of these defects are the gauge for classifying the quality grade of the stone products. The parameter tuning possible in the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the defect dimensions allowed.
Keywords: Automatic detection, wavelets, defects, fracture lines.