Search results for: Knowledge representation.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2336

1346 Resilient Manufacturing: Use of Augmented Reality to Advance Training and Operating Practices in Manual Assembly

Authors: L. C. Moreira, M. Kauffman

Abstract:

This paper outlines the results of experimental research on deploying an emerging augmented reality (AR) system for real-time task assistance (or work instructions) in highly customised and high-risk manual operations. The focus is on human operators’ training effectiveness and performance, and the aim is to test whether such technologies can help raise knowledge retention levels and the accuracy of task execution to improve health and safety (H&S). An AR-enhanced assembly method is proposed and experimentally tested using a real industrial process as a case study: electric vehicle (EV) battery module assembly. The experimental results revealed that the proposed method improved training practices and performance, with knowledge retention levels increasing from 40% to 84% and accuracy of task execution from 20% to 71%, when compared to the traditional paper-based method. The results of this research validate and demonstrate how emerging technologies are advancing the choice between manual, hybrid or fully automated processes by promoting XR-assisted processes and the connected worker (a vision for Industry 4.0 and 5.0), and by supporting manufacturing in becoming more resilient in times of constant market change.

Keywords: Augmented reality, extended reality, connected worker, XR-assisted operator, manual assembly 4.0, industry 5.0, smart training, battery assembly.

1345 Dynamic Modeling of Underwater Manipulator and Its Simulation

Authors: Ruiheng Li, Amir Parsa Anvar, Amir M. Anvar, Tien-Fu Lu

Abstract:

High redundancy and strong uncertainty are two main characteristics of underwater robotic manipulators with unlimited workspace and mobility, but they also make motion planning and control difficult and complex. In order to set up the groundwork for research on control schemes, the mathematical representation is built using the Denavit-Hartenberg (D-H) method [9], [12], together with the geometry of the manipulator, which was studied to establish the direct and inverse kinematics. The dynamic model is then developed by employing the Lagrange theorem. Furthermore, the derivation and computer simulation are carried out in the MATLAB environment. The results obtained are compared with those from the mechanical system dynamics analysis software ADAMS. In addition, the creation of intelligent artificial skin using Interlink Force Sensing Resistor™ technology is presented as groundwork for future work.
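
A minimal Python sketch of the Denavit-Hartenberg convention used for the direct kinematics described above; the three-link parameter table below is an illustrative placeholder, not the actual geometry of the underwater manipulator studied in the paper.

import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links for one row of D-H parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    """Chain the per-link transforms to obtain the end-effector pose (direct kinematics)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 3-link arm: (d, a, alpha) per link.
dh_table = [(0.0, 0.3, np.pi / 2), (0.0, 0.25, 0.0), (0.0, 0.15, 0.0)]
print(forward_kinematics([0.1, -0.4, 0.7], dh_table))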

Keywords: Manipulator System, Robot, AUV, Denavit-Hartenberg method, Lagrange theorem, MATLAB, ADAMS, Direct and Inverse Kinematics, Dynamics, PD Control-law, Interlink Force Sensing Resistor™, intelligent artificial skin system.

1344 Several Aspects of the Conceptual Framework of Financial Reporting

Authors: Nadezhda Kvatashidze

Abstract:

The conceptual framework of International Financial Reporting Standards determines the basic principles of accounting. These principles have multiple applications, professional judgment being one of them. Recognition and assessment of the information contained in financial reporting, especially of uncertain events and transactions and of those for which there is no standard or interpretation, are based on professional judgments. Professional judgments aim at formulating expert assumptions regarding the specifics of the circumstances and events to be entered into the report, based on the terms and principles of the conceptual framework. Experts have to choose one of the possible treatments and simulate the situations by applying multi-variant accounting estimates and judgment. In making the choice, one should consider all the factors that may help represent the information in the best way possible. Professional judgment determines the relevance and faithful representation of the presented information, which makes it more useful for existing and potential investors. In order to assess the prospective net cash flows, the information must be predictable and reliable. The paper contains a critical analysis of the aforementioned problems. The fact that the International Financial Reporting Standards are being developed continuously makes the issue all the more important, and this is another point discussed in the study.

Keywords: Conceptual Framework for financial reporting, Qualitative characteristics of financial information, Professional judgement, Cost constraints, Financial reporting.

1343 Investigating Interference Errors Made by Azzawia University 1st year Students of English in Learning English Prepositions

Authors: Aimen Mohamed Almaloul

Abstract:

The main focus of this study is investigating the interference of Arabic in the use of English prepositions by Libyan university students. Prepositions in the tests used in the study were categorized, according to their relation to Arabic, into similar Arabic and English prepositions (SAEP), dissimilar Arabic and English prepositions (DAEP), Arabic prepositions with no English counterparts (APEC), and English prepositions with no Arabic counterparts (EPAC).

The subjects of the study were 100 first-year university students, both male and female, of the English department, Sabrata Faculty of Arts, Azzawia University. The basic tool for data collection was a test of English prepositions; students were instructed to fill in the blanks with the correct prepositions and to put a zero (0) if no preposition was needed. The test was then handed to the subjects of the study.

The test was then scored, and both quantitative and qualitative results were obtained. The quantitative results indicated the number, percentages and rank order of errors in each of the categories, and the qualitative results indicated the nature and significance of those errors and their possible sources. Based on the results obtained, the researcher found that students made more errors in the EPAC category than in the other three categories, and these errors could be attributed to a lack of knowledge of the different meanings of English prepositions. This lack of knowledge forced the students to adopt what is called the strategy of transfer.

Keywords: Foreign language acquisition, foreign language learning, interference system, interlanguage system, mother tongue interference.

1342 Ontology-based Domain Modelling for Consistent Content Change Management

Authors: Muhammad Javed, Yalemisew M. Abgaz, Claus Pahl

Abstract:

Ontology-based modelling of multi-formatted software application content is a challenging area in content management. When the number of software content units is huge and in a continuous process of change, content change management becomes important. The management of content in this context requires targeted access and manipulation methods. We present a novel approach to deal with model-driven content-centric information systems and access to their content. At the core of our approach is an ontology-based semantic annotation technique for diversely formatted content that can improve the accuracy of access and systems evolution. Domain ontologies represent domain-specific concepts and conform to metamodels. Different ontologies, from application domain ontologies to software ontologies, capture and model the different properties of and perspectives on a software content unit. Interdependencies between the domain ontologies, the artifacts and the content are captured through a trace model. The annotation traces are formalised, and a graph-based system is selected for the representation of the annotation traces.
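
A minimal Python sketch of the graph-based trace idea, assuming hypothetical ontology concepts and content units (the names are invented, not taken from the paper): annotation traces are stored as a labelled directed graph so that the content affected by an ontology change can be found by following edges.

import networkx as nx

trace = nx.DiGraph()
trace.add_edge("DomainOntology:Invoice", "ContentUnit:invoice_form.xml", relation="annotates")
trace.add_edge("SoftwareOntology:ValidationRule", "ContentUnit:invoice_form.xml", relation="constrains")
trace.add_edge("DomainOntology:Invoice", "DomainOntology:LineItem", relation="hasPart")

def impacted_content(graph, changed_concept):
    """Return every content unit reachable from a changed ontology concept."""
    return [n for n in nx.descendants(graph, changed_concept) if n.startswith("ContentUnit:")]

print(impacted_content(trace, "DomainOntology:Invoice"))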

Keywords: Consistent Content Management, Impact Categorisation, Trace Model, Ontology Evolution

1341 A Biomimetic Structural Form: Developing a Paradigm to Attain Vital Sustainability in Tall Architecture

Authors: Osama Al-Sehail

Abstract:

This paper argues for sustainability as a necessity in the evolution of tall architecture. It provides a different mode of dealing with sustainability in tall architecture, taking into consideration the speciality of its typology. To this end, the article develops a Biomimetic Structural Form as a paradigm for attaining Vital Sustainability. A Biomimetic Structural Form is derived from the amalgamation of biomimicry, an approach to sustainability that defines nature as a source of knowledge and inspiration for solving human problems, and a Structural Form, a catalyst for evolving tall architecture; it is a dynamic paradigm emerging from a conceptualizing and morphological process. A Biomimetic Structural Form is a flow system whose different forces and functions tend to be “better”, more “fit”, to “survive”, and to be efficient. Through geometry and function, the two aspects of knowledge extracted from nature, the attributes of the Biomimetic Structural Form are formulated. Vital Sustainability is the survival level of sustainability in natural systems, through which a system enhances the performance of its internal workings and its interaction with the external environment. A Biomimetic Structural Form, in this context, is a medium for evolving tall architecture to emulate natural models in their ways of coexisting with the environment. As an integral part of this article, the sustainable super-tall building 3Ts is discussed as a case study of applying the Biomimetic Structural Form.

Keywords: Biomimicry, design in nature, high-rise buildings, sustainability, structural form, tall architecture, vital sustainability.

1340 Statistics over Lyapunov Exponents for Feature Extraction: Electroencephalographic Changes Detection Case

Authors: Elif Derya Ubeyli, Inan Guler

Abstract:

A new approach based on the consideration that electroencephalogram (EEG) signals are chaotic signals is presented for the automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools, such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents in order to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as a basis for the detection of electroencephalographic changes. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs of the MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential in detecting electroencephalographic changes.
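
A minimal Python sketch of the dimensionality-reduction step described above: instead of feeding every Lyapunov exponent to the classifier, simple statistics over the set of exponents form a compact feature vector. The exponent values below are synthetic, not estimates from real EEG segments.

import numpy as np

def exponent_statistics(lyapunov_exponents):
    """Reduce a set of Lyapunov exponents to a small statistical feature vector."""
    x = np.asarray(lyapunov_exponents)
    return np.array([x.max(), x.min(), x.mean(), x.std()])

# Hypothetical exponents estimated from one EEG segment.
segment_exponents = np.array([0.42, 0.11, -0.05, -0.31, -0.87])
features = exponent_statistics(segment_exponents)
print(features)  # these statistics would serve as MLPNN inputs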

Keywords: Chaotic signal, Electroencephalogram (EEG) signals, Feature extraction/selection, Lyapunov exponents

1339 Superior Performances of the Neural Network on the Masses Lesions Classification through Morphological Lesion Differences

Authors: U. Bottigli, R. Chiarucci, B. Golosio, G.L. Masala, P. Oliva, S. Stumbo, D. Cascio, F. Fauci, M. Glorioso, M. Iacomi, R. Magro, G. Raso

Abstract:

The purpose of this work is to develop an automatic classification system that could be useful for radiologists in breast cancer investigation. The software has been designed in the framework of the MAGIC-5 collaboration. In an automatic classification system, the suspicious regions with a high probability of containing a lesion are extracted from the image as regions of interest (ROIs). Each ROI is characterized by some features, generally based on morphological lesion differences. A study of the feature space representation is made, and some classifiers are tested to distinguish the pathological regions from the healthy ones. The results, in terms of sensitivity and specificity, are presented through ROC (Receiver Operating Characteristic) curves. In particular, the best performances are obtained with neural networks in comparison with the k-nearest neighbours and the support vector machine: the radial basis function network supplies the best results, with an area under the ROC curve of 0.89 ± 0.01, but similar results are obtained with the probabilistic neural network and a multilayer perceptron.
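
A minimal Python sketch of comparing classifiers by the area under the ROC curve, as in the study above; the ROI features are synthetic stand-ins generated with scikit-learn, and the classifier settings are illustrative rather than the ones used by the authors.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=600, n_features=12, random_state=0)  # stand-in ROI features
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "neural network": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(n_neighbors=5),
    "support vector machine": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    scores = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    print(name, "AUC =", round(roc_auc_score(yte, scores), 3))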

Keywords: Neural Networks, K-Nearest Neighbours, Support Vector Machine, Computer Aided Detection

1338 Local Curvelet Based Classification Using Linear Discriminant Analysis for Face Recognition

Authors: Mohammed Rziza, Mohamed El Aroussi, Mohammed El Hassouni, Sanaa Ghouzali, Driss Aboutajdine

Abstract:

In this paper, an efficient local appearance feature extraction method based on the multi-resolution Curvelet transform is proposed in order to further enhance the performance of the well-known Linear Discriminant Analysis (LDA) method when applied to face recognition. Each face is described by a subset of band-filtered images containing block-based Curvelet coefficients. These coefficients characterize the face texture, and a set of simple statistical measures allows us to form compact and meaningful feature vectors. The proposed method is compared with related feature extraction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). Two different multi-resolution transforms, the Wavelet transform (DWT) and the Contourlet transform, were also compared against the block-based Curvelet-LDA algorithm. Experimental results on the ORL, YALE and FERET face databases show that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.
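
A minimal Python sketch of block-based subband statistics followed by LDA, using a discrete wavelet transform (one of the comparison transforms mentioned above) as a stand-in, since a Curvelet implementation is not assumed here; the images and labels are random placeholders, not ORL/YALE/FERET data.

import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def block_stats(coeff, block=8):
    """Mean and standard deviation of non-overlapping blocks of one subband."""
    h, w = (coeff.shape[0] // block) * block, (coeff.shape[1] // block) * block
    blocks = coeff[:h, :w].reshape(h // block, block, w // block, block)
    return np.concatenate([blocks.mean(axis=(1, 3)).ravel(), blocks.std(axis=(1, 3)).ravel()])

def face_features(image):
    """Concatenate block statistics of every level-2 subband into one feature vector."""
    coeffs = pywt.wavedec2(image, "db2", level=2)
    subbands = [coeffs[0]] + [band for level in coeffs[1:] for band in level]
    return np.concatenate([block_stats(sb) for sb in subbands])

faces = np.random.rand(20, 64, 64)        # stand-in face crops
labels = np.repeat(np.arange(4), 5)       # four hypothetical identities
X = np.array([face_features(f) for f in faces])
lda = LinearDiscriminantAnalysis().fit(X, labels)
print(lda.predict(X[:3]))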

Keywords: Curvelet, Linear Discriminant Analysis (LDA), Contourlet, Discrete Wavelet Transform (DWT), Block-based analysis, face recognition (FR).

1337 A Monte Carlo Method to Data Stream Analysis

Authors: Kittisak Kerdprasop, Nittaya Kerdprasop, Pairote Sattayatham

Abstract:

Data stream analysis is the process of computing various summaries and derived values from large amounts of data that are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. These techniques can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem. The data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed through a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
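
A minimal Python sketch of the general idea, not of the EMR method itself (which is the paper's own contribution): keep a fixed-size uniform sample of a stream seen only once with reservoir sampling, then approximate stream characteristics (here the mean) from that sample in Monte Carlo fashion.

import random

def reservoir_sample(stream, k, seed=0):
    """Maintain a uniform random sample of size k from a one-pass stream."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

stream = (x * 0.5 for x in range(100000))   # stand-in for a continuously generated stream
sample = reservoir_sample(stream, k=500)
print("approximate stream mean:", sum(sample) / len(sample))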

Keywords: Data Stream, Monte Carlo, Sampling, Density Estimation.

1336 The Influence of Gender on Job-Competencies Requirements of Chemical-Based Industries and Undergraduate-Competencies Acquisition of Chemists in South West, Nigeria

Authors: Rachael Olatoun Okunuga

Abstract:

Developing young people’s employability is a key policy issue for ensuring their successful transition to the labour market and their access to career-oriented employment. The youth of today, irrespective of gender, need to acquire the knowledge, skills and attitudes that will enable them to create or find jobs as well as cope with unpredictable labour market changes throughout their working lives. In a study carried out to determine the influence of gender on the job-competency requirements of chemical-based industries and the undergraduate-competency acquisition of chemists working in those industries, all chemistry graduates working in twenty (20) chemical-based industries, randomly selected from six sectors of chemical-based industries in Lagos and Ogun States of Nigeria, were administered a job-competencies-required and undergraduate-competencies-acquired assessment questionnaire. The data were analysed using means and independent-sample t-tests. The findings revealed that the population of female chemists working in chemical-based industries is low compared with the number of male chemists; furthermore, job-competency requirements were found not to be gender biased, while there is no significant difference in the undergraduate-competency acquisition of male and female chemists. This suggests that females should be given the same opportunity of employment in chemical-based industries as their male counterparts. The study also revealed the level of acquisition of undergraduate competencies as related to the needs of chemical-based industries.

Keywords: Acquired, attitude, employability, knowledge, required, skill.

1335 Perception of Secondary Schools’ Students on Computer Education in Federal Capital Territory (FCT-Abuja), Nigeria

Authors: Salako Emmanuel Adekunle

Abstract:

Computer education refers to the knowledge and ability to use computers and related technology efficiently, with a range of skills covering levels from basic use to advanced applications. Computers continue to make an ever-increasing impact on all aspects of human endeavour, including education. Given the numerous benefits of computer education, what are the views of students on it? This study investigated the perception of senior secondary school students on computer education in the Federal Capital Territory (FCT), Abuja, Nigeria. A sample of 7500 senior secondary school students was involved in the study, drawn from one hundred (100) private and fifty (50) public schools within the FCT. They were selected using a simple random sampling technique. A questionnaire (PSSSCEQ) was developed and validated through expert judgement, and a reliability coefficient of 0.84 was obtained. It was used to gather relevant data on computer education. The findings confirmed that the students in the FCT had a positive perception of computer education. Some factors that affect students’ perception of computer education were identified. The null hypotheses were tested using t-test and ANOVA statistical analyses at the 0.05 level of significance. Based on these findings, some recommendations were made, including that competent teachers should be employed in all secondary schools to help students acquire relevant knowledge in computer education, that technological support should be provided to all secondary schools to help students solve specific problems in computer education, and that financial support should be provided to procure computer facilities that will enhance the teaching and learning of computer education.

Keywords: Computer education, perception, secondary school, students.

1334 Component Based Framework for Authoring and Multimedia Training in Mathematics

Authors: Ion Smeureanu, Marian Dardala, Adriana Reveiu

Abstract:

The new programming technologies allow for the creation of components that can be automatically or manually assembled to reach a new experience in understanding and mastering knowledge, or in acquiring skills for a specific knowledge area. The project proposes an interactive framework that permits the creation, combination and utilization of components that are specific to mathematical training in high schools. The main objectives of the framework are:
• authoring lessons by the teacher or the students; all they need are simple operating skills for Equation Editor (or something similar, or LaTeX); the rest are just drag & drop operations, inserting data into a grid, or navigating through menus;
• allowing audio presentations of mathematical texts and solving hints (more easily understood by the students);
• offering graphical representations of a mathematical function edited in Equation Editor;
• storing learning objects in a database;
• storing predefined lessons (efficient for expressions and commands, the rest being calculations; this allows a high compression);
• viewing and/or modifying predefined lessons, according to the curricula.
The whole framework is centred on a mathematical expression minicompiler, which stores code that is later used for different purposes (tables, graphics, and optimisations). A Visual C# .NET implementation is proposed. New and innovative digital learning objects for mathematics will be developed; they are capable of interpreting, contextualizing and reacting depending on the architecture in which they are assembled.

Keywords: Adaptor, automatic assembly learning component and user control.

1333 Speaker Identification Using Admissible Wavelet Packet Based Decomposition

Authors: Mangesh S. Deshpande, Raghunath S. Holambe

Abstract:

Mel-Frequency Cepstral Coefficient (MFCC) features are widely used as acoustic features for speech recognition as well as speaker recognition. In the MFCC feature representation, the Mel frequency scale is used to get a high resolution in the low-frequency region and a low resolution in the high-frequency region. This kind of processing is good for obtaining stable phonetic information, but it is not suitable for speaker features that are located in the high-frequency regions. The speaker-specific information, which is non-uniformly distributed in the high frequencies, is equally important for speaker recognition. Based on this fact, we propose an admissible wavelet packet based filter structure for speaker identification. The multiresolution capabilities of the wavelet packet transform are used to derive the new features. The proposed scheme differs from previous wavelet-based works mainly in the design of the filter structure. Unlike others, the proposed filter structure does not follow the Mel scale. Closed-set speaker identification experiments performed on the TIMIT database show improved identification performance compared to other commonly used Mel-scale-based filter structures using wavelets.
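
A minimal Python sketch of wavelet packet based feature extraction on a synthetic frame: the log-energies of the terminal subbands play the role of filter-bank outputs. The specific admissible tree structure proposed in the paper is not reproduced here.

import numpy as np
import pywt

def wavelet_packet_features(signal, wavelet="db4", level=3):
    """Log-energy of every terminal node of a wavelet packet tree, ordered by frequency."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return np.log(energies + 1e-12)

frame = np.random.randn(512)             # stand-in for one speech frame
print(wavelet_packet_features(frame))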

Keywords: Speaker identification, Wavelet transform, Feature extraction, MFCC, GMM.

1332 Memorabilia of Suan Sunandha through Interactive User Interface

Authors: Nalinee Sophatsathit

Abstract:

The objectives of Memorabilia of Suan Sunandha are to develop a general knowledge presentation about the historical royal garden through an interactive graphic simulation technique and to employ a high-functionality context in enhancing interactive user navigation. The approach infers a non-intrusive display of relevant history in response to situational context. The user’s navigation runs through the virtual-reality campus, consisting of new and restored buildings. A flashback presentation of information pertaining to the history, in the form of photos, paintings, and textual descriptions, is displayed along each building the user passes. To keep the presentation lively, the graphical simulation is created as a serendipity game play so that the user can both learn from and enjoy the educational tour. The benefits of this human-computer interaction development are twofold. First, a lively presentation technique and situational context modeling are developed that entail a usable paradigm of knowledge and information presentation combinations. Second, cost-effective training and promotion, for both internal personnel and public visitors to learn about and keep informed of this historical royal garden, can be furnished without the need for a dedicated public relations service. Future improvements in graphic simulation and ability-based display can extend this work to be more realistic, user-friendly, and informative for all.

Keywords: Interactive user navigation, high-functionality context, situational context, human-computer interaction.

1331 A Safety Analysis Method for Multi-Agent Systems

Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller

Abstract:

Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems. A feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, our method then constructs an “Interaction Map,” a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps, combined with statistical data from accidents and the HAZOP classifications of events, can be converted into a Bayesian network. Bayesian networks allow designers to explore “what if” scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.
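
A minimal Python sketch of one step implied above: turning HAZOP-labelled accident records into the conditional probabilities that would populate a Bayesian network node. The records and guide-word labels below are invented for illustration, not drawn from any real accident database.

from collections import Counter

# Each record: (guide word attached to an interaction event, accident occurred?)
records = [("NO_OR_NOT", True), ("NO_OR_NOT", False), ("MORE", True),
           ("MORE", True), ("LATE", False), ("LATE", True), ("NO_OR_NOT", True)]

def conditional_accident_probability(records):
    """Estimate P(accident | guide word) by counting labelled accident events."""
    totals, accidents = Counter(), Counter()
    for guide_word, accident in records:
        totals[guide_word] += 1
        accidents[guide_word] += int(accident)
    return {gw: accidents[gw] / totals[gw] for gw in totals}

print(conditional_accident_probability(records))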

Keywords: Multi-agent system, safety analysis, safety model.

1330 Visualizing Imaging Pathways after Anatomy-Specific Follow-Up Imaging Recommendations

Authors: Thusitha Mabotuwana, Christopher S. Hall

Abstract:

Radiologists routinely make follow-up imaging recommendations, usually based on established clinical practice guidelines, such as the Fleischner Society guidelines for managing lung nodules. In order to ensure optimal care, it is important to make guideline-compliant recommendations, and also for patients to follow up on these imaging recommendations in a timely manner. However, determining such compliance rates after a specific finding has been observed usually requires many time-consuming manual steps. To address some of these limitations of current approaches, in this paper we discuss a methodology to automatically detect finding-specific follow-up recommendations from radiology reports and create a visualization of the relevant subsequent exams showing the modality transitions. Nearly 5% of patients who had a lung-related follow-up recommendation continued to have at least eight subsequent outpatient CT exams during the seven-year period following the recommendation. Radiologists and section chiefs can use the proposed tool to better understand how a specific patient population is being managed, identify possible deviations from established guideline recommendations, and obtain a patient-specific graphical representation of the imaging pathways for an abstract view of the overall treatment path thus far.
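
A minimal Python sketch of the two steps described above, using invented report snippets and exam lists: a pattern-based detector flags follow-up recommendations, and a patient's subsequent exams are reduced to modality-transition counts, the edges of an imaging-pathway visualization. The actual detection method in the paper is not assumed to be this regular expression.

import re
from collections import Counter

RECOMMENDATION = re.compile(r"recommend(?:ed)?\s+follow[- ]?up\s+(CT|MRI|PET|X-?ray)", re.I)

reports = [
    "Small pulmonary nodule. Recommend follow-up CT in 6 months.",
    "No acute findings.",
    "Indeterminate lesion; recommended follow up MRI of the abdomen.",
]
print([RECOMMENDATION.search(r).group(1) for r in reports if RECOMMENDATION.search(r)])

# Subsequent exams for one hypothetical patient, in chronological order.
exams = ["CT", "CT", "MRI", "CT", "PET"]
transitions = Counter(zip(exams, exams[1:]))   # modality transitions to visualize
print(transitions)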

Keywords: Follow-up recommendations, care pathways, imaging pathway visualization, follow-up tracking.

1329 Morpho-Phonological Modelling in Natural Language Processing

Authors: Eleni Galiotou, Angela Ralli

Abstract:

In this paper we propose a computational model for the representation and processing of morpho-phonological phenomena in a natural language, such as Modern Greek. We aim at a unified treatment of inflection, compounding, and word-internal phonological changes, in a model that is used for both analysis and generation. After discussing certain difficulties caused by well-known finite-state approaches, such as Koskenniemi’s two-level model [7], when applied to a computational treatment of compounding, we argue that a morphology-based model provides a more adequate account of word-internal phenomena. Contrary to the finite-state approaches, which cannot handle hierarchical word constituency in a satisfactory way, we propose a unification-based word grammar, as the nucleus of our strategy, which takes into consideration word representations that are based on affixation and [stem stem] or [stem word] compounds. In our formalism, feature-passing operations are formulated with the use of the unification device, and phonological rules modeling the correspondence between lexical and surface forms apply at morpheme boundaries. In the paper, examples from Modern Greek illustrate our approach. Morpheme structures, stress, and morphologically conditioned phoneme changes are analyzed and generated in a principled way.
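
A minimal Python sketch of the unification device for flat feature structures, with made-up features; a real unification-based word grammar would unify recursively over stems, affixes and compound constituents rather than over simple dictionaries.

def unify(fs1, fs2):
    """Return the merged feature structure, or None if any feature clashes."""
    result = dict(fs1)
    for feature, value in fs2.items():
        if feature in result and result[feature] != value:
            return None                      # unification failure
        result[feature] = value
    return result

stem = {"cat": "noun", "gender": "neut"}
suffix = {"number": "pl", "case": "nom", "cat": "noun"}
print(unify(stem, suffix))                   # features passed from both constituents
print(unify(stem, {"gender": "fem"}))        # clash -> None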

Keywords: Morpho-Phonology, Natural Language Processing.

1328 Scalable Systolic Multiplier over Binary Extension Fields Based on Two-Level Karatsuba Decomposition

Authors: Chiou-Yng Lee, Wen-Yo Lee, Chieh-Tsai Wu, Cheng-Chen Yang

Abstract:

Shifted polynomial basis (SPB) is a variation of polynomial basis representation. SPB has potential for efficient bit-level and digit-level implementations of multiplication over binary extension fields with subquadratic space complexity. For efficient implementation of pairing computation with large finite fields, this paper presents a new SPB multiplication algorithm based on Karatsuba schemes, and uses it to derive a novel scalable multiplier architecture. Analytical results show that the proposed multiplier provides a trade-off between space and time complexities. Our proposed multiplier is modular, regular, and suitable for very-large-scale integration (VLSI) implementations. It involves lower area complexity compared to multipliers based on traditional decomposition methods. It is, therefore, more suitable for efficient hardware implementation of pairing-based cryptography and elliptic curve cryptography (ECC) in constraint-driven applications.
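
A minimal Python sketch of the Karatsuba decomposition over GF(2)[x] that underlies such multipliers, with polynomials packed into integer bit masks; this is the classic recursive software form, not the shifted-polynomial-basis systolic architecture itself.

def clmul(a, b):
    """Schoolbook carry-less (GF(2)[x]) multiplication of bit-packed polynomials."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def karatsuba_gf2(a, b, bits=64):
    """Karatsuba over GF(2)[x]: three half-size products instead of four per level."""
    if bits <= 8:
        return clmul(a, b)
    half = bits // 2
    mask = (1 << half) - 1
    a_lo, a_hi = a & mask, a >> half
    b_lo, b_hi = b & mask, b >> half
    lo = karatsuba_gf2(a_lo, b_lo, half)
    hi = karatsuba_gf2(a_hi, b_hi, half)
    mid = karatsuba_gf2(a_lo ^ a_hi, b_lo ^ b_hi, half) ^ lo ^ hi
    return (hi << (2 * half)) ^ (mid << half) ^ lo

a, b = 0x87A3F5C112345678, 0x1B4E9D02FEDCBA98
assert karatsuba_gf2(a, b) == clmul(a, b)    # same product, fewer partial multiplications
print(hex(karatsuba_gf2(a, b)))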

Keywords: Digit-serial systolic multiplier, elliptic curve cryptography (ECC), Karatsuba algorithm (KA), shifted polynomial basis (SPB), pairing computation.

1327 A Corpus-Based Approach to Understanding Market Access in Fisheries and Aquaculture: A Systematic Literature Review

Authors: Cheryl Marie Cordeiro

Abstract:

Although fisheries and aquaculture studies might seem marginal to international business (IB) studies in general, fisheries and aquaculture IB (FAIB) management is currently facing increasing pressure to meet global demand and consumption for fish in the coming decades. In partial address of this challenge, the purpose of this systematic literature review (SLR) study is to investigate the use of the term ‘market access’ in its context of use in the generic literature and business sector discourse, in comparison to the more specific literature and discourse in fisheries, aquaculture and seafood. This SLR aims to uncover the knowledge/interest gaps between the academic subject discourses and business sector practices. The methodology is corpus-driven and uses a triangulation of three text analysis tools: AntConc, VOSviewer and Web of Science (WoS) analytics. The SLR results indicate a gap in conceptual knowledge and business practices in how ‘market access’ is conceived and used in the context of the pharmaceutical healthcare industry compared with FAIB research and practice. While it is acknowledged that the product orientations of different business sectors might differ, this SLR study works with the assumption that both business sectors are global in orientation. These business sectors are complex in their operations from product to market. This SLR suggests a conceptual model for understanding the challenges, the potential barriers, as well as avenues for solutions to developing market access for FAIB.

Keywords: Market access, fisheries and aquaculture, international business, systematic literature review.

1326 Interactive Chinese Character Learning System through Pictograph Evolution

Authors: J.H. Low, C.O. Wong, E.J. Han, K.R. Kim, K.C. Jung, H.K. Yang

Abstract:

This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based language learning. The origin of the language itself is taken as a learning platform because of the complexity of Chinese compared to other types of languages. Users, especially children, enjoy this learning system more because they are able to memorize Chinese characters easily and understand more about the origin of the characters in a pleasurable learning environment, compared to the traditional approach in which children need to learn Chinese characters by rote in a less enjoyable environment. Skeletonization is used as the representation of the Chinese character and of the drawn object, with an animated pictograph evolution to facilitate the learning of the language. A shortest-skeleton-path matching technique is employed for fast and accurate matching in our implementation. The user is required to either write a word or draw a simple 2D object in the input panel, and the matched word and object are displayed together with the pictograph evolution to instill learning. The target of the computer-based learning system is pre-school children between 4 and 6 years old, who learn Chinese characters in a flexible and entertaining manner while utilizing a visual and mind-mapping strategy as the learning methodology.
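
A minimal Python sketch of the skeletonization step on a synthetic binary stroke image; the shortest-skeleton-path matching used for recognition in the paper is not reproduced here.

import numpy as np
from skimage.morphology import skeletonize

stroke = np.zeros((32, 32), dtype=bool)
stroke[10:14, 4:28] = True              # a thick horizontal stroke, as a stand-in character part
skeleton = skeletonize(stroke)
print("stroke pixels:", stroke.sum(), "-> skeleton pixels:", skeleton.sum())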

Keywords: Computer-based learning, Chinese character, pictograph evolution, skeletonization.

1325 Highlighting Document's Structure

Authors: Sylvie Ratté, Wilfried Njomgue, Pierre-André Ménard

Abstract:

In this paper, we present symbolic recognition models to extract knowledge characterized by document structures. Focussing on the extraction and the meticulous exploitation of the semantic structure of documents, we obtain a meaningful contextual tagging corresponding to different unit types (title, chapter, section, enumeration, etc.).

Keywords: Information retrieval, document structures, symbolic grammars.

1324 Web Content Mining: A Solution to Consumer's Product Hunt

Authors: Syed Salman Ahmed, Zahid Halim, Rauf Baig, Shariq Bashir

Abstract:

With the rapid growth in business size, today’s businesses orient towards electronic technologies. Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous amount of hugely unstructured data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from seemingly impossible-to-search unstructured data on the Internet. The application of web content mining can be very encouraging in the areas of customer relations modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact in areas specific to business user needs, focusing on both the customer and the producer. The techniques reviewed include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, using a graph-based approach for the development of a web crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produced against a particular search.

Keywords: Data mining, web mining, search engines, knowledge discovery.

1323 Compact Binary Tree Representation of Logic Function with Enhanced Throughput

Authors: Padmanabhan Balasubramanian, C. Ardil

Abstract:

An effective approach for realizing a binary tree structure representing a combinational logic functionality with enhanced throughput is discussed in this paper. The optimization in maximum operating frequency was achieved through delay minimization, which in turn was made possible by reducing the depth of the binary network. The proposed synthesis methodology has been validated by experimentation with FPGA as the target technology. Though our proposal is technology independent, the heuristic enables better optimization in throughput, even after technology mapping, for Boolean functionality whose reduced CNF form is associated with a lower literal cost than its reduced DNF form at the Boolean equation level. For other cases, our method converges to results similar to those of [12]. The practical results obtained for a variety of case studies demonstrate an improvement in the maximum throughput rate for the Spartan IIE (XC2S50E-7FT256) and Spartan 3 (XC3S50-4PQ144) FPGA logic families of 10.49% and 13.68% respectively. With respect to the LUTs and IOBUFs required for the physical implementation of the requisite non-regenerative logic functionality, the proposed method enabled savings of 44.35% and 44.67% respectively, over the existing efficient method available in the literature [12].
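
A minimal Python sketch of the literal-cost comparison used above as the selection criterion, computed with sympy on a small example function that is not one of the paper's benchmarks.

from sympy import symbols
from sympy.logic.boolalg import to_cnf, to_dnf

def literal_count(expr):
    """Total number of literal occurrences in a two-level expression."""
    if expr.is_Atom:
        return 1
    return sum(literal_count(arg) for arg in expr.args)

a, b, c, d = symbols("a b c d")
f = (a & b) | (c & d) | (a & ~c)
cnf, dnf = to_cnf(f, simplify=True), to_dnf(f, simplify=True)
print("CNF literals:", literal_count(cnf), "| DNF literals:", literal_count(dnf))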

Keywords: Binary logic tree, FPGA based design, Boolean function, Throughput rate, CNF, DNF.

1322 Information Dissemination System (IDS) Based E-Learning in Agriculture of Iran (Perception of Iranian Extension Agents)

Authors: A. R. Ommani, M. Chizari

Abstract:

The purpose of the study reported here was to design an Information Dissemination System (IDS) based e-learning approach for agriculture in Iran. A questionnaire was developed for designing the Information Dissemination System. The questionnaire was distributed to 96 extension agents who work for the Management of Extension and Farming System of Khuzestan province of Iran. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS). Appropriate descriptive statistics (frequencies, percentages, means, and standard deviations) were used. In this study there was a significant relationship between age, IT skill and knowledge, years of extension work, the extent of information-seeking motivation, level of job satisfaction and level of education, and the use of information technology by extension agents. According to the extension agents, five factors were ranked as the top essential items for designing an Information Dissemination System (IDS) based e-learning approach for agriculture in Iran. These factors are: 1) establish communication between farmers, coordinators (extension agents), agricultural experts, research centers, and the community through information technology; 2) the communication between all parties should be mutual; 3) the information must be based on farmers’ needs; 4) the Internet should be used as a facility to transfer advanced agricultural information to the farming community; 5) farmers may be illiterate or speak a local language, and they are not expected to use the system directly. Knowledge produced by agricultural scientists must be transformed into a computer-understandable presentation. To design the Information Dissemination System, electronic communication in the agricultural society and rural areas must be developed. This communication must be mutual between all parties.

Keywords: E-learning, information dissemination system, information technology.

1321 Using Mean-Shift Tracking Algorithms for Real-Time Tracking of Moving Images on an Autonomous Vehicle Testbed Platform

Authors: Benjamin Gorry, Zezhi Chen, Kevin Hammond, Andy Wallace, Greg Michaelson

Abstract:

This paper describes new computer vision algorithms that have been developed to track moving objects as part of a long-term study into the design of (semi-)autonomous vehicles. We present the results of a study exploiting variable kernels for tracking in video sequences. The basis of our work is the mean-shift object-tracking algorithm; for a moving target, it is usual to define a rectangular target window in an initial frame, and then process the data within that window to separate the tracked object from the background by means of the mean-shift segmentation algorithm. Rather than use the standard Epanechnikov kernel, we have used a kernel weighted by the Chamfer distance transform to improve the accuracy of target representation and localization, minimising the distance between the two distributions in RGB colour space using the Bhattacharyya coefficient. Experimental results show the improved tracking capability and versatility of the algorithm in comparison with results using the standard kernel. These algorithms are incorporated as part of a robot testbed architecture which has been used to demonstrate their effectiveness.
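
A minimal Python sketch of the Bhattacharyya coefficient used above to compare the target and candidate colour distributions; the histograms here are random stand-ins for real image patches, and the Chamfer-weighted kernel itself is not reproduced.

import numpy as np

def bhattacharyya_coefficient(p, q):
    """Similarity of two normalised histograms; 1 means identical distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(np.sqrt(p * q))

target_hist = np.random.rand(16, 16, 16)     # quantised RGB histogram of the target window
candidate_hist = np.random.rand(16, 16, 16)  # histogram of the candidate window
print(bhattacharyya_coefficient(target_hist, candidate_hist))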

Keywords: Hume, functional programming, autonomous vehicle, pioneer robot, vision.

1320 An Automatic Bayesian Classification System for File Format Selection

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach to the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection using just the unstructured text description that comprises the most important format features for a particular organisation. Subsequently, the file format identification method employs a file format classifier and associated configurations to support digital preservation experts with an estimation of the required file format. Our goal is to make use of a format specification knowledge base, aggregated from different Web sources, in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends a file format for the expert’s institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision-making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision-making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms characteristic of all the formats researched. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
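
A minimal Python sketch of the naive Bayes recommendation step, assuming a tiny invented set of unstructured format descriptions; a real deployment would draw on the aggregated format-specification knowledge base described above rather than these toy strings.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

descriptions = [
    "lossless raster image wide tool support open specification",
    "compressed raster image lossy photographic web use",
    "page description text layout embedded fonts long term archiving",
    "plain text character encoding simple universal",
]
formats = ["PNG", "JPEG", "PDF/A", "TXT"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(descriptions, formats)
query = "archiving of text documents with fixed layout and embedded fonts"
print(classifier.predict([query]))           # suggested format for the institution's needs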

Keywords: Data mining, digital libraries, digital preservation, file format.

1319 Improved Text-Independent Speaker Identification using Fused MFCC and IMFCC Feature Sets based on Gaussian Filter

Authors: Sandipan Chakroborty, Goutam Saha

Abstract:

A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for a generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for speech-related applications. In a recent contribution by the authors, it has been shown that Inverted Mel-Frequency Cepstral Coefficients (IMFCC) constitute a useful feature set for SI, containing complementary information present in the high-frequency region. This paper introduces a Gaussian-shaped filter (GF) in the calculation of MFCC and IMFCC in place of the typical triangular-shaped bins. The objective is to introduce a higher amount of correlation between subband outputs. The performances of both MFCC and IMFCC improve with the GF over the conventional triangular filter (TF) based implementation, individually as well as in combination. With GMM as the speaker modeling paradigm, the performances of the proposed GF-based MFCC and IMFCC, in individual and fused mode, have been verified on two standard databases, YOHO (microphone speech) and POLYCOST (telephone speech), each of which has more than 130 speakers.
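
A minimal Python sketch of building Gaussian-shaped filters centred at mel-spaced frequencies in place of triangular bins; the centre frequencies and the single shared bandwidth below are illustrative choices, not the exact filter-bank design of the paper.

import numpy as np

def gaussian_filterbank(n_filters, n_fft_bins, sample_rate):
    """Gaussian-shaped filters centred at mel-spaced frequencies."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    centres_hz = inv_mel(np.linspace(mel(0), mel(sample_rate / 2), n_filters + 2))[1:-1]
    freqs = np.linspace(0, sample_rate / 2, n_fft_bins)
    bandwidth = (sample_rate / 2) / n_filters          # one shared width, for simplicity
    return np.exp(-0.5 * ((freqs[None, :] - centres_hz[:, None]) / (bandwidth / 2)) ** 2)

fbank = gaussian_filterbank(n_filters=20, n_fft_bins=257, sample_rate=8000)
print(fbank.shape)   # (filters, spectrum bins); overlapping Gaussians raise subband correlation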

Keywords: Gaussian Filter, Triangular Filter, Subbands, Correlation, MFCC, IMFCC, GMM.

1318 Detection of Action Potentials in the Presence of Noise Using Phase-Space Techniques

Authors: Christopher Paterson, Richard Curry, Alan Purvis, Simon Johnson

Abstract:

Emerging bio-engineering fields such as brain-computer interfaces, neuroprosthesis devices, and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation and classification of action potentials (AP) from noisy data trains. Current techniques in the field of ‘unsupervised, no-prior-knowledge’ biosignal processing include energy operators, wavelet detection and adaptive thresholding. These tend to bias towards larger AP waveforms, APs may be missed due to deviations in spike shape and frequency, and correlated noise spectra can cause false detection. Such algorithms also tend to suffer from large computational expense. A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, based upon the use of a delayed copy of the AP to highlight discontinuities relative to background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems. Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an autoregressive moving average (ARMA) noise model has been constructed based upon the background noise of the recordings. Along with the AP classification means, this model enables the generation of realistic neuronal data sets at arbitrary signal-to-noise ratios (SNR).
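
A minimal Python sketch of the delayed-copy, phase-space idea on a synthetic trace: each sample is paired with a delayed copy of itself, and excursions far from the background-noise cluster in that plane are flagged as candidate action potentials. The signal, delay and threshold rule are illustrative, not the paper's actual algorithm.

import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0, 0.1, 2000)            # background noise
for spike_at in (400, 1100, 1650):           # insert three crude AP-like deflections
    signal[spike_at:spike_at + 10] += np.hanning(10)

def phase_space_detect(x, delay=3, k=6.0):
    """Flag samples whose (x[n], x[n-delay]) point lies far from the noise cluster."""
    pts = np.column_stack([x[delay:], x[:-delay]])
    radius = np.linalg.norm(pts, axis=1)
    threshold = np.median(radius) + k * np.median(np.abs(radius - np.median(radius)))
    return np.flatnonzero(radius > threshold) + delay

print(phase_space_detect(signal)[:20])       # indices of candidate AP samples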

Keywords: Action potential detection, Low SNR, Phase-space diagrams/trajectories, Unsupervised/no-prior knowledge.

1317 Virtual 3D Environments for Image-Based Navigation Algorithms

Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka

Abstract:

This paper addresses the creation of virtual 3D environments for the study and development of mobile robot image-based navigation algorithms and techniques, which need to operate robustly and efficiently. The testing of these algorithms can be performed physically, by conducting experiments on a prototype, or by numerical simulations. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering, being unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulated applications of real environments for navigation with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and for testing image-based navigation algorithms for mobile robots. Techniques were used for applying texture and lighting effects so that the generated rendered images accurately represent the real-world version. The application will integrate image processing scripts, trajectory control, dynamic modeling, and simulation techniques for physics representation and picture rendering with the open-source 3D creation suite Blender.

Keywords: Simulation, visual navigation, mobile robot, data visualization.
