Search results for: Representation Learning.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2554


964 Connected Vertex Cover in 2-Connected Planar Graph with Maximum Degree 4 is NP-complete

Authors: Priyadarsini P. L. K, Hemalatha T.

Abstract:

This paper proves that the problem of finding a connected vertex cover in a 2-connected planar graph (CVC-2) with maximum degree 4 is NP-complete. The motivation for proving this result is to give a shorter and simpler proof of the NP-completeness of TRA-MLC (the Top Right Access point Minimum-Length Corridor) problem [1], by finding a reduction from CVC-2. TRA-MLC has many applications in laying optical fibre cables for data communication and electrical wiring in floor plans. The problem of finding a connected vertex cover in any planar graph (CVC) with maximum degree 4 is NP-complete [2]. We first show that CVC-2 belongs to NP and then find a polynomial reduction from CVC to CVC-2. Let a graph G0 and an integer K form an instance of CVC, where G0 is a planar graph and K is an upper bound on the size of the connected vertex cover in G0. We construct a 2-connected planar graph, say G, by identifying the blocks and cut vertices of G0, and then finding the planar representation of all the blocks of G0, leading to a plane graph G1. We replace the cut vertices with cycles in such a way that the resultant graph G is a 2-connected planar graph with maximum degree 4. We consider L = K − 2t + 3·∑_{i=1}^{t} d_i, where t is the number of cut vertices in G1 and d_i is the number of blocks that share the i-th cut vertex. We prove that G has a connected vertex cover of size at most L if and only if G0 has a connected vertex cover of size at most K.
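A minimal sketch of how the bound above could be computed from the block/cut-vertex structure of a graph, using the networkx library; the toy graph and the value of K below are illustrative, not from the paper:

```python
# Hypothetical sketch: computing the bound L = K - 2t + 3 * sum(d_i)
# from the block/cut-vertex structure of a planar graph, using networkx.
import networkx as nx

def cvc2_bound(G1, K):
    """Return L = K - 2t + 3 * sum(d_i), where t is the number of cut
    vertices of G1 and d_i is the number of blocks containing the i-th one."""
    cut_vertices = set(nx.articulation_points(G1))
    blocks = list(nx.biconnected_components(G1))   # blocks as vertex sets
    t = len(cut_vertices)
    d = [sum(1 for b in blocks if v in b) for v in cut_vertices]
    return K - 2 * t + 3 * sum(d)

# Toy example: two triangles sharing a single cut vertex.
G1 = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
print(cvc2_bound(G1, K=3))   # t = 1, d_1 = 2  ->  L = 3 - 2 + 6 = 7
```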

Keywords: NP-complete, 2-Connected planar graph, block, cut vertex

963 A Consistency Protocol Multi-Layer for Replicas Management in Large Scale Systems

Authors: Ghalem Belalem, Yahya Slimani

Abstract:

Large scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in the Data Grid is characterized by a strong decentralization of data across several domains, with the objective of ensuring the availability and reliability of the data in order to provide fault tolerance and scalability, which is not possible without the use of replication techniques. Unfortunately, the use of these techniques has a high cost, because it is necessary to maintain consistency between the distributed data. Nevertheless, accepting certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large scale systems. Our approach is based on a hierarchical representation model with three layers and serves a dual purpose: it reduces response times compared to a completely pessimistic approach, and it improves the quality of service compared to an optimistic approach.

Keywords: Data Grid, replication, consistency, optimistic approach, pessimistic approach.

962 A Method under Uncertain Information for the Selection of Students in Interdisciplinary Studies

Authors: José M. Merigó, Pilar López-Jurado, M. Carmen Gracia, Montserrat Casanovas

Abstract:

We present a method for the selection of students in interdisciplinary studies based on the hybrid averaging operator. We assume that the available information given in the problem is uncertain, so it is necessary to use interval numbers. Therefore, we suggest a new type of hybrid aggregation called the uncertain induced generalized hybrid averaging (UIGHA) operator. It is an aggregation operator that considers the weighted average (WA) and the ordered weighted averaging (OWA) operator in the same formulation. Therefore, we are able to consider the degree of optimism of the decision maker and grades of importance in the same approach. By using interval numbers, we are able to represent the information considering the best and worst possible results, so the decision maker gets a more complete view of the decision problem. We develop an illustrative example of the proposed scheme in the selection of students in interdisciplinary studies. We see that with the use of the UIGHA operator we get a more complete representation of the selection problem. Then, the decision maker is able to consider a wide range of alternatives depending on his interests. We also show other potential applications of the UIGHA operator in educational problems concerning the selection of different types of resources, such as students, professors, etc.
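A hedged sketch, in Python with illustrative numbers, of the kind of aggregation described here. The simplified operator below (importance-weighted arguments reordered by an inducing variable and aggregated with OWA weights, applied to interval bounds componentwise) is only one plausible formulation, not necessarily the authors' exact UIGHA definition:

```python
# Hedged sketch of an induced hybrid averaging over interval numbers:
# a simplified combination of a weighted average (importance weights)
# with an OWA-style reordering driven by order-inducing variables.
# Illustration only, not the authors' exact UIGHA operator.

def interval_induced_hybrid_avg(intervals, inducing, importance, owa_weights):
    """intervals: list of (low, high) scores, e.g. one per evaluator.
    inducing: order-inducing variables (e.g. degree of optimism per source).
    importance: weights of each source (sum to 1).
    owa_weights: OWA weights applied after reordering (sum to 1)."""
    n = len(intervals)
    # Weight each interval by n * importance_i (hybrid-averaging convention).
    weighted = [(n * w * lo, n * w * hi)
                for (lo, hi), w in zip(intervals, importance)]
    # Reorder by the order-inducing variable, largest first.
    ordered = [iv for _, iv in sorted(zip(inducing, weighted), key=lambda p: -p[0])]
    low = sum(v * lo for v, (lo, hi) in zip(owa_weights, ordered))
    high = sum(v * hi for v, (lo, hi) in zip(owa_weights, ordered))
    return (low, high)

# Toy example: three interval evaluations of one student.
scores = [(60, 70), (75, 85), (50, 65)]
print(interval_induced_hybrid_avg(scores,
                                  inducing=[3, 1, 2],
                                  importance=[0.5, 0.3, 0.2],
                                  owa_weights=[0.4, 0.4, 0.2]))
```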

Keywords: Decision making, Selection of students, Uncertainty, Aggregation operators.

961 Elemental Graph Data Model: A Semantic and Topological Representation of Building Elements

Authors: Yasmeen A. S. Essawy, Khaled Nassar

Abstract:

With the rapid increase in complexity in the building industry, professionals in the A/E/C industry have been forced to adopt Building Information Modeling (BIM) in order to enhance communication between the different project stakeholders throughout the project life cycle and create a semantic object-oriented building model that can support geometric-topological analysis of building elements during design and construction. This paper presents a model that extracts topological relationships and geometrical properties of building elements from an existing fully designed BIM and maps this information into a directed acyclic Elemental Graph Data Model (EGDM). The model incorporates BIM-based search algorithms for automatic deduction of geometrical data and topological relationships for each building element type. Using graph search algorithms, such as Depth First Search (DFS) and topological sorting, all possible construction sequences can be generated and compared against production and construction rules to generate an optimized construction sequence and its associated schedule. The model is implemented on a C# platform.
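As an illustration of the sequencing step, the sketch below derives one construction order from element-level precedence relations using Kahn's topological sort; the element names and the "supports" relation are hypothetical, not taken from the paper:

```python
# Illustrative sketch (not from the paper): deriving a construction sequence
# from element-level topological relationships with Kahn's topological sort.
from collections import defaultdict, deque

def construction_sequence(elements, supports):
    """supports: list of (a, b) meaning element a must be built before b."""
    indegree = {e: 0 for e in elements}
    succ = defaultdict(list)
    for a, b in supports:
        succ[a].append(b)
        indegree[b] += 1
    queue = deque(e for e in elements if indegree[e] == 0)
    order = []
    while queue:
        e = queue.popleft()
        order.append(e)
        for nxt in succ[e]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(elements):
        raise ValueError("cyclic precedence - no feasible sequence")
    return order

elements = ["foundation", "column_A", "slab_1", "wall_1"]
supports = [("foundation", "column_A"), ("column_A", "slab_1"), ("slab_1", "wall_1")]
print(construction_sequence(elements, supports))
```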

Keywords: Building information modeling, elemental graph data model, geometric and topological data models, and graph theory.

960 Efficiency of the Strain Based Approach Formulation for Plate Bending Analysis

Authors: Djamal Hamadi, Sifeddine Abderrahmani, Toufik Maalem, Oussama Temami

Abstract:

In recent years many finite elements have been developed for plate bending analysis. The elements formulated here are based on the strain based approach. This approach leads to the representation of the displacements by higher order polynomial terms without the need to introduce additional internal and unnecessary degrees of freedom. Good convergence can also be obtained when the results are compared with those obtained from the corresponding displacement based elements having the same total number of degrees of freedom. Furthermore, the plate bending elements are free from any shear locking, since they converge to the Kirchhoff solution for thin plates, unlike the corresponding displacement based elements. In this paper the efficiency of the strain based approach compared to the well known displacement formulation is presented. The results obtained by a newly formulated plate bending element based on the strain approach and Kirchhoff theory are compared with those of some other elements. The good convergence of the newly formulated element is confirmed.

Keywords: Displacement fields, finite elements, plate bending, Kirchhoff theory, strain based approach.

959 A Bayesian Kernel for the Prediction of Protein- Protein Interactions

Authors: Hany Alashwal, Safaai Deris, Razib M. Othman

Abstract:

Understanding protein functions is a major goal in the post-genomic era. Proteins usually work in the context of other proteins and rarely function alone. Therefore, it is highly relevant to study the interaction partners of a protein in order to understand its function. Machine learning techniques have been widely applied to predict protein-protein interactions. Kernel functions play an important role in a successful machine learning technique. Choosing the appropriate kernel function can lead to better accuracy in a binary classifier such as the support vector machine. In this paper, we describe a Bayesian kernel for the support vector machine to predict protein-protein interactions. The use of the Bayesian kernel can improve the classifier performance by incorporating the probabilistic characteristics of the available experimental protein-protein interaction data, which were compiled from different sources. In addition, the probabilistic output from the Bayesian kernel can assist biologists in conducting further research on the highly ranked predicted interactions. The results show that the accuracy of the classifier is improved using the Bayesian kernel compared to the standard SVM kernels. These results imply that protein-protein interactions can be predicted using the Bayesian kernel with better accuracy than with the standard SVM kernels.
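A minimal sketch of how a probability-aware custom kernel can be plugged into a support vector machine with scikit-learn. The reliability-weighting scheme below is only an illustration of the idea, not the authors' exact Bayesian kernel, and the data are synthetic:

```python
# Hedged sketch: a probability-weighted custom kernel for an SVM classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # protein-pair feature vectors (synthetic)
y = rng.integers(0, 2, size=200)           # interact / do not interact
reliability = rng.uniform(0.5, 1.0, 200)   # assumed prior confidence per example

def prob_weighted_kernel(A, B):
    # Base RBF similarity scaled by the geometric mean of example reliabilities;
    # reliabilities are looked up by matching rows back to X (toy setup only).
    base = rbf_kernel(A, B, gamma=0.1)
    ra = np.array([reliability[np.argmin(np.linalg.norm(X - a, axis=1))] for a in A])
    rb = np.array([reliability[np.argmin(np.linalg.norm(X - b, axis=1))] for b in B])
    return base * np.sqrt(np.outer(ra, rb))

clf = SVC(kernel=prob_weighted_kernel, probability=True).fit(X, y)
print(clf.predict_proba(X[:3]))            # probabilistic output per pair
```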

Keywords: Bioinformatics, Protein-protein interactions, Bayesian Kernel, Support Vector Machines.

958 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but it is not a normal bean either. A peaberry forms when a coffee cherry produces only a single, relatively round seed instead of the usual flat-sided pair of beans. It has a different value and flavor. To make the taste of the coffee better, it is necessary to separate the peaberries from the normal beans before roasting the green coffee beans. Otherwise, the flavors of the beans will mix, and the taste will suffer. During roasting, all the beans should be uniform in shape, size, and weight; otherwise, the larger beans take more time to roast through. The peaberry has a different size and a different shape even though it has the same weight as a normal bean, and it roasts more slowly than normal beans. Therefore, neither size nor weight provides a good option for selecting the peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists, because the shape and color of the peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate between normal beans and peaberries as part of the sorting system. As the first step, we applied Deep Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) as machine learning techniques to discriminate between the peaberry and the normal bean. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of the peaberry. The artificial neural network, trained in this work on a high-performance CPU and GPU, will simply be installed into an inexpensive, computationally limited Raspberry Pi system. We assume that this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
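A minimal sketch of the two discrimination models compared here, on placeholder image data; the array shapes, network layout and training settings are assumptions, not the paper's exact configuration:

```python
# Illustrative sketch (assumed data shapes): a small CNN and an SVM compared
# on green-bean image crops labelled normal (0) vs. peaberry (1).
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder data: 64x64 grayscale crops; replace with real bean images.
X = np.random.rand(500, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, 500)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(Xtr, ytr, epochs=3, batch_size=32, verbose=0)
_, cnn_acc = cnn.evaluate(Xte, yte, verbose=0)

svm = SVC(kernel="rbf").fit(Xtr.reshape(len(Xtr), -1), ytr)
svm_acc = svm.score(Xte.reshape(len(Xte), -1), yte)
print(f"CNN accuracy: {cnn_acc:.2f}, SVM accuracy: {svm_acc:.2f}")
```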

Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.

957 Automatic Reusability Appraisal of Software Components using Neuro-fuzzy Approach

Authors: Parvinder S. Sandhu, Hardeep Singh

Abstract:

Automatic reusability appraisal could be helpful in evaluating the quality of developed or developing reusable software components and in identifying reusable components from existing legacy systems, which can save the cost of developing the software from scratch. However, the issue of how to identify reusable components from existing systems has remained relatively unexplored. In this paper, we present a two-tier approach that studies the structural attributes as well as the usability or relevancy of a component to a particular domain. Latent semantic analysis is used for the feature vector representation of various software domains. It exploits the fact that feature vectors can be seen as documents containing terms (the identifiers present in the components), so text modeling methods that capture co-occurrence information in low-dimensional spaces can be used. Further, we devise a neuro-fuzzy hybrid inference system, which takes structural metric values as input and calculates the reusability of the software component. A decision tree algorithm is used to decide the initial set of fuzzy rules for the neuro-fuzzy system. The results obtained are convincing enough to propose the system for economical identification and retrieval of reusable software components.
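A minimal sketch of the latent semantic analysis step, assuming that identifiers extracted from components are treated as documents and compared to domain profiles; the corpus below is invented for illustration:

```python
# Illustrative sketch (assumed corpus): identifiers from components projected
# into a low-dimensional latent space and compared to domain profiles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

component_ids = [
    "open_file read_buffer close_file parse_header",        # file-handling component
    "connect_socket send_packet recv_packet close_socket",  # networking component
]
domain_docs = [
    "file read write buffer parse io stream",                # "file I/O" domain
    "socket packet network connect protocol send recv",      # "networking" domain
]

vec = TfidfVectorizer()
X = vec.fit_transform(domain_docs + component_ids)
lsa = TruncatedSVD(n_components=2, random_state=0).fit(X)
Z = lsa.transform(X)

# Relevance of each component to each domain in the latent space.
print(cosine_similarity(Z[2:], Z[:2]))
```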

Keywords: Clustering, ID3, LSA, Neuro-fuzzy System, SVD

956 Un Pavillon – Un Monument: The Modern Palace and the Case of the U.S. Embassy in Karachi, Pakistan (1955–59)

Authors: Marcos Amado Petroli

Abstract:

This paper investigates civic representation in mid-century diplomatic buildings through the case of the U.S. Embassy in Karachi (1955-59), Pakistan, designed by the Austrian-American architect Richard Neutra (1892-1970) and the American architect Robert Alexander (1907-92). Texts, magazines, and oral histories at that time highlighted the need for a new postwar expression of American governmental architecture, leaning toward modernization, technology, and monumentality. Descriptive, structural, and historical analyses of the U.S. Embassy in Karachi revealed the emergence of a new prototypical solution for postwar diplomatic buildings: the combination of one main orthogonal block, seen as a modern-day corps de logis, and a flanking arcuated pavilion, often organized in one or two stories. Although the U.S. Embassy relied on highly industrialized techniques and abstract images of social progress, archival work in the Neutra archives at the University of California, Los Angeles, revealed that much of this project was adapted to vernacular elements and traditional forms, such as the intriguing use of reinforced concrete barrel vaults.

Keywords: Modern monumentality, post-WWII diplomatic buildings, theory of character, thin-shells.

955 Accurate Time Domain Method for Simulation of Microstructured Electromagnetic and Photonic Structures

Authors: Vijay Janyani, Trevor M. Benson, Ana Vukovic

Abstract:

A time-domain numerical model within the framework of transmission line modeling (TLM) is developed to simulate electromagnetic pulse propagation inside multiple microcavities forming photonic crystal (PhC) structures. The model developed is quite general and is capable of simulating complex electromagnetic problems accurately. The field quantities can be mapped onto a passive electrical circuit equivalent, which ensures that TLM is provably stable and conservative at a local level. Furthermore, the circuit representation allows a high level of hybridization of TLM with other techniques and with lumped circuit models of components and devices. A photonic crystal structure formed by rods (or blocks) of high-permittivity dielectric material embedded in a low-permittivity background medium is simulated as an example. The model developed gives vital spatio-temporal information about the signal, and also gives spectral information over a wide frequency range in a single run. The model has wide applications in microwave communication systems, optical waveguides and electromagnetic materials simulations.

Keywords: Computational Electromagnetics, Numerical Simulation, Transmission Line Modeling.

954 Optimization of Petroleum Refinery Configuration Design with Logic Propositions

Authors: Cheng Seong Khor, Xiao Qi Yeoh

Abstract:

This work concerns the topological optimization problem of determining the optimal petroleum refinery configuration. We are interested in further investigating and hopefully advancing the existing optimization approaches and strategies that employ logic propositions in conceptual process synthesis problems. In particular, we seek to contribute to this increasingly exciting area of chemical process modeling by addressing the following potentially important issues: (a) how the formulation of design specifications in a mixed logical and integer optimization model can be employed in a synthesis problem to enrich the problem representation by incorporating past design experience, engineering knowledge, and heuristics; and (b) how structural specifications on the interconnectivity relationships by space (states) and by function (tasks) in a superstructure should be properly formulated within a mixed-integer linear programming (MILP) model. The proposed modeling technique is illustrated on a case study involving the alternative processing routes of naphtha, in which significant improvement in the solution quality is obtained.
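As an illustration of issue (a), the sketch below encodes a logic proposition of the form "if unit A is selected then unit B must be selected" as a linear constraint over binary selection variables in a small PuLP model; the units, profit values and limit are hypothetical:

```python
# Illustrative sketch (hypothetical units and profits): a logic proposition
# encoded as a linear constraint in a small refinery-style MILP with PuLP.
# "cracker selected => hydrotreater selected" becomes y_cracker <= y_hydrotreater.
import pulp

prob = pulp.LpProblem("refinery_configuration", pulp.LpMaximize)
units = ["hydrotreater", "reformer", "cracker"]
profit = {"hydrotreater": -2, "reformer": 8, "cracker": 10}   # assumed values
y = {u: pulp.LpVariable(f"y_{u}", cat="Binary") for u in units}

prob += pulp.lpSum(profit[u] * y[u] for u in units)           # objective
prob += y["cracker"] <= y["hydrotreater"]                     # cracker  => hydrotreater
prob += y["reformer"] <= y["hydrotreater"]                    # reformer => hydrotreater
prob += pulp.lpSum(y[u] for u in units) <= 2                  # capacity-style limit

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({u: int(y[u].value()) for u in units})
```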

Keywords: Mixed-integer linear programming (MILP), petroleum refinery, process synthesis, superstructure.

953 Creative Teaching of New Product Development to Operations Managers

Authors: Marco Leite, J. M. Vilas-Boas da Silva, Isabel Duarte de Almeida

Abstract:

New Product Development (NPD) has its roots in an engineering background. Thus, one might wonder about the interest, opportunity, content and delivery process when students from the soft sciences are involved. This paper addressed «What to teach?» and «How to do it?» as the preliminary research questions that originated the introduced propositions. The curriculum-developer model that was purposefully chosen to adapt the coursebook by pursuing macro/micro strategies was found significant by an exploratory qualitative case study. Moreover, learning was developed and value created by implementing the institutional curriculum through a creative, hands-on, experiential, problem-solving, problem-based but organized teamwork approach. The product design of an orange squeezer complying with ill-defined requirements, including drafts, sketches, prototypes, CAD simulations and a business plan, plus a website, written reports and presentations, were the deliverables that confirmed an innovative contribution towards research and practice in the teaching and learning of engineering subjects to non-specialist operations manager candidates.

Keywords: Teaching Engineering to Non-specialists, Operations Managers Education, Teamwork, Product Design and Development, Market-driven NPD, Curriculum development.

952 Gabriel-constrained Parametric Surface Triangulation

Authors: Oscar E. Ruiz, Carlos Cadavid, Juan G. Lalinde, Ricardo Serrano, Guillermo Peris-Fajarnes

Abstract:

The Boundary Representation of a 3D manifold contains FACES (connected subsets of a parametric surface S: R2 → R3). In many science and engineering applications it is cumbersome and algebraically difficult to deal with the polynomial set and constraints (LOOPs) representing the FACE. For this reason, a Piecewise Linear (PL) approximation of the FACE is needed, which is usually represented in terms of triangles (i.e. 2-simplices). Solving the problem of FACE triangulation requires producing quality triangles which are: (i) independent of the arguments of S, (ii) sensitive to the local curvatures, (iii) compliant with the boundaries of the FACE, and (iv) topologically compatible with the triangles of the neighboring FACEs. In the existing literature there are no guarantees for point (iii). This article contributes to the topic of triangulations conforming to the boundaries of the FACE by applying the concept of a parameter-independent Gabriel complex, which improves the correctness of the triangulation regarding aspects (iii) and (iv). In addition, the article applies the geometric concept of the tangent ball to a surface at a point to address points (i) and (ii). Additional research is needed in algorithms that (i) take advantage of the concepts presented in the heuristic algorithm proposed and (ii) can be proved correct.

Keywords: surface triangulation, conforming triangulation, surface sampling, Gabriel complex.

951 Local Spectrum Feature Extraction for Face Recognition

Authors: Muhammad Imran Ahmad, Ruzelita Ngadiran, Mohd Nazrin Md Isa, Nor Ashidi Mat Isa, Mohd Zaizu Ilyas, Raja Abdullah Raja Ahmad, Said Amirul Anwar Ab Hamid, Muzammil Jusoh

Abstract:

This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using a GMM, to capture the underlying statistical information and improve the performance of a face recognition system. Local spectrum features are extracted using overlapping sub-block windows mapped onto the face image. For each block, the spatial domain is transformed to the frequency domain using the DFT. Low-frequency coefficients are preserved, and high-frequency coefficients discarded, by applying a rectangular mask on the spectrum of the facial image. The low-frequency information is non-Gaussian in the feature space, and by using a combination of several Gaussian functions with different statistical properties, the best feature representation can be modelled as a probability density function. The recognition process is performed using the maximum likelihood value computed from pre-calculated GMM components. The method is tested using the FERET datasets and achieves a 92% recognition rate.
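A minimal sketch of the feature-extraction and GMM-scoring pipeline described here, on synthetic images; the block size, mask size and number of mixture components are assumptions, not the paper's settings:

```python
# Illustrative sketch: low-frequency DFT features from overlapping blocks,
# modelled per identity with a Gaussian mixture and scored by log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def local_spectrum_features(img, block=16, step=8, keep=4):
    feats = []
    for r in range(0, img.shape[0] - block + 1, step):
        for c in range(0, img.shape[1] - block + 1, step):
            spec = np.fft.fft2(img[r:r + block, c:c + block])
            low = np.abs(spec[:keep, :keep])      # rectangular low-frequency mask
            feats.append(low.ravel())
    return np.array(feats)

rng = np.random.default_rng(0)
face_a, face_b, probe = (rng.random((64, 64)) for _ in range(3))

models = {name: GaussianMixture(n_components=3, covariance_type="diag",
                                random_state=0).fit(local_spectrum_features(img))
          for name, img in [("A", face_a), ("B", face_b)]}

scores = {name: m.score(local_spectrum_features(probe)) for name, m in models.items()}
print(max(scores, key=scores.get), scores)   # identity with the highest likelihood
```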

Keywords: Local features modelling, face recognition system, Gaussian mixture models.

950 A Formative Assessment Tool for Effective Feedback

Authors: Rami Rashkovits, Ilana Lavy

Abstract:

In this study we present our formative assessment tool for students' assignments. The tool enables lecturers to define assignments for the course and to assign each problem in each assignment a list of criteria and weights by which the students' work is evaluated. During assessment, the lecturers enter the scores for each criterion with justifications. When the scores of the current assignment are completely entered, the tool automatically generates reports for both students and lecturers. The students receive a report by email including a detailed description of their assessed work, their relative score and their progress across the criteria along the course timeline. This information is presented via charts generated automatically by the tool based on the entered scores. The lecturers receive a report that includes summative (e.g., averages, standard deviations) and detailed (e.g., histogram) data of the current assignment. This information enables the lecturers to follow the class achievements and adjust the learning process accordingly. The tool was examined on two pilot groups of college students studying courses in (1) Object-Oriented Programming and (2) Plane Geometry. Results reveal that most of the students were satisfied with the assessment process and the reports produced by the tool. The lecturers who used the tool were also satisfied with the reports and their contribution to the learning process.
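A minimal sketch of the scoring logic such a tool automates, with hypothetical criteria, weights and submissions:

```python
# Illustrative sketch (hypothetical criteria and data): weighted criterion
# scores per problem, averaged per assignment, plus class statistics.
import statistics

criteria = {"correctness": 0.5, "design": 0.3, "documentation": 0.2}

def assignment_score(per_problem_scores):
    """per_problem_scores: list of dicts criterion -> score in [0, 100]."""
    totals = [sum(criteria[c] * s[c] for c in criteria) for s in per_problem_scores]
    return sum(totals) / len(totals)

submissions = {
    "alice": [{"correctness": 90, "design": 80, "documentation": 70}],
    "bob":   [{"correctness": 60, "design": 75, "documentation": 85}],
}
scores = {student: assignment_score(p) for student, p in submissions.items()}
print(scores)
print("class mean:", statistics.mean(scores.values()),
      "stdev:", statistics.stdev(scores.values()))
```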

Keywords: Computer-based formative assessment tool, science education.

949 Association of Sensory Processing and Cognitive Deficits in Children with Autism Spectrum Disorders – Pioneer Study in Saudi Arabia

Authors: Rana M. Zeina, Laila AL-Ayadhi, Shahid Bashir

Abstract:

The association between sensory problems and cognitive abilities has been studied in individuals with Autism Spectrum Disorders (ASDs). In this study, we used a neuropsychological test to evaluate memory and attention in ASD children with sensory problems compared to ASD children without sensory problems. Four visual memory tests of the Cambridge Neuropsychological Test Automated Battery (CANTAB), including Big/Little Circle (BLC), Simple Reaction Time (SRT), Intra/Extra Dimensional Set Shift (IED), and Spatial Recognition Memory (SRM), were administered to 14 ASD children with sensory problems and 13 ASD children without sensory problems, aged 3 to 12 with IQs above 70. ASD individuals with sensory problems performed worse than the ASD group without sensory problems on comprehension, learning, reversal and simple reaction time tasks, and no significant difference between the two groups was recorded in the visual memory and visual comprehension tasks. The findings of this study suggest that ASD children with sensory problems face deficits in learning, comprehension, reversal, and speed of response to a stimulus.

Keywords: Visual memory, Attention, Autism Spectrum Disorders (ASDs).

948 Alignment between Understanding and Assessment Practice among Secondary School Teachers

Authors: Eftah Bte. Moh @ Hj Abdullah, Izazol Binti Idris, Abd Aziz Bin Abd Shukor

Abstract:

This study aimed to identify the alignment of understanding and assessment practices among secondary school teachers. The study was carried out as a quantitative descriptive study. The sample consisted of 164 teachers who taught Form 1 and 2 in 11 secondary schools in the district of North Kinta, Perak, Malaysia. Data were obtained from the 164 respondents who answered the Expectation Alignment Understanding and Practices of School Assessment (PEKDAPS) questionnaire. The data were analysed using SPSS 17.0+. The Cronbach's alpha value obtained in the PEKDAPS questionnaire pilot study was 0.86. The results showed that teachers' performance in PEKDAPS, based on the mean value, was less than 3, which means that perfect alignment does not occur between the understanding and practices of school assessment. Two major PEKDAPS sub-constructs, articulation across grade and age and usability of the system, were higher than the moderate alignment of the understanding and practices of school assessment (Min=2.0). The content-focused PEKDAPS sub-construct showed lower than moderate alignment of the understanding and practices of school assessment (Min=2.0). Another two PEKDAPS sub-constructs, transparency and fairness and the pedagogical implications, showed moderate alignment (2.0). The implication of the study is that teachers need to fully understand the importance of alignment among the components of assessment, learning and teaching, and learning objectives as strategies to achieve a quality assessment process.

Keywords: Alignment, assessment practices, School Based Assessment, understanding.

947 Competitiveness and Value Creation of Tourism Sector: In the Case of 10 ASEAN Economies

Authors: Apirada Chinprateep

Abstract:

The ASEAN Economic Community (AEC) is the goal of regional economic integration by 2015. In the region, tourism is an important activity, especially as a source of foreign currency, a source of employment creation and a source of income for the region. Given the complexity of the issues entailed in the concept of sustainable tourism, this paper tries to assess tourism sustainability within ASEAN, based on a number of quantitative indicators for all ten economies: Thailand, Myanmar, Laos, Vietnam, Malaysia, Singapore, Indonesia, the Philippines, Cambodia, and Brunei. The methodological framework provides a number of benchmarks of tourism activities in these countries. They include identification of the dimensions (for example, economic, socio-ecological and infrastructure) and indicators, the method of scaling, chart representation and evaluation of the Asian countries. This specification shows that a similar level of tourism activity might be implemented differently and might have different consequences for the socio-ecological environment and sustainability. The heterogeneity of the developing countries exposed briefly here would be useful for detecting and preparing to cope with the main problems of each country in its tourism activities, as well as the competitiveness and value creation of tourism for the ASEAN Economic Community, and for comparison with other parts of the world.

Keywords: AEC, ASEAN, sustainable, tourism, competitiveness.

946 Photogrammetry and GIS Integration for Archaeological Documentation of Ahl-Alkahf, Jordan

Authors: Rami Al-Ruzouq, Abdallah Al-Zoubi, Abdel-Rahman Abueladas, Petya Dimitrova

Abstract:

Protection and proper management of archaeological heritage are essential to studying and interpreting it for present and future generations. Protecting the archaeological heritage is based upon multidisciplinary professional collaboration. This study aims to gather data from different sources (photogrammetry and a Geographic Information System (GIS)), integrated for the purpose of documenting one of the significant archaeological sites (Ahl-Alkahf, Jordan). 3D modeling deals with the actual image of the features, shapes and texture to represent reality as realistically as possible. The 3D coordinates that result from the photogrammetric adjustment procedures are used to create 3D models of the study area. Adding textures to the 3D model surfaces gives a 'real world' appearance to the displayed models. The GIS combined all the data, including boundary maps indicating the location of archaeological sites, a transportation layer, a digital elevation model and orthoimages. For a realistic representation of the study area, a 3D GIS model was prepared, in which efficient generation, management and visualization of such spatial data can be achieved.

Keywords: Archaeology, close range photogrammetry, ortho-photo, 3D-GIS

945 Implementation of State-Space and Super-Element Techniques for the Modeling and Control of Smart Structures with Damping Characteristics

Authors: Nader Ghareeb, Rüdiger Schmidt

Abstract:

Minimizing the weight of flexible structures means reducing material and costs as well. However, these structures can become prone to vibrations. Attenuating these vibrations has become a pivotal engineering problem that has shifted the focus of many research endeavors. One technique to do that is to design and implement an active control system. Such a system is mainly composed of a vibrating structure, a sensor to perceive the vibrations, an actuator to counteract the influence of disturbances, and finally a controller to generate the appropriate control signals. In this work, two different techniques are explored to create two different mathematical models of an active control system. The first model is a finite element model with a reduced number of nodes, called a super-element. The second model is in the form of a state-space representation, i.e. a set of first-order ordinary differential equations. The damping coefficients are calculated and incorporated into both models. The effectiveness of these models is demonstrated when the system is excited at its first natural frequency and an active control strategy is developed and implemented to attenuate the resulting vibrations. Results from both modeling techniques are presented and compared.
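A minimal sketch of a state-space model of a damped single-mode structure excited at its natural frequency, using scipy; the modal parameters below are assumed values, not those of the structure studied in the paper:

```python
# Illustrative sketch (assumed modal parameters): single-mode state-space
# model with damping, excited at its natural frequency.
import numpy as np
from scipy import signal

wn = 2 * np.pi * 5.0     # natural frequency [rad/s] (assumed: 5 Hz)
zeta = 0.02              # damping ratio (assumed)

# State x = [displacement, velocity]; input u = force per unit modal mass.
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])   # measure displacement
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t = np.linspace(0, 5, 2000)
u = np.sin(wn * t)                     # excitation at the first natural frequency
_, y, _ = signal.lsim(sys, U=u, T=t)
print("peak displacement:", float(np.max(np.abs(y))))
```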

Keywords: Finite element analysis, super-element, state-space model.

944 Educational and Technological Perspectives in Doraemon - Hope and Dreams in Doraemon’s Gadgets

Authors: Miho Tsukamoto

Abstract:

The Japanese manga character Doraemon was created by Fujiko F. Fujio in 1969 and was made into an animation in 1973. The main character, Doraemon, is a robot cat and a well-known Japanese animated character. However, Doraemon is not only regarded as an animation character; it is also used in educational and technological programs in Japan. This paper focuses on the background of Doraemon, educational and technological perspectives on Doraemon, and a comparison of the original Japanese animation and the US remade version, and it examines the animator Fujiko's dreams and hopes for Doraemon. Since Doraemon has been exported overseas as animation and manga, perspectives toward Doraemon have changed. For example, changes to stories and characters can be seen in the present Doraemon animation. Not only the overseas TV productions which broadcast Doraemon but also the Japanese production has to consider violence, sexuality, etc. when editing episodes. Because of the representation of cultural differences, Japanese animation is thought to contain more violence, discrimination, and sexuality. In response to reactions from overseas, the Japanese production was cautious about the US remade version: it took care over the US Broadcast Standards and tried to consider US customs and culture in the US remade version. These differences show that acculturation is necessary when exporting animation overseas. Moreover, observed in its different aspects domestically, Doraemon provides dreams and hopes to children.

Keywords: Animation, Change, Doraemon, Gadgets, Manga, Technology.

943 Evaluation of a Hybrid Knowledge-Based System Using Fuzzy Approach

Authors: Kamalendu Pal

Abstract:

This paper describes the main features of a knowledge-based system evaluation method. System evaluation is placed in the context of a hybrid legal decision-support system, Advisory Support for Home Settlement in Divorce (ASHSD). Legal knowledge for ASHSD is represented in two forms, as rules and as previously decided cases. Besides distinguishing the two different forms of knowledge representation, the paper outlines the actual use of these forms in a computational framework that is designed to generate a plausible solution for a given case by using rule-based reasoning (RBR) and case-based reasoning (CBR) in an integrated environment. The suitability assessment of a solution has been treated as a multiple criteria decision-making process in the ASHSD evaluation. The evaluation was performed by a combination of discussions and questionnaires with different user groups. The answers to the questionnaires used in this evaluation method were measured as fuzzy linguistic terms. The findings suggest that fuzzy linguistic evaluation is practical and meaningful for knowledge-based system development.

Keywords: Case-based reasoning, decision-support system, fuzzy linguistic term, rule-based reasoning, system evaluation.

942 Detecting Email Forgery using Random Forests and Naïve Bayes Classifiers

Authors: Emad E. Abdallah, A. F. Otoom, Arwa Saqer, Ola Abu-Aisheh, Diana Omari, Ghadeer Salem

Abstract:

As email communications have no consistent authentication procedure to ensure authenticity, we present an investigative analysis approach for detecting forged emails based on Random Forests and Naïve Bayes classifiers. Instead of investigating the email headers, we use the body content to extract a unique writing style for each of the possible suspects. Our approach consists of four main steps: (1) the cybercrime investigator extracts different effective features, including structural, lexical, linguistic, and syntactic evidence, from previous emails of all the possible suspects; (2) the extracted feature vectors are normalized to increase the accuracy rate; (3) the normalized features are then used to train the learning engine; (4) upon receiving the anonymous email (M), we apply the feature extraction process to produce a feature vector. Finally, using the machine learning classifiers, the email is assigned to the suspect whose writing style most closely matches M. Experimental results on real data sets show the improved performance of the proposed method and the ability to identify the authors with a very limited number of features.
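A minimal sketch of the classification step on a toy corpus, using character n-gram TF-IDF features as a stand-in for the structural, lexical, linguistic, and syntactic feature set described above:

```python
# Illustrative sketch (toy corpus): stylometric attribution of an anonymous
# email to a known suspect with Random Forests and Naïve Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB

known_emails = [
    "Hey, pls find attached the report, thanks!!",
    "please see attached, regards",
    "Dear Sir, kindly note the attached document. Best regards,",
    "Kindly find the file attached herewith. Yours faithfully,",
]
authors = ["suspect_1", "suspect_1", "suspect_2", "suspect_2"]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform(known_emails)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, authors)
nb = MultinomialNB().fit(X, authors)

anonymous = ["kindly see the attached file, best regards"]
M = vec.transform(anonymous)
print("Random Forest:", rf.predict(M)[0], "| Naive Bayes:", nb.predict(M)[0])
```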

Keywords: Digital investigation, cybercrimes, email forensics, anonymous emails, writing style, authorship analysis.

941 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Abstract:

With the increase of credit card usage, the volume of credit card misuse has also significantly increased, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static characteristics, such as what the user knows (knowledge-based methods) or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while he/she is interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they do), which cannot be easily forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable and scalable models of credit card fraud detection.

Keywords: credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey

940 Granularity Analysis for Spatio-Temporal Web Sensors

Authors: Shun Hattori

Abstract:

In recent years, much research has been done to mine the exploding Web world, especially User Generated Content (UGC) such as weblogs, for knowledge about various phenomena and events in the physical world, and Web services built on such Web-mined knowledge have begun to be developed for the public. However, there are few detailed investigations of how accurately Web-mined data reflect physical-world data. It is problematic to utilize Web-mined data uncritically in public Web services without sufficiently ensuring their accuracy. Therefore, this paper introduces the simplest Web Sensor and a spatiotemporally normalized Web Sensor to extract spatiotemporal data about a target phenomenon from weblogs searched by keyword(s) representing the target phenomenon, and tries to validate the potential and reliability of the Web-sensed spatiotemporal data by four kinds of granularity analyses of the correlation coefficient with temperature, rainfall, snowfall, and earthquake statistics per day and region from the Japan Meteorological Agency as physical-world data: spatial granularity (a region's population density), temporal granularity (time period, e.g., per day vs. per week), representation granularity (e.g., "rain" vs. "heavy rain"), and media granularity (weblogs vs. microblogs such as Tweets).
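A minimal sketch of a simple and a normalized Web sensor and their correlation with a physical-world statistic, on synthetic counts; the data below are invented for illustration:

```python
# Illustrative sketch (synthetic counts): a raw and a normalized Web sensor
# value per day, correlated with a physical-world statistic.
import numpy as np

days = 30
rainfall_mm = np.random.gamma(shape=2.0, scale=3.0, size=days)         # stand-in statistics
blog_posts_matching_rain = np.random.poisson(lam=5 + 2 * rainfall_mm)  # keyword hits per day
total_blog_posts = np.random.poisson(lam=1000, size=days)

# Simplest Web sensor: raw keyword-hit count per day.
raw_sensor = blog_posts_matching_rain
# Normalized Web sensor: hits divided by overall posting activity that day.
normalized_sensor = blog_posts_matching_rain / total_blog_posts

for name, sensor in [("raw", raw_sensor), ("normalized", normalized_sensor)]:
    r = np.corrcoef(sensor, rainfall_mm)[0, 1]
    print(f"{name} Web sensor vs. rainfall: correlation coefficient = {r:.2f}")
```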

Keywords: Granularity analysis, knowledge extraction, spatiotemporal data mining, Web credibility, Web mining, Web sensor.

939 Lean Manufacturing: Systematic Layout Planning Application to an Assembly Line Layout of a Welding Industry

Authors: Fernando Augusto Ullmann Tobe, Moacyr Amaral Domingues Figueiredo, Stephany Rie Yamamoto Gushiken

Abstract:

The purpose of this paper is to present the process of elaborating the layout of an assembly line in a welding industry using the principles of lean manufacturing as the main driver. The topic is relevant since the current layout of the assembly line causes non-productive times for operators, related to the lean waste of unnecessary movement. The methodology used for the project development was Project-based Learning (PBL), which is an active way of learning focused on real problems. The methodology for layout planning was selected by considering three criteria for evaluating the most relevant one for this paper's goal. As a result of this evaluation, Systematic Layout Planning was selected, and three steps were added to it: Value Stream Mapping of the current situation and of the situation after the layout change, and the definition of lean tools and layout type. This inclusion was made to incorporate lean manufacturing into the layout redesign of the industry. The layout change resulted in an increase in the value-adding time of operations carried out in the sector, a reduction in movement times between the previous and final assemblies, and cost savings regarding the man-hour value of the employees, which can be invested in productive hours instead of movement times.

Keywords: Assembly line, layout, lean manufacturing, systematic layout planning.

938 Dynamic Modeling of Underplatform Damper Used in Turbomachinery

Authors: Vikas Rastogi, Vipan Kumar, Loveleen Kumar Bhagi

Abstract:

The present work deals with the structural analysis and modeling of turbine blades. A common failure mode for turbomachines is high cycle fatigue of compressor and turbine blades due to high dynamic stresses caused by blade vibration and resonance within the operating range of the machinery. In this work, a proper damping system is analyzed to reduce blade vibration. The main focus of the work is the modeling of the underplatform damper to evaluate the dynamic analysis of turbine-blade vibrations. The system is analyzed using the bond graph technique. The bond graph is one of the most convenient ways to represent a system with the physical aspects in the foreground. It has the advantage of putting together the multi-energy domains of a system in a single, unified representation. The bond graph model of the dry friction damper is simulated in SYMBOLS-shakti® software. In this work, the blades are modeled as Timoshenko beams. Blade vibrations under different working conditions are analyzed numerically.

Keywords: Turbine blade vibrations, Friction dampers, Timoshenko Beam, Bond graph modeling.

937 Learning Mandarin Chinese as a Foreign Language in a Bilingual Context: Adult Learners’ Perceptions of the Use of L1 Maltese and L2 English in Mandarin Chinese Lessons in Malta

Authors: Christiana Gauci-Sciberras

Abstract:

The first language (L1) can be used in foreign language teaching and learning as a pedagogical tool to scaffold new knowledge in the target language (TL) upon linguistic knowledge that the learner already has. In a bilingual context, code-switching between the two languages usually occurs in classrooms. One of the reasons for code-switching is that both languages are used for scaffolding new knowledge. This research paper aims to find out why both the L1 (Maltese) and the L2 (English) are used in the classroom of Mandarin Chinese as a foreign language (CFL) in the bilingual context of Malta. This research paper also aims to find out the learners' perceptions of the use of a bilingual medium of instruction. Two research methods were used to collect qualitative data: semi-structured interviews with adult learners of Mandarin Chinese and lesson observations. These two research methods were used so that the data collected in the interviews could be triangulated with the data collected in lesson observations. The L1 (Maltese) is the language of instruction most often used. The teacher and the learners switch to the L2 (English), or to any other foreign language, according to the need at a particular instance during the lesson.

Keywords: Chinese, bilingual, pedagogical purpose of L1 and L2, CFL acquisition.

936 Multi-Enterprise Tie and Co-Operation Mechanism in Mexican Agro Industry SME's

Authors: Tania Elena González Alvarado, Ma. Antonieta Martín Granados

Abstract:

The aim of this paper is to explain what a multi-enterprise tie is, what evidence its analysis provides, and how the cooperation mechanism influences the establishment of a multi-enterprise tie. The study focuses on businesses of smaller dimension that are geographically dispersed and whose businessmen are learning to cooperate in an international environment. The empirical evidence obtained so far permits the following conclusions: the tie is not long-lasting, it has an end; opportunism is an opportunity to learn; the multi-enterprise tie is a space in which to learn about the cooperation mechanism; the local tie permits a businessman to alternate between competition and cooperation strategies; the disappearance of a tie is a learning experience for a businessman, diminishing the possibility of failure in the next tie; the cooperation mechanism tends to eliminate hierarchical relations; the multi-enterprise tie diminishes asymmetries and puts SMEs in a better position when they negotiate with large companies; and the multi-enterprise tie has a positive impact on the local system. The collection of empirical evidence was done through the following instruments: direct observation at a business encounter attended by the businesses in 2003 (202 Mexican agro-industry SMEs), a survey applied in 2004 (129), a questionnaire applied in 2005 (86 businesses), field visits to the businesses during the period 2006-2008, and a telephone survey in 2008 (55 Mexican agro-industry SMEs).

Keywords: Cooperation, multi-enterprise tie, links, networks.

935 System Identification with General Dynamic Neural Networks and Network Pruning

Authors: Christian Endisch, Christoph Hackl, Dierk Schröder

Abstract:

This paper presents an exact pruning algorithm with an adaptive pruning interval for general dynamic neural networks (GDNN). GDNNs are artificial neural networks with internal dynamics. All layers have feedback connections with time delays to the same and to all other layers. The structure of the plant is unknown, so the identification process is started with a larger network architecture than necessary. During parameter optimization with the Levenberg-Marquardt (LM) algorithm, irrelevant weights of the dynamic neural network are deleted in order to find a model of the plant that is as simple as possible. The weights to be pruned are found by direct evaluation of the training data within a sliding time window. The influence of pruning on the identification system depends on the network architecture at pruning time and the selected weight to be deleted. As the architecture of the model changes drastically during the identification and pruning process, it is suggested to adapt the pruning interval online. Two system identification examples show the architecture selection ability of the proposed pruning approach.
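A hedged sketch of the pruning decision only (not of the GDNN itself or the Levenberg-Marquardt training): each remaining weight is zeroed in turn, the loss over a sliding window of training data is re-evaluated, and the least harmful weight is deleted. The toy linear model below merely stands in for the network:

```python
# Hedged sketch of a direct-evaluation pruning step over a sliding data window.
import numpy as np

def prune_one_weight(weights, loss_fn, window_data, tolerance=1e-3):
    """weights: 1-D array of trainable parameters.
    loss_fn(weights, data): training loss over the given data window."""
    base = loss_fn(weights, window_data)
    best_idx, best_increase = None, np.inf
    for i in np.flatnonzero(weights):          # only weights not yet pruned
        trial = weights.copy()
        trial[i] = 0.0
        increase = loss_fn(trial, window_data) - base
        if increase < best_increase:
            best_idx, best_increase = i, increase
    if best_idx is not None and best_increase < tolerance:
        weights = weights.copy()
        weights[best_idx] = 0.0                # delete the irrelevant weight
    return weights, best_idx, best_increase

# Toy linear model standing in for the network: loss = mean squared error.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
true_w = np.array([2.0, 0.0, -1.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=50)
loss = lambda w, d: float(np.mean((d[0] @ w - d[1]) ** 2))
w = np.array([2.0, 0.05, -1.0, -0.02])
print(prune_one_weight(w, loss, (X, y)))
```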

Keywords: System identification, dynamic neural network, recurrent neural network, GDNN, optimization, Levenberg-Marquardt, real-time recurrent learning, network pruning, quasi-online learning.
