Search results for: Data communication
6873 Using Data Mining in Automotive Safety
Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler
Abstract:
Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupant in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance. Sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on the data emerging from sled tests according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts' opinions. A procedure is proposed to manage missing data and validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, aiming at reducing the number of tests.
Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact
6872 A Specification-Based Approach for Retrieval of Reusable Business Component for Software Reuse
Authors: Meng Fanchao, Zhan Dechen, Xu Xiaofei
Abstract:
Software reuse can be considered as the most realistic and promising way to improve software engineering productivity and quality. Automated assistance for software reuse involves the representation, classification, retrieval and adaptation of components. The representation and retrieval of components are important to software reuse in Component-Based Software Development (CBSD). However, current industrial component models mainly focus on implementation techniques and ignore the semantic information about components, so it is difficult to retrieve the components that satisfy users' requirements. This paper presents a method of business component retrieval based on specification matching to support software reuse in enterprise information systems. First, a business component model oriented to reuse is proposed. In our model, the business data type is represented as a sign data type based on XML, which can express the variable business data types that describe the variety of business operations. Based on this model, we propose specification match relationships at two levels: the business operation level and the business component level. At the business operation level, we use input business data types, output business data types and the taxonomy of business operations to evaluate the similarity between business operations. At the business component level, we propose five specification matches between business components. To retrieve reusable business components, we propose a measure of similarity degrees to calculate the similarities between business components. Finally, an SQL-like business component retrieval command is proposed to help users retrieve approximate business components from the component repository.
Keywords: Business component, business operation, business data type, specification matching.
6871 Fuzzy Multi-Component DEA with Shared and Undesirable Fuzzy Resources
Authors: Jolly Puri, Shiv Prasad Yadav
Abstract:
Multi-component data envelopment analysis (MC-DEA) is a popular technique for measuring the aggregate performance of decision making units (DMUs) along with their components. However, the conventional MC-DEA is limited to crisp input and output data, which may not always be available in exact form. In real life problems, data may be imprecise or fuzzy. Therefore, in this paper, we propose (i) a fuzzy MC-DEA (FMC-DEA) model in which shared and undesirable fuzzy resources are incorporated, (ii) a transformation of the proposed FMC-DEA model into a pair of crisp models using the α-cut approach, (iii) definitions of the fuzzy aggregate performance of a DMU and the fuzzy efficiencies of components as fuzzy numbers, and (iv) a numerical example to validate the proposed approach.
Keywords: Multi-component DEA, fuzzy multi-component DEA, fuzzy resources.
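The α-cut transformation in (ii) is not spelled out in the abstract; as a reminder of the standard construction (an assumption about the usual form, not necessarily the authors' exact model), the α-cut of a triangular fuzzy datum $\tilde{x} = (x^L, x^M, x^U)$ is the interval

```latex
(\tilde{x})_{\alpha} = \big[\, x^{L} + \alpha\,(x^{M} - x^{L}),\;\; x^{U} - \alpha\,(x^{U} - x^{M}) \,\big], \qquad \alpha \in [0,1],
```

so that, for each α, the fuzzy program can be solved as a pair of crisp programs evaluated at the lower and upper bounds of every such interval.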
6870 Finding an Optimized Discriminate Function for Internet Application Recognition
Authors: E. Khorram, S.M. Mirzababaei
Abstract:
Every day the usage of the Internet increases and, quite simply, a world of data becomes accessible. Network providers do not want the services they provide to be used for harmful or terrorist affairs, so they use a variety of methods to protect special regions from harmful data. One of the most important of these methods is the firewall. A firewall stops the transfer of such packets in several ways, but in some cases it is not used because of its blind packet blocking, the high processing power it needs and its expensive price. Here we propose a method to find a discriminate function that distinguishes between usual packets and harmful ones by statistical processing of the network router logs, so that an administrator can alert the user. This method is very fast and can easily be deployed alongside Internet routers.
Keywords: Data Mining, Firewall, Optimization, Packet classification, Statistical Pattern Recognition.
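A minimal sketch of the idea of learning a discriminant function from labelled router-log records, assuming numeric per-packet features and a binary usual/harmful label; Fisher's linear discriminant from scikit-learn stands in for the statistical processing described in the abstract, and the feature names below are hypothetical.

```python
# Hedged sketch: learn a linear discriminant function from labelled router-log
# features and use it to flag suspicious packets. Feature names are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Toy stand-in for router-log features: [packet_size, dst_port, inter_arrival_ms]
usual = rng.normal([500, 80, 20], [150, 5, 8], size=(200, 3))
harmful = rng.normal([60, 6667, 2], [20, 100, 1], size=(40, 3))
X = np.vstack([usual, harmful])
y = np.array([0] * len(usual) + [1] * len(harmful))   # 1 = harmful

lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant weights:", lda.coef_, "intercept:", lda.intercept_)

new_packets = np.array([[520, 80, 18], [55, 6667, 1.5]])
print("alerts:", lda.predict(new_packets))            # a 1 would trigger an administrator alert
```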
6869 Adding Edges between One Node and Every Other Node with the Same Depth in a Complete K-ary Tree
Authors: Kiyoshi Sawada, Takashi Mitsuishi
Abstract:
This paper proposes a model of adding relations between members of the same level in a pyramid organization structure, which is a complete K-ary tree, such that the communication of information between every member in the organization becomes the most efficient. When edges between one node and every other node with the same depth N in a complete K-ary tree of height H are added, an optimal depth N* = H is obtained by minimizing the total path length, which is the sum of the lengths of the shortest paths between every pair of nodes.
Keywords: complete K-ary tree, organization structure, shortest path
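A small sketch of the quantity being minimized, assuming the networkx package: build a complete K-ary tree, add a clique among all nodes at a chosen depth N, and compute the total path length so that different N can be compared against the reported optimum N* = H.

```python
# Hedged sketch: total shortest-path length of a complete K-ary tree after
# adding edges among all nodes at depth N (assumes the networkx package).
import networkx as nx
from itertools import combinations

def total_path_length(K, H, N):
    G = nx.balanced_tree(K, H)                      # complete K-ary tree of height H, root = 0
    depth = nx.shortest_path_length(G, source=0)    # node -> depth
    same_depth = [v for v, d in depth.items() if d == N]
    G.add_edges_from(combinations(same_depth, 2))   # connect every pair of nodes at depth N
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    return sum(lengths[u][v] for u in G for v in G if u < v)

K, H = 3, 4
for N in range(1, H + 1):
    print(N, total_path_length(K, H, N))            # expect the minimum at N = H
```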
6868 Semantic Spatial Objects Data Structure for Spatial Access Method
Authors: Kalum Priyanath Udagepola, Zuo Decheng, Wu Zhibo, Yang Xiaozong
Abstract:
Modern spatial database management systems require a unique Spatial Access Method (SAM) in order to solve complex spatial queries efficiently. In this case the spatial data structure takes a prominent place in the SAM. An inadequate data structure leads to poor algorithmic choices and a deficient understanding of algorithm behavior on the spatial database. A key step in developing a better semantic spatial object data structure is to quantify the performance effects of the semantic and outlier detections that are not reflected in previous tree structures (the R-Tree and its variants). This paper explores a novel SSRO-Tree, a SAM based on the topo-semantic approach. The paper shows how to identify and handle semantic spatial objects with outlier objects during page overflow/underflow, using gain/loss metrics. We introduce a new SSRO-Tree algorithm which achieves better performance in practice on selection queries than the R*-Tree and RO-Tree algorithms.
Keywords: Outlier, semantic spatial object, spatial objects, SSRO-Tree, topo-semantic.
6867 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well-structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature, found in the form of discharge summaries, clinical notes and procedural notes, which are written in narrative form and have neither a relational model nor a standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, which is a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: Information retrieval (IR), unified medical language system (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.
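A toy illustration of the kind of curated-dictionary string matching described above (not the authors' Q-Map implementation): a small index of surface forms mapped to concept identifiers is scanned against free-text clinical notes. The terms and codes shown are hypothetical.

```python
# Hedged sketch of dictionary-based concept extraction from free clinical text.
# The concept index and codes below are illustrative only.
import re

CONCEPT_INDEX = {
    "myocardial infarction": "C0027051",
    "heart attack":          "C0027051",
    "diabetes mellitus":     "C0011849",
    "hypertension":          "C0020538",
}

def extract_concepts(note: str):
    """Return (surface form, concept id, span) for every indexed term found in the note."""
    hits = []
    lowered = note.lower()
    for term, cui in CONCEPT_INDEX.items():
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", lowered):
            hits.append((term, cui, m.span()))
    return hits

note = "Pt with history of heart attack and poorly controlled hypertension."
print(extract_concepts(note))
```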
6866 Generalized Method for Estimating Best-Fit Vertical Alignments for Profile Data
Authors: Said M. Easa, Shinya Kikuchi
Abstract:
When the profile information of an existing road is missing or not up-to-date and the parameters of the vertical alignment are needed for engineering analysis, the engineer has to recreate the geometric design features of the road alignment using collected profile data. The profile data may be collected using traditional surveying methods, global positioning systems, or digital imagery. This paper develops a method that estimates the parameters of the geometric features that best characterize the existing vertical alignment in terms of tangents and curve expressions, where the curves may be symmetrical, asymmetrical, reverse, or complex vertical curves. The method is implemented using an Excel-based optimization that minimizes the differences between the observed profile and the profiles estimated from the equations of the vertical curve. The method uses a 'wireframe' representation of the profile that makes it applicable to all types of vertical curves. A secondary contribution of this paper is to introduce the properties of the equal-arc asymmetrical curve that has recently been developed in the highway geometric design field.
Keywords: Optimization, parameters, data, reverse, spreadsheet, vertical curves
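As an illustration of the least-difference fitting idea (using SciPy here rather than the paper's Excel-based optimization), the sketch below fits a single symmetrical parabolic vertical curve, elevation(x) = y0 + g1·x + (g2 − g1)·x²/(2L), to noisy profile points; the parameterization is a standard one and the curve length L is treated as known for simplicity, both of which are assumptions rather than details taken from the paper.

```python
# Hedged sketch: least-squares fit of a symmetrical parabolic vertical curve
# to observed profile data (stations x, elevations y). Uses SciPy, not Excel.
import numpy as np
from scipy.optimize import least_squares

def curve_elev(params, x, L):
    y0, g1, g2 = params                          # start elevation, entry grade, exit grade
    return y0 + g1 * x + (g2 - g1) * x**2 / (2.0 * L)

def residuals(params, x, y_obs, L):
    return curve_elev(params, x, L) - y_obs

L = 200.0                                        # curve length taken as known here
x = np.linspace(0.0, L, 41)
true = np.array([100.0, 0.03, -0.02])            # synthetic "true" y0, g1, g2
y_obs = curve_elev(true, x, L) + np.random.default_rng(1).normal(0, 0.05, x.size)

fit = least_squares(residuals, x0=[90.0, 0.0, 0.0], args=(x, y_obs, L))
print("estimated y0, g1, g2:", fit.x)
```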
6865 Direction to Manage OTOP Entrepreneurship Based on Local Wisdom
Authors: Witthaya Mekhum
Abstract:
OTOP entrepreneurship, which used to create a substantial source of income for local Thai communities, is now in a state of exigency that requires assistance from the public sector, owing to an oversupply of duplicative ideas, an inability to adjust costs and prices, a lack of innovation, and inadequate quality control. Moreover, there is a repetitive problem of middlemen who constantly corner the OTOP market. Local OTOP producers become easy prey since they do not know how to add more value, how to create and maintain their own brand name, and how to create proper packaging and labeling. The suggested solutions for local OTOP producers are to adopt modern management techniques, to find the know-how to add more value to products and to unravel other marketing problems. The objectives of this research are to study the prevalent management of OTOP products and to discover a direction for managing OTOP products that enhances the effectiveness of OTOP entrepreneurship in Nonthaburi Province, Thailand. There were 113 participants in this study. The research tools can be divided into two parts: the first is a questionnaire on the prevalent management of OTOP entrepreneurship; the second is a focus group conducted to encapsulate ideas and local wisdom. Data analysis is performed by using frequency, percentage, mean, and standard deviation as well as the synthesis of several small group discussions. The findings reveal that 1) Business resources: the quality of the product is most important and the marketing of the product is least important. 2) Business management: leadership is most important and raw material planning is least important. 3) Business readiness: communication is most important and packaging is least important. 4) Support from the public sector: certification from the government is most important and sourcing of raw materials is least important.
Keywords: Management, OTOP Entrepreneurship, Local Wisdom
6864 Aggregation Scheduling Algorithms in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption, due to the limited energy of the nodes, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-Noise-Ratio (SINR), have been adopted with different power models, uniform-power and non-uniform-power (with or without power control), and different antenna models, omni-directional and directional. In this survey article, as the problem has been proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in the various models, with latency as the performance measure.
Keywords: Data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional.
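As a point of reference for the scheduling problem surveyed above (a naive greedy baseline, not one of the surveyed approximation algorithms), a collision-free aggregation schedule under the graph/protocol interference model can be sketched as follows: process nodes bottom-up in an aggregation tree and give each node the earliest slot after all its children that does not collide at any receiver.

```python
# Hedged sketch: a naive greedy aggregation schedule under the graph
# (protocol) interference model; not one of the surveyed algorithms.
from collections import defaultdict

def greedy_mlas(neighbors, parent, sink):
    """neighbors: node -> set of nodes within interference range.
       parent:    node -> its parent in the aggregation tree (sink excluded).
       Returns    node -> timeslot of its single aggregated transmission."""
    def depth(u):                      # depth of u in the tree (sink has depth 0)
        d = 0
        while u != sink:
            u, d = parent[u], d + 1
        return d

    children = defaultdict(list)
    for u, p in parent.items():
        children[p].append(u)

    slot, by_slot = {}, defaultdict(list)
    for u in sorted(parent, key=depth, reverse=True):       # schedule leaves first
        t = 1 + max((slot[c] for c in children[u]), default=0)
        while any(parent[v] == parent[u]                     # same receiver
                  or parent[u] in neighbors[v]               # v interferes at u's receiver
                  or parent[v] in neighbors[u]               # u interferes at v's receiver
                  for v in by_slot[t]):
            t += 1
        slot[u] = t
        by_slot[t].append(u)
    return slot

# Tiny example: a path 3-2-1-0 with node 0 as the sink.
nbrs = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(greedy_mlas(nbrs, parent={1: 0, 2: 1, 3: 2}, sink=0))  # {3: 1, 2: 2, 1: 3}
```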
6863 Health Assessment of Electronic Products using Mahalanobis Distance and Projection Pursuit Analysis
Authors: Sachin Kumar, Vasilis Sotiris, Michael Pecht
Abstract:
With increasing complexity in electronic systems there is a need for system-level anomaly detection and fault isolation. Anomaly detection based on vector similarity to a training set is used in this paper through two approaches, one that preserves the original information, Mahalanobis Distance (MD), and another that compresses the data into its principal components, Projection Pursuit Analysis. These methods have been used to detect deviations in system performance from normal operation and for critical parameter isolation in multivariate environments. The study evaluates the detection capability of each approach on a set of test data with known faults against a baseline set of data representative of "healthy" systems.
Keywords: Mahalanobis distance, Principal components, Projection pursuit, Health assessment, Anomaly.
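The Mahalanobis-distance branch of the approach can be sketched in a few lines: estimate the mean and covariance from "healthy" training data, then flag test observations whose distance exceeds a threshold (the percentile threshold rule below is a common heuristic, not necessarily the authors').

```python
# Hedged sketch: Mahalanobis-distance anomaly detection against a healthy baseline.
import numpy as np

rng = np.random.default_rng(42)
healthy = rng.normal(0.0, 1.0, size=(500, 4))          # training set of "healthy" observations
mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return np.sqrt(d @ cov_inv @ d)

threshold = np.percentile([mahalanobis(x) for x in healthy], 99)   # heuristic cutoff

test = np.vstack([rng.normal(0.0, 1.0, 4), np.array([4.0, -4.0, 3.5, 0.0])])
for x in test:
    d = mahalanobis(x)
    print(round(d, 2), "anomaly" if d > threshold else "normal")
```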
6862 Impacts of Building Design Factors on Auckland School Energy Consumptions
Authors: Bin Su
Abstract:
This study focuses on the impact of school building design factors on winter extra energy consumption, which mainly includes space heating, water heating and other appliances related to winter indoor thermal conditions. A number of Auckland schools were randomly selected for the study, which introduces a method of using real monthly energy consumption data for a year to calculate the winter extra energy data of school buildings. The study seeks to identify the relationships between the winter extra energy data and school building design data related to the main architectural features, building envelope and elements of the sample schools. The relationships can be used to estimate the approximate saving in winter extra energy consumption which would result from a changed design datum for future school development, and to identify any major energy-efficient design problems. The relationships are also valuable for developing passive design guides for school energy efficiency.
Keywords: Building energy efficiency, Building thermal design, Building thermal performance, School building design.
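The "relationships" mentioned above are essentially regression relationships between measured winter extra energy and design parameters; the sketch below, with hypothetical design variables and made-up numbers (not the study's data), only illustrates how such a relationship would be fitted and then used to estimate a saving from a changed design datum.

```python
# Hedged sketch: fit a linear relationship between school design factors and
# winter extra energy, then estimate the saving from a changed design datum.
# Design variables and values are hypothetical, not the study's data.
import numpy as np

# Columns: [window_to_wall_ratio, floor_area_m2]; target: winter extra energy (kWh)
X = np.array([[0.30, 1200], [0.45, 1500], [0.25, 1000], [0.50, 1800], [0.35, 1300]])
y = np.array([21000, 30500, 16500, 37500, 24500])

A = np.column_stack([X, np.ones(len(X))])           # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("coefficients (ratio, area, intercept):", coef)

current = np.array([0.45, 1500, 1.0])
proposed = np.array([0.35, 1500, 1.0])              # reduce the glazing ratio only
print("estimated winter saving (kWh):", current @ coef - proposed @ coef)
```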
6861 Sleep Scheduling Schemes Based on Location of Mobile User in Sensor-Cloud
Authors: N. Mahendran, R. Priya
Abstract:
Mobile cloud computing (MCC) combined with wireless sensor network (WSN) technology has attracted increasing attention from researchers because it combines the data-gathering ability of sensors with the data-processing capacity of the cloud. This approach overcomes the limited data storage capacity and computational ability of sensor nodes. Finally, the stored data are sent to the mobile users when a user sends a request. Most of the integrated sensor-cloud schemes fail to consider the following: 1) mobile users request specific data from the cloud based on their present location; 2) power consumption, since most sensor nodes are equipped with non-rechargeable batteries. Mostly, the sensors are deployed in hazardous and remote areas. This paper focuses on the above observations and introduces an approach known as the collaborative location-based sleep scheduling (CLSS) scheme. The awake and asleep status of each sensor node is dynamically decided by schedulers, and the scheduling is based purely on the mobile users' current location; in this manner, a large amount of energy consumption is avoided in the WSN. CLSS comprises two different methods: the CLSS1 scheme provides lower energy consumption, while CLSS2 provides scalability and robustness for the integrated WSN.
Keywords: Sleep scheduling, mobile cloud computing, wireless sensor network, integration, location, network lifetime.
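A toy sketch of the location-driven part of the scheduling idea (not the full CLSS1/CLSS2 schemes): sensors stay awake only while a mobile user's reported position falls within a wake-up radius of them; the radius and coordinates below are made up for illustration.

```python
# Hedged sketch: wake only the sensors near the mobile user's current location;
# every other node sleeps to save energy. Positions and radius are illustrative.
import math

sensors = {"s1": (0, 0), "s2": (40, 10), "s3": (80, 80), "s4": (15, 25)}
WAKE_RADIUS = 30.0

def schedule(user_pos):
    awake = {sid for sid, pos in sensors.items()
             if math.dist(pos, user_pos) <= WAKE_RADIUS}
    return {sid: ("awake" if sid in awake else "asleep") for sid in sensors}

print(schedule((10, 10)))   # user near s1/s4: only those stay awake
print(schedule((75, 70)))   # user near s3
```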
6860 Multiple Targets Classification and Fuzzy Logic Decision Fusion in Wireless Sensor Networks
Authors: Ahmad Aljaafreh
Abstract:
This paper proposes a hierarchical hidden Markov model (HHMM) to model the detection of M vehicles in a wireless sensor network (WSN). The HHMM contains an extra level of hidden Markov model to model the temporal transitions of each state of the first HMM. By modeling the temporal transitions, only those hypotheses with nonzero transition probabilities need to be tested. Thus, this method efficiently reduces the computational load, which is preferable in WSN applications. This paper integrates several techniques to optimize the detection performance. The output of the states of the first HMM is modeled as a Gaussian Mixture Model (GMM), where the number of states and the number of Gaussians are experimentally determined, while the other parameters are estimated using Expectation Maximization (EM). The HHMM is used to model the sequence of local decisions, which are based on multiple hypothesis testing with a maximum likelihood approach. The states in the HHMM represent various combinations of vehicles of different types. Due to the statistical advantages of multisensor data fusion, we propose a heuristic based on fuzzy weighted majority voting to enhance cooperative classification of moving vehicles within a region that is monitored by a wireless sensor network. A fuzzy inference system weighs each local decision based on the signal-to-noise ratio of the acoustic signal for target detection and the signal-to-noise ratio of the radio signal for sensor communication. The spatial correlation among the observations of neighboring sensor nodes is efficiently utilized, as well as the temporal correlation. Simulation results demonstrate the efficiency of this scheme.
Keywords: Classification, decision fusion, fuzzy logic, hidden Markov model
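The decision-fusion step can be illustrated with a simplified weighted majority vote, where each node's weight is derived from its acoustic SNR (detection quality) and radio SNR (link quality); the membership function below is a simple placeholder, not the paper's fuzzy inference system.

```python
# Hedged sketch: SNR-weighted majority voting over local target-class decisions.
# The weight function is a simple stand-in for the paper's fuzzy inference system.
from collections import defaultdict

def weight(acoustic_snr_db, radio_snr_db):
    """Map the two SNRs to a confidence weight in [0, 1] (placeholder membership)."""
    clip = lambda s: max(0.0, min(1.0, s / 30.0))       # 0 dB -> 0, >= 30 dB -> 1
    return clip(acoustic_snr_db) * clip(radio_snr_db)

def fuse(local_decisions):
    """local_decisions: list of (vehicle_class, acoustic_snr_db, radio_snr_db)."""
    score = defaultdict(float)
    for cls, a_snr, r_snr in local_decisions:
        score[cls] += weight(a_snr, r_snr)
    return max(score, key=score.get)

reports = [("truck", 25, 28), ("car", 8, 30), ("truck", 18, 12), ("car", 5, 6)]
print(fuse(reports))   # -> "truck": the high-confidence reports dominate
```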
6859 Impact of Safety and Quality Considerations of Housing Clients on the Construction Firms’ Intention to Adopt Quality Function Deployment: A Case of Construction Sector
Authors: Saif Ul Haq
Abstract:
The current study intends to examine the safety and quality considerations of clients of housing projects and their impact on the adoption of Quality Function Deployment (QFD) by construction firms. A mixed-method research technique has been used to collect and analyze the data, wherein a survey was conducted to collect data from 220 clients of housing projects in Saudi Arabia. Then, telephone and Skype interviews were conducted to collect data from 15 professionals working in the top ten real estate companies of Saudi Arabia. Data were analyzed by using partial least squares (PLS) and thematic analysis techniques. Findings reveal that today's customers prioritize the safety and quality requirements of their houses and, as a result, construction firms adopt QFD to address the needs of customers. The findings are of great importance for the clients of housing projects as well as for construction firms, as they could apply QFD in housing projects to address the safety and quality concerns of their clients.
Keywords: Construction industry, quality considerations, quality function deployment, safety considerations.
6858 Development of an Ensemble Classification Model Based on Hybrid Filter-Wrapper Feature Selection for Email Phishing Detection
Authors: R. B. Ibrahim, M. S. Argungu, I. M. Mungadi
Abstract:
It is obvious that, at the present time, the Internet has become an indispensable part of human life. The Internet has provided diverse opportunities to make life easy for human beings through the adoption of various channels. Among these channels are email, internet banking, video conferencing, and the like. Email is one of the easiest means of communication, hugely accepted among individuals and organizations globally. But over the decades the security integrity of this platform has been challenged by malicious activities like phishing. Email phishing is designed by phishers to fool the recipient into handing over sensitive personal information such as passwords, credit card numbers, account credentials, social security numbers, etc. This activity has caused a lot of financial damage to email users globally, which has resulted in bankruptcy, sudden death of victims, and other health-related sicknesses. Although many methods have been proposed to detect email phishing, in this research the results of multiple machine-learning methods for predicting email phishing have been compared with the use of filter-wrapper feature selection. It is worth noting that all three models performed substantially well, but one outperformed the others. The dataset used for these models is obtained from the Kaggle online data repository, while three classifiers, decision tree, Naïve Bayes, and logistic regression, are each ensembled using bagging. Results from the study show that the decision tree (CART) bagging ensemble recorded the highest accuracy of 98.13% using PEF (Phishing Essential Features). This result further demonstrates the dependability of the proposed model.
Keywords: Ensemble, hybrid, filter-wrapper, phishing.
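A minimal sketch of the bagged decision-tree classifier described above, using scikit-learn; a random placeholder stands in for the Kaggle phishing dataset and the PEF feature subset, which are not reproduced here.

```python
# Hedged sketch: bagging ensemble of decision trees for phishing classification.
# A random placeholder stands in for the Kaggle dataset / PEF features.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1000, 10))                      # stand-in for selected email features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)       # stand-in label: 1 = phishing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```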
6857 Tourism Satellite Account: Approach and Information System Development
Authors: Pappas Theodoros, Michael Diakomichalis
Abstract:
Measuring the economic impact of tourism in a benchmark economy is a global concern, with previous measurements being partial and not fully integrated. Tourism is a phenomenon driven by the individual consumption of visitors, which should be observed and measured to reveal the overall contribution of tourism to an economy. The Tourism Satellite Account (TSA) is a critical tool for assessing the annual growth of tourism, providing reliable measurements. This article presents a TSA information system that encompasses all TSA functions, including the input, storage, management, and analysis of data, as well as additional future functions, and that enhances the efficiency of tourism data management and the utility of TSA collection. The methodology and results presented offer new insights for the development and implementation of the TSA.
Keywords: Tourism Satellite Account, information system, data-based tourist account.
6856 Input Data Balancing in a Neural Network PM-10 Forecasting System
Authors: Suk-Hyun Yu, Heeyong Kwon
Abstract:
Recently PM-10 has become a social and global issue. It is one of the major air pollutants affecting human health. Therefore, it needs to be forecasted rapidly and precisely. However, PM-10 comes from various emission sources, and its concentration level is largely dependent on meteorological and geographical factors of the local and global regions, so forecasting PM-10 concentration is very difficult. A neural network model can be used in this case. However, there are few cases of high PM-10 concentration, which makes the learning of the neural network model difficult. In this paper, we suggest a simple input balancing method for when the data distribution is uneven. It is based on the probability of appearance of the data. Experimental results show that input balancing makes the neural networks' learning easy and improves the forecasting rates.
Keywords: AI, air quality prediction, neural networks, pattern recognition, PM-10.
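A small sketch of one way to balance inputs by the probability of appearance (an assumption about the general idea, not the authors' exact scheme): rare high-concentration samples are replicated in inverse proportion to the frequency of their concentration bin before training.

```python
# Hedged sketch: oversample training records in inverse proportion to the
# frequency of their PM-10 concentration bin, so rare high levels are seen more often.
import numpy as np

rng = np.random.default_rng(0)
pm10 = np.concatenate([rng.normal(40, 10, 950), rng.normal(130, 20, 50)])  # skewed target
X = rng.random((pm10.size, 6))                       # stand-in meteorological inputs

bins = np.digitize(pm10, [50, 80, 120])              # concentration classes 0..3
counts = np.bincount(bins, minlength=4)
repeat = np.maximum(1, counts.max() // np.maximum(counts, 1))   # inverse-frequency factor

idx = np.concatenate([np.repeat(np.where(bins == b)[0], repeat[b]) for b in range(4)])
X_bal, y_bal = X[idx], pm10[idx]                     # balanced training set
print("before:", counts, "after:", np.bincount(bins[idx], minlength=4))
```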
6855 Tree Based Data Fusion Clustering Routing Algorithm for Illimitable Network Administration in Wireless Sensor Network
Authors: Y. Harold Robinson, M. Rajaram, E. Golden Julie, S. Balaji
Abstract:
In wireless sensor networks, locality and positioning information can be captured using the Global Positioning System (GPS). This information can be gathered initially from each spot to identify the system. Users can retrieve information of interest from a wireless sensor network (WSN) by injecting queries and gathering results from the mobile sink nodes. Routing is the process of choosing an optimal path in a mobile network. An intermediate node partitions the device nodes into teams and generates cluster heads that gather the data from each cluster's nodes and forward the aggregated data to the base station. WSNs are widely used for gathering data. Since sensors are power-constrained devices, it is quite vital for them to reduce power utilization. A tree-based data fusion clustering routing algorithm (TBDFC) is used to reduce energy consumption in wireless device networks. Here, the nodes in a tree use cluster formation, whereas the height of the tree is decided based on the distance of the member nodes to the cluster head. Network simulation shows that this scheme improves power utilization by the nodes, and thus considerably improves the network lifetime.
Keywords: WSN, TBDFC, LEACH, PEGASIS, TREEPSI.
6854 A Mixed Method Investigation of the Impact of Practicum Experience on Mathematics Female Pre-Service Teachers’ Sense of Preparedness
Authors: Fatimah Alsaleh, Glenda Anthony
Abstract:
The practicum experience is a critical component of any initial teacher education (ITE) course. As well as providing a near authentic setting for pre-service teachers (PSTs) to practice in, it also plays a key role in shaping their perceptions and sense of preparedness. Nevertheless, merely including a practicum period as a compulsory part of ITE may not in itself be enough to induce feelings of preparedness and efficacy; the quality of the classroom experience must also be considered. Drawing on findings of a larger study of secondary and intermediate level mathematics PSTs’ sense of preparedness to teach, this paper examines the influence of the practicum experience in particular. The study sample comprised female mathematics PSTs who had almost completed their teaching methods course in their fourth year of ITE across 16 teacher education programs in Saudi Arabia. The impact of the practicum experience on PSTs’ sense of preparedness was investigated via a mixed-methods approach combining a survey (N = 105) and in-depth interviews with survey volunteers (N = 16). Statistical analysis in SPSS was used to explore the quantitative data, and thematic analysis was applied to the qualitative interviews data. The results revealed that the PSTs perceived the practicum experience to have played a dominant role in shaping their feelings of preparedness and efficacy. However, despite the generally positive influence of practicum, the PSTs also reported numerous challenges that lessened their feelings of preparedness. These challenges were often related to the classroom environment and the school culture. For example, about half of the PSTs indicated that the practicum schools did not have the resources available or the support necessary to help them learn the work of teaching. In particular, the PSTs expressed concerns about translating the theoretical knowledge learned at the university into practice in authentic classrooms. These challenges engendered PSTs feeling less prepared and suggest that more support from both the university and the school is needed to help PSTs develop a stronger sense of preparedness. The area in which PSTs felt least prepared was that of classroom and behavior management, although the results also indicated that PSTs only felt a moderate level of general teaching efficacy and were less confident about how to support students as learners. Again, feelings of lower efficacy were related to the dissonance between the theory presented at university and real-world classroom practice. In order to close this gap between theory and practice, PSTs expressed the wish to have more time in the practicum, and more accountability for support from school-based mentors. In highlighting the challenges of the practicum in shaping PSTs’ sense of preparedness and efficacy, the study argues that better communication between the ITE providers and the practicum schools is necessary in order to maximize the benefit of the practicum experience.
Keywords: Mathematics, practicum experience, pre-service teachers, sense of preparedness.
6853 Real Time Classification of Political Tendency of Twitter Spanish Users based on Sentiment Analysis
Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic
Abstract:
What people say on social media has turned into a rich source of information for understanding social behavior. Specifically, the growing use of the Twitter social media platform for political communication has created significant opportunities to know the opinion of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country. It has led to an increasing body of research on this topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. Unlike them, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by means of analyzing Twitter users' political tendency. According to this, a new strategy, named Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users' tweets by discovering their sentiment (positive, negative or neutral) and classifying them according to the political party they support. From this individual political tendency, the global political prediction for each political party is calculated. In order to do this, two different strategies for the sentiment analysis are proposed: one is based on Positive and Negative words Matching (PNM) and the second is based on a Neural Networks Strategy (NNS). The complete TAS strategy has been performed in a Big-Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy at analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy to obtain the global trend in a political context made up of multiple parties with an error lower than 23%.
Keywords: Political tendency, prediction, sentiment analysis, Twitter.
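The PNM branch can be illustrated with a few lines of lexicon matching; the word lists below are tiny English placeholders, whereas the actual strategy works on Spanish tweets with full lexicons.

```python
# Hedged sketch: positive/negative word matching (PNM) for tweet sentiment.
# Tiny placeholder lexicons; the real strategy uses full Spanish word lists.
POSITIVE = {"great", "win", "support", "love", "good"}
NEGATIVE = {"bad", "corrupt", "fail", "hate", "worst"}

def pnm_sentiment(tweet: str) -> str:
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(pnm_sentiment("Great rally today, love the support"))   # positive
print(pnm_sentiment("Worst debate ever, corrupt promises"))    # negative
```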
6852 Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results
Authors: C. Villegas-Quezada, J. Climent
Abstract:
Several works regarding facial recognition have dealt with methods which identify isolated characteristics of the face or with templates which encompass several regions of it. In this paper a new technique is introduced which approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set, which corresponds to the values of low frequencies, gradient, entropy and several other characteristics of the pixels of the image, generating a set of "p" variables. The multivariate data set is approximated with polynomials of different degrees, minimizing the data fitness error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA) the method is able to circumvent the problem of dimensionality inherent to higher-degree polynomial approximations. The GA yields the degree and the values of a set of coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and using an AdaBoost classifier to compare F's polynomials to each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.
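To make the minimax (L∞) approximation step concrete, the sketch below fits a univariate polynomial by minimizing the maximum absolute residual with a linear program; SciPy's linprog is used here in place of the authors' genetic algorithm, and the single-variable setting is a simplification of their multivariate one.

```python
# Hedged sketch: minimax (L-infinity) polynomial fit via linear programming.
# SciPy's linprog replaces the paper's GA; univariate for simplicity.
import numpy as np
from scipy.optimize import linprog

def minimax_polyfit(x, y, degree):
    V = np.vander(x, degree + 1)                 # Vandermonde design matrix
    n, k = V.shape
    # Variables: k polynomial coefficients followed by the bound t = max |error|.
    c = np.zeros(k + 1)
    c[-1] = 1.0                                  # minimize t
    ones = np.ones((n, 1))
    A_ub = np.vstack([np.hstack([V, -ones]),     #  V c - y <= t
                      np.hstack([-V, -ones])])   # -(V c - y) <= t
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * k + [(0, None)], method="highs")
    return res.x[:k], res.x[-1]                  # coefficients, worst-case error

x = np.linspace(-1, 1, 50)
coeffs, err = minimax_polyfit(x, np.exp(x), degree=3)
print("coefficients:", coeffs, "max abs error:", err)
```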
6851 Application of Exact String Matching Algorithms towards SMILES Representation of Chemical Structure
Authors: Ahmad Fadel Klaib, Zurinahni Zainol, Nurul Hashimah Ahamed, Rosma Ahmad, Wahidah Hussin
Abstract:
Bioinformatics and cheminformatics are computer-based disciplines providing tools for the acquisition, storage, processing, analysis and integration of data, and for the development of potential applications of biological and chemical data. A chemical database is a database exclusively designed to store chemical information. NMRShiftDB is one of the main databases used to represent chemical structures in 2D or 3D. The SMILES format is one of many ways to write a chemical structure in a linear format. In this study we extracted antimicrobial structures in SMILES format from NMRShiftDB and stored them in our Local Data Warehouse with their corresponding information. Additionally, we developed a search tool that responds to a user's query using the JME Editor tool, which allows users to draw or edit molecules and converts the drawn structure into SMILES format. We applied the Quick Search algorithm to search for antimicrobial structures in our Local Data Warehouse.
Keywords: Exact String-matching Algorithms, NMRShiftDB, SMILES Format, Antimicrobial Structures.
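For reference, the Quick Search (Sunday) algorithm used above can be written compactly: its bad-character table is indexed by the character immediately after the current window, which often allows shifts of up to m+1 positions. The SMILES strings in the example are illustrative only.

```python
# Hedged sketch: the Quick Search (Sunday) exact string-matching algorithm,
# applied to a SMILES text. The example strings are illustrative only.
def quick_search(pattern: str, text: str):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Bad-character shift: distance from a character's last occurrence to just past the pattern.
    shift = {c: m - i for i, c in enumerate(pattern)}
    matches, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            matches.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)       # shift on the character after the window
    return matches

smiles_db = "CC(=O)Oc1ccccc1C(=O)O.CCN(CC)CC.c1ccccc1O"
print(quick_search("c1ccccc1", smiles_db))       # starting offsets of the ring substructure
```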
6850 Intrusion Detection based on Distance Combination
Authors: Joffroy Beauquier, Yongjie Hu
Abstract:
The intrusion detection problem has been frequently studied, but intrusion detection methods are often based on a single point of view, which always limits the results. In this paper, we introduce a new intrusion detection model based on the combination of different current methods. First we use a notion of distance to unify the different methods. Second we combine these methods using the Pearson correlation coefficients, which measure the relationship between two methods, and we obtain a combined distance. If the combined distance is greater than a predetermined threshold, an intrusion is detected. We have implemented and tested the combination model with two different public data sets: the masquerade detection data set collected by Schonlau et al., and the data set of program behaviors from the University of New Mexico. The results of the experiments prove that the combination model has better performance.
Keywords: Intrusion detection, combination, distance, Pearson correlation coefficients.
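A minimal sketch of the combination idea, assuming each detection method is reduced to a per-session distance score: the Pearson correlation between the two score vectors (computed on reference data) is used to down-weight redundant methods before summing, and an intrusion is flagged when the combined distance exceeds a threshold. The specific weighting rule is an illustrative assumption, not the paper's formula.

```python
# Hedged sketch: combine two detectors' distance scores using their Pearson
# correlation; the weighting rule below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(3)
d1 = rng.gamma(2.0, 1.0, 300)             # distances from method 1 on 300 sessions
d2 = 0.6 * d1 + rng.gamma(1.5, 1.0, 300)  # distances from method 2, partly redundant

rho = np.corrcoef(d1, d2)[0, 1]           # Pearson correlation between the two methods
w = 1.0 / (1.0 + abs(rho))                # down-weight strongly correlated (redundant) methods
combined = w * (d1 + d2)

threshold = np.percentile(combined, 95)   # calibrated on reference ("normal") sessions
alerts = np.where(combined > threshold)[0]
print(f"rho = {rho:.2f}, weight = {w:.2f}, flagged sessions: {alerts[:10]}")
```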
6849 Fault Tolerance in Distributed Database Systems
Authors: M. A. Adeboyejo, O. O. Adeosun
Abstract:
Pioneering networked systems assume that connections are reliable and treat the loss of a connection as a faulty operation. Transient connections are typical of mobile devices. Areas of application of data-sharing systems such as these lead to the conclusion that network connections may not always be reliable, and that the conventional approaches can be improved. The Nigerian commercial banking industry is a critical system whose operation is increasingly dependent on information technology (IT) driven information systems. The proposed solution to this problem makes use of a hierarchically clustered network structure which we selected to reflect (as much as possible) the typical organizational structure of Nigerian commercial banks. Representative transactions such as data updates and the replication of the results of such updates were used to simulate the proposed model and show its applicability.
Keywords: Dependability, reliability, data redundancy.
6848 Normalization Discriminant Independent Component Analysis
Authors: Liew Yee Ping, Pang Ying Han, Lau Siong Hoe, Ooi Shih Yin, Housam Khalifa Bashier Babiker
Abstract:
In face recognition, feature extraction techniques attempt to search for an appropriate representation of the data. However, when the feature dimension is larger than the sample size, performance degrades. Hence, we propose a method called Normalization Discriminant Independent Component Analysis (NDICA). The input data are regularized to obtain the most reliable features and then processed using Independent Component Analysis (ICA). The proposed method is evaluated on three face databases: Olivetti Research Ltd (ORL), Face Recognition Technology (FERET) and Face Recognition Grand Challenge (FRGC). NDICA showed its effectiveness compared with other unsupervised and supervised techniques.
Keywords: Face recognition, small sample size, regularization, independent component analysis.
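A minimal sketch of the regularize-then-ICA pipeline, using scikit-learn's FastICA on flattened face vectors; standardization stands in for the paper's regularization step, which is not fully specified in the abstract.

```python
# Hedged sketch: normalize (standardize) face vectors, then extract independent
# components with FastICA. Standardization stands in for the paper's regularization.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
faces = rng.random((40, 32 * 32))                 # stand-in for flattened face images

X = StandardScaler().fit_transform(faces)         # zero mean, unit variance per pixel
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
features = ica.fit_transform(X)                   # independent-component features
print(features.shape)                             # (40, 20), ready for a classifier
```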
6847 Daily Global Solar Radiation Modeling Using Multi-Layer Perceptron (MLP) Neural Networks
Authors: Seyed Fazel Ziaei Asl, Ali Karami, Gholamreza Ashari, Azam Behrang, Arezoo Assareh, N. Hedayat
Abstract:
Predicting daily global solar radiation (GSR) based on meteorological variables using multi-layer perceptron (MLP) neural networks is the main objective of this study. Daily mean air temperature, relative humidity, sunshine hours, evaporation, wind speed, and soil temperature values between 2002 and 2006 for Dezful city in Iran (32° 16' N, 48° 25' E) are used in this study. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data.
Keywords: Multi-layer Perceptron (MLP) Neural Networks, Global Solar Radiation (GSR), Meteorological Parameters, Prediction.
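A compact sketch of the type of MLP regression described above, using scikit-learn's MLPRegressor; random placeholders stand in for the Dezful measurements and the six meteorological inputs.

```python
# Hedged sketch: MLP regression of daily GSR on six meteorological inputs.
# Random placeholders stand in for the 2002-2006 Dezful measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((1400, 6))          # temp, humidity, sunshine, evaporation, wind, soil temp
y = 10 + 15 * X[:, 2] + 5 * X[:, 0] + rng.normal(0, 0.5, 1400)   # synthetic GSR target

X_train, y_train = X[:1100], y[:1100]             # "training years" portion
X_test, y_test = X[1100:], y[1100:]               # held-out "test year" portion

scaler = StandardScaler().fit(X_train)
mlp = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)
print("R^2 on held-out days:", mlp.score(scaler.transform(X_test), y_test))
```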
6846 The Effect of CPU Location in Total Immersion of Microelectronics
Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson
Abstract:
Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large-scale data centres, which has led to an increase in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead, so energy can be saved by improving the cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid is recognised as a promising method that can handle the more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchanger, on-chip heat exchanger and full immersion of the microelectronics. This study quantifies the improvements in heat transfer specifically for the case of immersed microelectronics by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection for the fixed enclosure filled with dielectric liquid, and forced convection for the water that is pumped through the water jacket. The model in this study is validated with published numerical and experimental work and shows good agreement with previous work. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink on the bottom of the microelectronics enclosure.
Keywords: CPU location, data centre cooling, heat sink in enclosures, Immersed microelectronics, turbulent natural convection in enclosures.
6845 Low Power Circuit Architecture of AES Crypto Module for Wireless Sensor Network
Authors: MooSeop Kim, Juhan Kim, Yongje Choi
Abstract:
Recently, much research has been conducted on security for wireless sensor networks and ubiquitous computing. Security issues such as authentication and data integrity are major requirements for constructing sensor network systems. The Advanced Encryption Standard (AES) is considered one of the candidate algorithms for data encryption in wireless sensor networks. In this paper, we present the hardware architecture to implement a low-power AES crypto module. Our low-power AES crypto module has an optimized architecture of the data encryption unit and the key schedule unit, which is applicable to wireless sensor networks. We also detail the low-power design methods used to design our low-power AES crypto module.
Keywords: Algorithm, Low Power Crypto Circuit, AES, Security.
6844 PoPCoRN: A Power-Aware Periodic Surveillance Scheme in Convex Region using Wireless Mobile Sensor Networks
Authors: A. K. Prajapati
Abstract:
In this paper, a periodic surveillance scheme is proposed for any convex region using mobile wireless sensor nodes. A sensor network typically consists of a fixed number of sensor nodes which report the measurements of sensed data such as temperature, pressure, humidity, etc., of their immediate proximity (the area within their sensing range). For the purpose of sensing an area of interest, an adequate number of fixed sensor nodes is required to cover the entire region of interest. This implies that the number of fixed sensor nodes required to cover a given area will depend on the sensing range of the sensor as well as the deployment strategy employed. The sensors are assumed to be mobile within the region of surveillance and can be mounted on moving bodies like robots or vehicles. Therefore, in our scheme, the surveillance time period determines the number of sensor nodes required to be deployed in the region of interest. The proposed scheme comprises three algorithms, namely Hexagonalization, Clustering, and Scheduling. The first algorithm partitions the coverage area into fixed-size hexagons (cells) that approximate the sensing range of an individual sensor node. The clustering algorithm groups the cells into clusters, each of which will be covered by a single sensor node. The latter determines a schedule for each sensor to serve its respective cluster. Each sensor node traverses all the cells belonging to the cluster assigned to it by oscillating between the first and the last cell for the duration of its lifetime. Simulation results show that our scheme provides full coverage within a given period of time using few sensors with minimum movement, less power consumption, and relatively low infrastructure cost.
Keywords: Sensor Network, Graph Theory, MSN, Communication.
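The hexagonalization step can be illustrated by generating the centers of a flat-topped hexagonal grid whose cells of circumradius R each fit inside a sensing disk of radius R; the clustering and scheduling steps would then operate on these cells. This is a geometric sketch only, not the paper's full algorithm, and the region dimensions are made up.

```python
# Hedged sketch: hexagonalization of a rectangular region into flat-topped
# hexagonal cells of circumradius R (each cell fits inside a sensing disk of radius R).
import math

def hex_centers(width, height, R):
    """Centers of a flat-topped hexagonal tiling covering a width x height rectangle."""
    dx = 1.5 * R                    # horizontal distance between column centers
    dy = math.sqrt(3) * R           # vertical distance between row centers in a column
    centers, col, x = [], 0, 0.0
    while x <= width + R:
        y = (dy / 2) if col % 2 else 0.0    # odd columns are offset by half a row
        while y <= height + R:
            centers.append((round(x, 2), round(y, 2)))
            y += dy
        x += dx
        col += 1
    return centers

cells = hex_centers(width=100.0, height=60.0, R=10.0)
print(len(cells), "cells, e.g.:", cells[:4])
```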