Search results for: Data integration
6809 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data
Authors: N. Borjalilu, P. Rabiei, A. Enjoo
Abstract:
A Flight Data Monitoring (FDM) program helps an aviation operator identify, quantify, assess and address operational safety risks in order to improve the safety of flight operations. Integrated into the operator's Safety Management System (SMS), FDM is a powerful tool for detecting, confirming and assessing safety issues associated with human error, and for checking the effectiveness of corrective actions. This article proposes a model, based on fuzzy set values, for assessing the safety risk level of flight data across different categories of events, allowing the operational safety level to be evaluated from the perspective of flight activities. The main advantage of the method is that it provides a qualitative safety analysis of flight data. The research draws on the opinions of aviation experts, collected through questionnaires related to flight data, for four categories of occurrence that can lead to an accident or incident: Runway Excursion (RE), Controlled Flight Into Terrain (CFIT), Mid-Air Collision (MAC), and Loss of Control in Flight (LOC-I). By weighting each category with fuzzy TOPSIS (F-TOPSIS) and applying the weights to the number of risks recorded for each event, the safety risk of each related event can be obtained.
Keywords: F-TOPSIS, fuzzy set, FDM, flight safety.
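For readers unfamiliar with the weighting step, the following is a minimal fuzzy TOPSIS sketch in Python (NumPy only). The criterion, ratings and fuzzy weight are invented placeholders, not the questionnaire data used by the authors.

```python
# Minimal fuzzy TOPSIS (F-TOPSIS) sketch with triangular fuzzy numbers (TFNs).
# The ratings and weight below are illustrative placeholders, not the authors' data.
import numpy as np

# Alternatives = occurrence categories; each rating is a TFN (low, mid, high) on a 0-10 scale.
alternatives = ["RE", "CFIT", "MAC", "LOC-I"]
ratings = np.array([            # one TFN per alternative for a single "risk severity" criterion
    [5, 7, 9],
    [7, 9, 10],
    [3, 5, 7],
    [7, 9, 10],
], dtype=float)
weight = np.array([0.7, 0.8, 0.9])    # TFN weight elicited from experts (illustrative)

# 1. Normalize benefit-type ratings by the largest upper bound.
norm = ratings / ratings[:, 2].max()

# 2. Apply the fuzzy weight (element-wise TFN multiplication for positive TFNs).
weighted = norm * weight

# 3. Fuzzy positive/negative ideal solutions (FPIS / FNIS).
fpis = weighted.max(axis=0)
fnis = weighted.min(axis=0)

# 4. Vertex-method distance between two TFNs: sqrt(mean of squared differences).
def d(a, b):
    return np.sqrt(((a - b) ** 2).mean())

d_plus = np.array([d(w, fpis) for w in weighted])
d_minus = np.array([d(w, fnis) for w in weighted])

# 5. Closeness coefficient: larger = closer to the ideal, i.e. higher relative weight.
cc = d_minus / (d_plus + d_minus)
for name, score in zip(alternatives, cc):
    print(f"{name}: closeness coefficient = {score:.3f}")
```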
6808 Energy Consumption and Economic Growth in South Asian Countries: A Co-integrated Panel Analysis
Authors: S. Noor, M. W. Siddiqi
Abstract:
This study examines the causal link between energy use and economic growth for five South Asian countries over the period 1971-2006. Panel cointegration, an error correction model (ECM) and fully modified OLS (FMOLS) are applied to obtain short-run and long-run estimates. In the short run, unidirectional causality from per capita GDP to per capita energy consumption is found, but not vice versa. In the long run, a one percent increase in per capita energy consumption tends to decrease per capita GDP by 0.13 percent, i.e. energy use discourages economic growth. This short-run and long-run relationship points to an energy shortage crisis in South Asia, with increased energy use coupled with insufficient energy supply. The estimated long-run coefficient on the error-correction term suggests that short-term movements are driven by adjustment back to the long-run equilibrium. Moreover, per capita energy consumption responds to this adjustment, with convergence taking approximately 59 years, which indicates long-run feedback between the two variables.
Keywords: Energy consumption, Income, Panel co-integration, Causality.
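As a rough illustration of the error-correction logic described above (a single-series, two-step Engle-Granger style sketch, not the panel cointegration/FMOLS estimators actually used in the paper), with made-up series and plain NumPy least squares:

```python
# Illustrative two-step error-correction sketch (single series, ordinary OLS),
# not the panel cointegration / FMOLS estimators applied in the paper.
import numpy as np

rng = np.random.default_rng(0)
T = 200
gdp = np.cumsum(rng.normal(0.02, 0.1, T))            # synthetic log per-capita GDP
energy = 0.8 * gdp + rng.normal(0, 0.05, T)          # synthetic log per-capita energy use

# Step 1: long-run relation  energy_t = a + b * gdp_t + u_t
X = np.column_stack([np.ones(T), gdp])
beta, *_ = np.linalg.lstsq(X, energy, rcond=None)
u = energy - X @ beta                                 # equilibrium error

# Step 2: short-run ECM  d(energy)_t = c + phi * d(gdp)_t + gamma * u_{t-1} + e_t
d_energy, d_gdp = np.diff(energy), np.diff(gdp)
Z = np.column_stack([np.ones(T - 1), d_gdp, u[:-1]])
coef, *_ = np.linalg.lstsq(Z, d_energy, rcond=None)
gamma = coef[2]                                       # speed of adjustment back to equilibrium

print(f"long-run slope b       = {beta[1]: .3f}")
print(f"adjustment speed gamma = {gamma: .3f}")
# A negative gamma means deviations from the long-run relation are corrected over time;
# the implied half-life of a deviation is roughly ln(0.5) / ln(1 + gamma).
```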
6807 Time Series Forecasting Using a Hybrid RBF Neural Network and AR Model Based On Binomial Smoothing
Authors: Fengxia Zheng, Shouming Zhong
Abstract:
ANN-ARIMA, which combines the autoregressive integrated moving average (ARIMA) model with an artificial neural network (ANN), is a valuable tool for modeling and forecasting nonlinear time series, yet neural network models are prone to over-fitting. This paper provides a hybrid methodology, called BS-RBF-AR, that combines a radial basis function (RBF) neural network with an autoregressive (AR) model based on the binomial smoothing (BS) technique, which is efficient in data processing. The method is examined on the Canadian lynx data set. Empirical results indicate that the over-fitting problem can be eased by using an RBF neural network with binomial smoothing (BS-RBF), and that the hybrid BS-RBF-AR model can be an effective way to improve the forecasting accuracy achieved by BS-RBF used separately.
Keywords: Binomial smoothing (BS), hybrid, Canadian Lynx data, forecasting accuracy.
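Binomial smoothing, the BS step named above, is simply a weighted moving average whose weights come from a row of Pascal's triangle. A small self-contained sketch follows; the window length and the demo series are arbitrary choices for illustration:

```python
# Binomial smoothing: convolve a series with normalized binomial coefficients.
# The window length and the demo series are arbitrary choices for illustration.
import numpy as np
from math import comb

def binomial_smooth(x, window=5):
    weights = np.array([comb(window - 1, k) for k in range(window)], dtype=float)
    weights /= weights.sum()                      # e.g. window=5 -> [1, 4, 6, 4, 1] / 16
    return np.convolve(x, weights, mode="same")   # smoothed series, same length as input

rng = np.random.default_rng(1)
t = np.arange(114)                                # the annual lynx series has 114 points
noisy = np.sin(t / 6.0) + rng.normal(0, 0.3, t.size)
smooth = binomial_smooth(noisy)
print(np.round(smooth[:10], 3))
# In the hybrid BS-RBF-AR idea, the smoothed series would then be fed to an RBF
# network and an AR model, whose forecasts are combined.
```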
6806 Unsupervised Texture Classification and Segmentation
Authors: V. P. Subramanyam Rallabandi, S. K. Sett
Abstract:
An unsupervised classification algorithm is derived by modeling observed data as a mixture of several mutually exclusive classes that are each described by linear combinations of independent non-Gaussian densities. The algorithm estimates the data density in each class by using parametric nonlinear functions that fit the non-Gaussian structure of the data. This improves classification accuracy compared with standard Gaussian mixture models. When applied to textures, the algorithm can learn basis functions for images that capture the statistically significant structure intrinsic to the images. We apply this technique to the problem of unsupervised texture classification and segmentation.
Keywords: Gaussian Mixture Model, Independent Component Analysis, Segmentation, Unsupervised Classification.
6805 Implementation of an Undergraduate Integrated Biology and Chemistry Course
Authors: Jayson G. Balansag
Abstract:
An integrated biology and chemistry (iBC) course for freshman college students was developed at the University of Delaware. The course prepares students to (1) become interdisciplinary thinkers in the field of biology and (2) work collaboratively with people from multiple disciplines in the future. This paper documents and describes the implementation of the course, drawing on information gathered from the literature, classroom observations, and interviews. The major goal of the iBC course is to align concepts between biology and chemistry so that students can draw on science concepts from both disciplines and apply them in their interdisciplinary research. The course is offered in the fall and spring semesters of each school year, and students enrolled in Biology are also enrolled in Chemistry during the same semester. The iBC consists of lectures, laboratories, studio sessions, and workshops and is taught by faculty from the biology and chemistry departments. In addition, preceptors, graduate teaching assistants, and studio fellows facilitate the laboratory and studio sessions; these roles are interdependent. The iBC can serve as a model for higher education institutions that wish to implement an integrated biology course.
Keywords: Integrated biology and chemistry, integration, interdisciplinary research, new biology, undergraduate science education.
6804 Identifying and Adopting Latter Instruments Determining the Sustainable Company Competitiveness
Authors: Andrej Miklošík, Petra Horváthová, Štefan Žák
Abstract:
Companies in all sectors are looking for sources of competitive advantage. The holistic marketing approach seeks to create them through the integration of all components and elements across the organization. Modern marketing sees sources of competitive advantage in implementing the latest managerial practices, motivation, intelligent project management, knowledge management, collaborative marketing, CSR and, in recent years, also business process optimization. With modern tools such as business process management and business process modelling, a company can markedly increase its internal efficiency, which can lead not only to lower costs but also to an environment suited to optimal customer care, a positive corporate culture and the origination of innovations. In the article the authors analyze the recent trend in this area and offer suggestions for companies to identify and optimize the key processes that have a significant impact on the company's competitiveness.
Keywords: business process optimization, competitive advantage, corporate social responsibility, knowledge management.
6803 Vision-Based Daily Routine Recognition for Healthcare with Transfer Learning
Authors: Bruce X. B. Yu, Yan Liu, Keith C. C. Chan
Abstract:
We propose to record the Activities of Daily Living (ADLs) of elderly people using a vision-based system in order to provide better assistive and personalization technologies. Current ADL-related research is based on data collected from non-elderly subjects in laboratory environments, where the activities performed are predetermined for the sole purpose of data collection. To obtain a more realistic dataset for the application, we recorded ADLs in a real-world environment involving a real elderly subject. Motivated by the need for data that supports more effective research on elderly care, we collected data in the room of an elderly person. Specifically, we installed a Kinect, a vision-based sensor, on the ceiling to capture the activities the elderly subject performs every morning. Based on the data, we identified 12 morning activities that the elderly person performs daily. To recognize these activities, we created the HARELCARE framework to investigate the effectiveness of existing Human Activity Recognition (HAR) algorithms, and we propose the use of a transfer learning algorithm for HAR. We compared performance in terms of accuracy and training progress. Although the collected dataset is relatively small, the proposed algorithm has good potential to be applied to all daily routine activities for healthcare purposes such as evidence-based diagnosis and treatment.
Keywords: Daily activity recognition, healthcare, IoT sensors, transfer learning.
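The HARELCARE framework itself is not published here, so the snippet below is only a generic transfer learning pattern in PyTorch: reuse a pretrained backbone, freeze it, and retrain a small classification head for the 12 morning activities. The class count is taken from the abstract; the backbone, input shape and training details are assumptions.

```python
# Generic transfer learning sketch (PyTorch / torchvision >= 0.13 assumed), not the
# HARELCARE code: reuse an ImageNet-pretrained backbone and retrain only the final layer.
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIVITIES = 12                       # the 12 morning activities identified in the study

model = models.resnet18(weights="DEFAULT")            # downloads pretrained weights
for p in model.parameters():                           # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_ACTIVITIES)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 3x224x224 frames.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_ACTIVITIES, (8,))
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.3f}")
```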
6802 Visualization of Sediment Thickness Variation for Sea Bed Logging using Spline Interpolation
Authors: Hanita Daud, Noorhana Yahya, Vijanth Sagayan, Muizuddin Talib
Abstract:
This paper discusses the use of spline interpolation and the mean square error (MSE) as tools for processing data acquired from a simulator developed to replicate the sea bed logging environment. Sea bed logging (SBL) is a technique that uses marine controlled source electromagnetic (CSEM) sounding and has proven very successful in detecting and characterizing hydrocarbon reservoirs in deep water by exploiting resistivity contrasts. It uses very low frequencies, from 0.1 Hz to 10 Hz, to obtain greater wavelengths. In this work the in-house simulator was run with predefined parameters, and the transmitted frequency was varied for sediment thicknesses from 1000 m to 4000 m, in environments with and without hydrocarbon. Synthetic data were generated from a series of simulations. These data were interpolated using spline interpolation of degree three (cubic splines), and the mean square error (MSE) between the original and interpolated data was calculated. Comparisons were made by studying the trends and the relationship between frequency and sediment thickness based on the calculated MSE. It was found that the MSE showed an increasing trend in the setting with hydrocarbon present compared with the one without, and a decreasing trend as the sediment thickness and the transmitted frequency increased.
Keywords: Spline Interpolation, Mean Square Error, Sea Bed Logging, Controlled Source Electromagnetic.
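A minimal sketch of the interpolation/MSE step described above, using SciPy's cubic spline on synthetic data (the response curve is invented; the real study used the in-house SBL simulator outputs):

```python
# Cubic (degree-three) spline interpolation and MSE check, as in the processing step above.
# The synthetic "response" curve is purely illustrative.
import numpy as np
from scipy.interpolate import CubicSpline

thickness = np.linspace(1000.0, 4000.0, 13)               # sediment thickness samples, m
response = np.exp(-thickness / 2500.0) + 0.02 * np.sin(thickness / 300.0)

# Fit the spline on a coarse subset, then evaluate it back on all sample points.
coarse = slice(None, None, 3)
spline = CubicSpline(thickness[coarse], response[coarse])
interpolated = spline(thickness)

mse = float(np.mean((response - interpolated) ** 2))
print(f"MSE between original and interpolated data: {mse:.3e}")
# In the paper, this MSE is tracked while varying transmitted frequency and sediment
# thickness, with and without a hydrocarbon layer, to compare the trends.
```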
6801 The Consumer Private Space: What is and How it can be Approached without Affecting the Consumer's Privacy
Authors: Calin Veghes
Abstract:
The concept of privacy, seen in connection with the consumer's private space and personalization, has recently gained greater importance as a consequence of the increasing marketing efforts of organizations based on the capturing, processing and usage of consumers' personal data. The paper intends to provide a definition of the consumer's private space based on the types of personal data the consumer is willing to disclose, to assess attitudes toward personalization, and to identify the means consumers prefer for controlling their personal data and defending their private space. Several implications generated by the definition of the consumer's private space are identified and weighted from both the consumers' and the organizations' perspectives.
Keywords: Consumer private space, personalization, privacy.
6800 Quantifying the Methods of Monitoring Timers in Electric Water Heater for Grid Balancing on Demand Side Management: A Systematic Mapping Review
Authors: Yamamah Abdulrazaq, Lahieb A. Abrahim, Samuel E. Davies, Iain Shewring
Abstract:
The electric water heater (EWH) is a high-power electrical appliance used in residential, commercial, and industrial settings, and the ability to control EWHs properly can result in cost savings and help prevent blackouts on the national grid. This article discusses the usage of timers in EWH control strategies for demand-side management (DSM). To the authors' knowledge, no systematic mapping review focusing on the utilization of EWH control strategies in DSM has yet been conducted. Consequently, the purpose of this research is to identify and examine the main papers exploring EWH procedures in DSM by quantifying and categorizing information with regard to publication year and source, kind of method, and source of data for monitoring control techniques. In order to answer the research questions, a total of 31 publications published between 1999 and 2023 were selected according to specific inclusion and exclusion criteria. The data indicate that direct load control (DLC) has been somewhat more prevalent than indirect load control (ILC). Additionally, mixed methods are much less common than the other techniques, and the proportion of real-time data (RTD) to non-real-time data (NRTD) is about equal.
Keywords: Demand side management, direct load control, electric water heater, indirect load control, non-real-time data, real time data.
6799 Enabling Remote Desktop in a Virtualized Environment for Cloud Services
Authors: Shuen-Tai Wang, Yu-Ching Lin, Hsi-Ya Chang
Abstract:
Cloud computing is the innovative and leading information technology model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort. This paper presents our work on enabling an individual user's desktop in a virtualized environment, where the desktop is stored on a remote virtual machine rather than locally. We present initial work on integrating virtual desktops and application sharing with virtualization technology. Given the development of remote desktop virtualization, this effort has the potential to provide an efficient, resilient and elastic environment for online cloud services. Users no longer need to bear the cost of software licenses and platform maintenance. Moreover, this development also helps boost user productivity by promoting a flexible model that lets users access their desktop environments from virtually anywhere.
Keywords: Cloud Computing, Virtualization, Virtual Desktop, Elastic Environment.
6798 Floating Offshore Wind: A Review of Installation Vessel Requirements
Authors: A. P. Crowle
Abstract:
Floating offshore wind farms may in the future provide large quantities of renewable energy. One of the challenges to their development is the provision of installation vessels for the offshore installation of floating wind turbines. This paper examines the current fleet of vessels that can be used for inshore construction; separate vessels are required for the ocean tow-out and for the offshore installation. Information is provided on what new vessels might be required to improve the efficiency and reduce the costs of installing floating wind turbines. Specialized cargo vessels are required for the initial mobilization. Anchor handling vessels are required to tow the floating wind turbine offshore and to install and connect the moorings. Subsea work vessels are required to install the dynamic cables, whilst cable lay vessels are required for the export power cable. Dedicated ports are required for vertical integration of the substructure with the tower, nacelle and blades. This paper reviews the existing and future installation vessel requirements for floating wind.
Keywords: Floating wind, naval architecture, offshore installation vessels, ports for renewable energy.
6797 VaR Forecasting in Times of Increased Volatility
Authors: Ivo Jánský, Milan Rippel
Abstract:
The paper evaluates several hundred one-day-ahead VaR forecasting models over the period 2004-2009 on data from six world stock indices: DJI, GSPC, IXIC, FTSE, GDAXI and N225. The models describe the mean using ARMA processes with up to two lags and the variance with a GARCH, EGARCH or TARCH process with up to two lags. The models are estimated on data from the in-sample period, and their forecasting accuracy is evaluated on the out-of-sample data, which are more volatile. The main aim of the paper is to test whether a model estimated on data with lower volatility can be used in periods with higher volatility. The evaluation is based on the conditional coverage test and is performed on each stock index separately. The primary result of the paper is that volatility is best modelled using a GARCH process and that no ARMA pattern can be found in the analyzed time series.
Keywords: VaR, risk analysis, conditional volatility, GARCH, EGARCH, TARCH, moving average process, autoregressive process.
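The evaluation step mentioned above relies on coverage testing of VaR violations. Below is a small sketch of the Kupiec unconditional-coverage likelihood-ratio test, one ingredient of Christoffersen's conditional coverage test, applied to made-up violation data:

```python
# Kupiec unconditional-coverage LR test for VaR back-testing (one building block of
# Christoffersen's conditional coverage test). The violation series below is synthetic.
import numpy as np
from scipy.stats import chi2

def kupiec_lr(violations, p):
    """violations: 0/1 array of VaR breaches; p: nominal tail probability (e.g. 0.01)."""
    T = violations.size
    x = int(violations.sum())                    # number of observed breaches
    pi_hat = x / T
    log_l0 = (T - x) * np.log(1 - p) + x * np.log(p)
    log_l1 = (T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2.0 * (log_l0 - log_l1)
    return lr, 1.0 - chi2.cdf(lr, df=1)

rng = np.random.default_rng(42)
hits = (rng.random(500) < 0.02).astype(int)      # pretend 1%-VaR was breached ~2% of the time
lr, pval = kupiec_lr(hits, p=0.01)
print(f"LR = {lr:.2f}, p-value = {pval:.4f}")    # small p-value -> reject correct coverage
```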
6796 A Weighted Sum Technique for the Joint Optimization of Performance and Power Consumption in Data Centers
Authors: Samee Ullah Khan, C. Ardil
Abstract:
With data centers, end-users can realize the pervasiveness of services that will one day be the cornerstone of our lives. However, data centers are often classified as the computing systems that consume the largest amounts of power. To circumvent this problem, we propose a self-adaptive weighted sum methodology that jointly optimizes the performance and power consumption of a given data center. Compared to traditional methodologies for multi-objective optimization problems, the proposed self-adaptive weighted sum technique does not rely on a systematic change of weights during the optimization procedure. The technique is compared with greedy and LR heuristics for large-scale problems, and with the optimal solution, implemented in LINDO, for small-scale problems. The experimental results reveal that the proposed self-adaptive weighted sum technique outperforms both heuristics and achieves competitive performance compared to the optimal solution.
Keywords: Meta-heuristics, distributed systems, adaptive methods, resource allocation.
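To make the scalarization concrete, here is a plain, fixed-weight weighted-sum sketch for a toy performance/power trade-off; the self-adaptive weight update proposed in the paper is not reproduced, and the cost models are invented.

```python
# Plain weighted-sum scalarization of a toy performance/power trade-off.
# The cost models are invented; the paper's self-adaptive weight update is not reproduced.
import numpy as np

frequencies = np.linspace(1.0, 3.0, 21)          # candidate operating points (GHz)
exec_time = 100.0 / frequencies                  # performance objective: lower is better
power = 15.0 * frequencies ** 2                  # power objective: lower is better

def best_operating_point(w_perf, w_power):
    # Normalize both objectives to [0, 1] so the weights are comparable, then scalarize.
    t = (exec_time - exec_time.min()) / (exec_time.max() - exec_time.min())
    p = (power - power.min()) / (power.max() - power.min())
    cost = w_perf * t + w_power * p
    i = int(np.argmin(cost))
    return frequencies[i], cost[i]

for w in (0.2, 0.5, 0.8):                        # sweep the performance weight
    f, c = best_operating_point(w, 1.0 - w)
    print(f"w_perf={w:.1f} -> frequency {f:.2f} GHz, scalarized cost {c:.3f}")
```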
6795 An Integrated Model of Urban Conservation and Revitalization from the Point of Immigration and Its Effects on Reyhan Urban Site in Turkey as a Case Study
Authors: Ozlem Koprulu Bagbanci, M. Bilal Bagbanci
Abstract:
This paper presents the effects of migration on urban heritage sites and proposes an integrated model, under sustainable local development policies, for the conservation and revitalization of site areas, taking the Reyhan heritage site in Bursa as a case study. Bursa is known as the "City of immigrants" because of the richness of its cultural plurality, and the city has always regarded the dynamic impact of immigration as a positive contribution. As a result, the city produced some of the earliest urbanization practices: it was the first capital city of the Ottoman Empire, created the first modern movement practices, and set up the first Organized Industrial Zone. The most important aim of the study is to offer a model, within the context of conservation and revitalization of historical areas, for similar areas subject to the integrated sustainable local development policies of local governments.
Keywords: integration, migration, local policies, sustainability.
6794 Image-Based (RGB) Technique for Estimating Phosphorus Levels of Crops
Authors: M. M. Ali, Ahmed Al-Ani, Derek Eamus, Daniel K. Y. Tan
Abstract:
In this glasshouse study, we developed a new image-based, non-destructive technique for detecting the leaf P status of different crops such as cotton, tomato and lettuce. The plants were grown in nutrient solutions containing different P concentrations, i.e. 0%, 50% and 100% of the recommended P concentration (P0 = no P; P1 = 2.5 mL 10 L-1 of P; P2 = 5 mL 10 L-1 of P). After 7 weeks of treatment, the plants were harvested and leaf P contents were measured using the standard destructive laboratory method, while leaf images were collected with a handheld crop image sensor. We calculated leaf area, leaf perimeter and the RGB (red, green and blue) values of these images. These data were then used in linear discriminant analysis (LDA) to estimate leaf P contents, which successfully classified the plants on the basis of leaf P content. The data indicate that P deficiency in crop plants can be predicted using leaf image and morphological data. Our proposed non-destructive imaging method is precise in estimating the P requirements of different crop species.
Keywords: Image-based techniques, leaf area, leaf P contents, linear discriminant analysis.
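A minimal sketch of the LDA classification step on image-derived features (the feature values below are random stand-ins, not the glasshouse measurements):

```python
# LDA classification of leaf P status from image features, as in the analysis step above.
# The feature matrix here is random; the real study used leaf area, perimeter and RGB values.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_per_class = 40
classes = ["P0", "P1", "P2"]
# Columns: leaf area, leaf perimeter, mean R, mean G, mean B (synthetic, class-shifted).
X = np.vstack([rng.normal(loc=10 + 3 * i, scale=1.0, size=(n_per_class, 5))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(f"held-out accuracy: {lda.score(X_test, y_test):.2f}")
```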
6793 An Improved K-Means Algorithm for Gene Expression Data Clustering
Authors: Billel Kenidra, Mohamed Benmohammed
Abstract:
Clustering, a data mining technique, is a subject of active research that assists in biological pattern recognition and the extraction of new knowledge from raw data. Clustering means partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice may lead to a local optimum that is markedly inferior to the global optimum, we propose a strategy for initializing the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show how significantly the efficiency has been improved.
Keywords: Microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization.
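The abstract does not spell out the proposed seeding strategy, so the sketch below only illustrates the general idea of non-random center initialization, using a farthest-point heuristic before standard k-means iterations (NumPy only):

```python
# Farthest-point seeding followed by ordinary k-means iterations (Lloyd's algorithm).
# This illustrates the general idea of careful center initialization; it is not the
# specific strategy proposed in the paper.
import numpy as np

def farthest_point_init(X, k, rng):
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(dists)])      # pick the point farthest from chosen centers
    return np.array(centers)

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = farthest_point_init(X, k, rng)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.4, size=(60, 2)) for m in ((0, 0), (3, 3), (0, 4))])
labels, centers = kmeans(X, k=3)
print(np.round(centers, 2))
```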
6792 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class-Unbalanced Data of Diabetes Risk Groups
Authors: Lily Ingsrisawang, Tasanee Nacharoen
Abstract:
The problems arising from unbalanced data sets commonly appear in real-world applications. Due to unequal class distributions, many researchers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is a method that has been proposed for classifying unbalanced classes with good performance. In this study, the methods of discriminant analysis are of interest for investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification of class-imbalanced diabetes risk data. Data from a project maintaining healthy conditions for 599 employees of a government hospital in Bangkok were obtained for the classification problem. The employees were divided into three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, including the variables diabetes risk group, age, gender, blood glucose, and BMI, were analyzed and bootstrapped for 50 and 100 samples of 599 observations each, for additional estimation of the misclassification error rate. Each data set was examined for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were applied to the 50 and 100 bootstrap samples and to the original data. In searching for the optimal classification rule, the prior probabilities were set to equal proportions (0.33:0.33:0.33) and to the unequal proportions (0.90:0.05:0.05), (0.80:0.10:0.10) and (0.70:0.15:0.15). The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k=3 or k=4 and prior probabilities of non-risk:risk:diabetic set to 0.90:0.05:0.05 or 0.80:0.10:0.10 gave the smallest misclassification error rate. The k-nearest neighbors approach is therefore suggested for classifying three-class-imbalanced diabetes risk data.
Keywords: Bootstrap, diabetes risk groups, error rate, k-nearest neighbors.
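A compact sketch of the comparison described above, bootstrapping an imbalanced three-class sample and scoring LDA and QDA (which accept prior probabilities in scikit-learn) against k-nearest neighbors; all feature values are synthetic stand-ins for the hospital data.

```python
# Bootstrap comparison of LDA/QDA (with explicit priors) and k-NN on an imbalanced
# three-class problem. All feature values are synthetic stand-ins for the hospital data.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import resample

rng = np.random.default_rng(0)
counts = [539, 30, 30]        # class 0 = non-risk, 1 = risk, 2 = diabetic (~90% / 5% / 5%)
X = np.vstack([rng.normal(loc=float(c), scale=1.0, size=(n, 4)) for c, n in enumerate(counts)])
y = np.repeat([0, 1, 2], counts)

priors = [0.90, 0.05, 0.05]   # priors follow the sorted class order 0, 1, 2
models = {
    "LDA": LinearDiscriminantAnalysis(priors=priors),
    "QDA": QuadraticDiscriminantAnalysis(priors=priors),
    "3-NN": KNeighborsClassifier(n_neighbors=3),
}

for name, model in models.items():
    errors = []
    for b in range(50):                                        # 50 bootstrap samples
        Xb, yb = resample(X, y, random_state=b)                # same size as the original data
        errors.append(1.0 - model.fit(Xb, yb).score(X, y))     # error on the original data
    print(f"{name}: mean misclassification rate = {np.mean(errors):.3f}")
```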
6791 Analysis on the Development and Evolution of China's Territorial Spatial Planning
Authors: He YuanYan
Abstract:
In recent years, China has been reforming its territorial spatial planning system. As an important public policy, territorial spatial planning plays a vital role in the construction and development of cities. Territorial spatial planning is in full swing throughout the country, but there are still many disputes among stakeholders. The content, scope, and specific implementation process of territorial spatial planning remain ambiguous, leading to problems during implementation such as unclear authority, unclear responsibilities, and poor planning results when multiple plans are integrated. Therefore, it is necessary to review the development and evolution of territorial spatial planning at home and abroad, clarify the problems and their causes in the current state of China's territorial spatial planning, and identify the obstacles to, and countermeasures for, implementing this policy, so as to deepen understanding of the connotation of territorial spatial planning. It is of great practical significance for planners to correctly understand the specific contents and methods of territorial spatial planning and to smoothly promote its implementation at all levels.
Keywords: Territorial spatial planning, public policy, land space, overall planning.
6790 Localizing and Experiencing Electronic Questionnaires in an Educational Web Site
Authors: Theodore H. Kaskalis
Abstract:
One of the main research methods in the humanities is the collection and processing of data through questionnaires. This paper reports our experience of localizing and adapting the phpESP package for electronic surveys, which led to a user-friendly online questionnaire environment offered through our department web site. After presenting the characteristics of this environment, we identify the expected benefits and present a questionnaire carried out both in the traditional way and electronically. We present the respondents' feedback and then report the researchers' opinions. Finally, we propose ideas we intend to implement in order to further assist and enhance research based on this web-accessed electronic questionnaire environment.
Keywords: Electronic questionnaires, Computer-assisted web interviewing, Survey data collection, Survey data visualization.
6789 Fuzzy Control of a Quarter-Car Suspension System
Authors: M. M. M. Salem, Ayman A. Aly
Abstract:
An active suspension system is proposed to improve ride comfort. A quarter-car, 2 degree-of-freedom (DOF) system is designed and constructed, on the basis of the concept of a four-wheel independent suspension, to simulate the actions of an active vehicle suspension system. The purpose of a suspension system is to support the vehicle body and increase ride comfort. The aim of the work described in the paper is to illustrate the application of fuzzy logic to the control of a continuously damping automotive suspension system. Ride comfort is improved by reducing the body acceleration caused by road disturbances, from smooth roads as well as from real road roughness. The paper also describes the model and controller used in the study and discusses the vehicle response results obtained from a range of road input simulations. In the conclusion, a comparison of fuzzy-controlled active suspension and Proportional-Integral-Derivative (PID) control is shown using MATLAB simulations.
Keywords: Fuzzy logic control, ride comfort, vehicle dynamics, active suspension system, quarter-car model.
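The fuzzy-inference step can be illustrated with a stripped-down, single-input controller: triangular membership functions, three rules and centroid defuzzification. The memberships, rules and force range below are invented for illustration; they are not the controller tuned in the paper, and the full quarter-car model is omitted.

```python
# Minimal Mamdani-style fuzzy controller sketch: one input (body vertical velocity),
# one output (actuator force). Membership functions and rules are invented for
# illustration and are not the controller tuned in the paper.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with vertices a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

force_axis = np.linspace(-1000.0, 1000.0, 401)   # candidate actuator forces, N
out_sets = {"push_down": (-1000, -500, 0), "zero": (-500, 0, 500), "push_up": (0, 500, 1000)}

def fuzzy_force(body_vel):
    # Fuzzify the body velocity (m/s) into negative / zero / positive.
    vel = {"neg": tri(body_vel, -1.0, -0.5, 0.0),
           "zero": tri(body_vel, -0.5, 0.0, 0.5),
           "pos": tri(body_vel, 0.0, 0.5, 1.0)}
    # Three rules: oppose an upward-moving body, push up a downward-moving one, else do nothing.
    rules = [(vel["pos"], "push_down"), (vel["zero"], "zero"), (vel["neg"], "push_up")]
    # Clip each output set by its rule strength, aggregate by max, defuzzify by centroid.
    agg = np.zeros_like(force_axis)
    for strength, out in rules:
        agg = np.maximum(agg, np.minimum(strength, tri(force_axis, *out_sets[out])))
    return float((force_axis * agg).sum() / (agg.sum() + 1e-12))

for v in (-0.6, 0.0, 0.6):
    print(f"body velocity {v:+.1f} m/s -> actuator force {fuzzy_force(v):+.1f} N")
```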
6788 Design and Implementation of Security Middleware for Data Warehouse Signature Framework
Authors: Mayada AlMeghari
Abstract:
Recently, grid middleware has enabled the large-scale, integrated use of network resources, such as shared data and CPUs, to form a virtual supercomputer. In this work, we present the design and implementation of the middleware for the Data Warehouse Signature (DWS) framework. The aim of using the middleware in the proposed DWS framework is to achieve high performance through parallel computing. The middleware is developed on the Alchemi.Net framework to increase security among the network nodes through an authentication and group-key distribution model. This model achieves key security and prevents intermediate attacks on the middleware. The paper presents the flow and process structures of the middleware design. In addition, the paper describes how the security of the DWS middleware is enhanced with the authentication and group-key distribution model. Finally, analysis of other middleware approaches shows that the developed DWS framework middleware offers the most complete coverage of security issues.
Keywords: Middleware, parallel computing, data warehouse, security, group-key, high performance.
6787 Using Serious Games to Improve the Preparation of Pre-Service Teachers in Bulgaria
Authors: Rumyana Peytcheva-Forsyth, Blagovesna Yovkova
Abstract:
This paper presents the outcomes of a qualitative study investigating the pedagogical potential of serious games in the preparation of future teachers. The authors discuss the existing problems and barriers associated with the organization of teaching practices in Bulgaria as part of pre-service teacher training, as well as the attitudes and perceptions of the interviewed academics, teachers and trainees concerning the integration of serious games into the teaching practicum. The study outcomes strongly confirm the positive attitudes of the respondents toward the introduction of virtual learning environments for developing the professional skills of future teachers as a supplement to traditional forms of education. The inclusion of serious games is expected to improve the quality of the practical training of pre-service teachers, as it addresses many of the problems identified in existing teaching practices. The outcomes of the study will inform the design of the educational simulation software developed as part of the project SimAula Tomorrow's Teachers Training.
Keywords: pre-service teacher training, serious games, virtual practicum, simulations.
6786 Re-Optimization MVPP Using Common Subexpression for Materialized View Selection
Authors: Boontita Suchyukorn, Raweewan Auepanwiriyakul
Abstract:
A data warehouse is a repository of information integrated from source data. Information stored in a data warehouse is materialized in order to provide better performance for answering queries. Deciding which views are appropriate to materialize is an important problem. To meet this requirement, constructing a search space close to the optimal one is a necessary task, as it yields effective results when selecting views to materialize. In this paper we propose an approach to re-optimize the Multiple View Processing Plan (MVPP) by using global common subexpressions. Merged queries whose query processing cost is not close to optimal are rewritten. The experiments show that our approach helps to improve the total query processing cost of the MVPP, and that the sum of the query processing cost and the materialized view maintenance cost is also reduced after views are selected for materialization.
Keywords: Data Warehouse, materialized views, query rewriting, common subexpressions.
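To illustrate the idea of sharing common subexpressions across queries (only the sharing step, not the paper's full cost-based MVPP re-optimization), the sketch below represents each query as a set of relational operations and greedily picks the subexpression reused by the most queries. The query plans are toy examples.

```python
# Toy common-subexpression detection across queries. Each query is modeled as a set of
# operation strings; the greedy pick below only illustrates sharing, not the full
# MVPP cost-based re-optimization described in the paper.
from itertools import combinations

queries = {
    "Q1": {"scan(sales)", "join(sales,product)", "group(product)"},
    "Q2": {"scan(sales)", "join(sales,product)", "filter(year=2023)"},
    "Q3": {"scan(sales)", "join(sales,customer)", "group(customer)"},
}

# Candidate shared subexpressions: intersections of every pair of query plans.
candidates = {}
for (a, qa), (b, qb) in combinations(queries.items(), 2):
    shared = frozenset(qa & qb)
    if shared:
        candidates.setdefault(shared, set()).update({a, b})

# Greedy choice: the subexpression reused by the most queries (ties broken by size).
best = max(candidates.items(), key=lambda kv: (len(kv[1]), len(kv[0])))
print("materialization candidate:", sorted(best[0]))
print("shared by queries:", sorted(best[1]))
```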
6785 Clustering Categorical Data Using the K-Means Algorithm and the Attribute's Relative Frequency
Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami
Abstract:
Clustering is a well-known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm. In this context, two algorithms have been proposed: k-means for clustering numeric datasets and k-modes for categorical datasets. A common problem in data mining applications is the clustering of categorical datasets, which are widespread in practice. One way to achieve clustering on categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead of k-modes. In this paper, we propose and experiment with an approach based on this idea, transforming the categorical values into numeric ones using the relative frequency of each modality of the attributes. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally. The obtained results show that our proposed method outperforms the binary method in all cases.
Keywords: Clustering, k-means, categorical datasets, pattern recognition, unsupervised learning, knowledge discovery.
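A minimal sketch of the encoding idea described above: replace each categorical value by the relative frequency of its modality within the attribute, then run ordinary k-means on the numeric result (toy data; scikit-learn's KMeans is used for brevity).

```python
# Encode categorical attributes by the relative frequency of each modality, then apply
# ordinary k-means to the resulting numeric matrix (toy data for illustration).
import numpy as np
from sklearn.cluster import KMeans

records = [
    ("red",   "small", "yes"),
    ("red",   "large", "yes"),
    ("blue",  "small", "no"),
    ("blue",  "large", "no"),
    ("green", "small", "no"),
    ("red",   "small", "yes"),
]

def frequency_encode(rows):
    rows = np.array(rows, dtype=object)
    encoded = np.zeros(rows.shape, dtype=float)
    for j in range(rows.shape[1]):                        # one attribute at a time
        values, counts = np.unique(rows[:, j], return_counts=True)
        freq = dict(zip(values, counts / rows.shape[0]))  # modality -> relative frequency
        encoded[:, j] = [freq[v] for v in rows[:, j]]
    return encoded

X = frequency_encode(records)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(X.round(2))
print("cluster labels:", labels)
```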
6784 Early-Warning Lights Classification Management System for Industrial Parks in Taiwan
Authors: Yu-Min Chang, Kuo-Sheng Tsai, Hung-Te Tsai, Chia-Hsin Li
Abstract:
This paper presents the early-warning lights classification management system for industrial parks promoted by the Taiwan Environmental Protection Administration (EPA) since 2011, including the definition of each early-warning light, the objectives, the action program and the accomplishments. All 151 industrial parks in Taiwan were classified into four early-warning lights (red, orange, yellow and green) for carrying out the corresponding pollution management, according to the monitoring data on soil and groundwater quality, regulatory compliance, and regulatory listing as a control site or remediation site. The Taiwan EPA set up a priority list of industrial parks with high pollution potential and investigated their soil and groundwater quality based on the results of the light classification and the pollution potential assessment. In 2011-2013, 44 industrial parks were selected for different investigations, such as the establishment of early-warning groundwater well networks and pollution investigation/verification for the red- and orange-light industrial parks, and environmental background surveys for the yellow-light industrial parks. In 22 of them, pollutant concentrations were newly or repeatedly confirmed to exceed the soil or groundwater pollution control standards. Thus, further investigation, groundwater use restrictions, listing as pollution control or remediation sites, and pollutant isolation measures were implemented by the local environmental protection and industry competent authorities, and the early-warning lights of those industrial parks were proposed for adjustment to orange or red. Up to the present, preliminary positive effects of the soil and groundwater quality management system for industrial parks have been noticed in several aspects, such as environmental background information collection, early warning of pollution risk, pollution investigation and control, information integration and application, and inter-agency collaboration. Finally, self-initiated quality management by industrial parks will be pursued on the basis of inter-agency collaboration, supported by the classified early-warning light system and the regular announcement of the status of each industrial park.
Keywords: Industrial park, soil and groundwater quality management, early-warning lights classification, SOP for reporting and treatment of monitored abnormal events.
6783 A New Direct Updating Method for Undamped Structural Systems
Authors: Yongxin Yuan, Jiashang Jiang
Abstract:
A new numerical method for simultaneously updating mass and stiffness matrices based on incomplete measured modal data is presented. By using the Kronecker product, all the variables to be modified can be identified and updated directly. The optimal approximations of the mass and stiffness matrices that satisfy the required eigenvalue equation and the orthogonality condition are found in the Frobenius norm sense. The physical configuration of the analytical model is preserved, and the updated model exactly reproduces the measured modal data. A numerical example indicates that the method is accurate and efficient.
Keywords: Finite element model, model updating, modal data, optimal approximation.
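For readers who want the optimization stated explicitly, a standard formulation consistent with this abstract (eigenvalue equation plus mass orthogonality as constraints, optimal approximation in the Frobenius norm) is given below, with M_a and K_a the analytical mass and stiffness matrices and (Lambda, Phi) the measured eigenvalues and mode shapes; the paper may impose further symmetry or definiteness constraints.

\[
\min_{M,\,K}\; \|M - M_a\|_F^2 + \|K - K_a\|_F^2
\quad \text{subject to} \quad
K\Phi = M\Phi\Lambda, \qquad \Phi^{\mathsf{T}} M \Phi = I .
\]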
6782 Positioning a Southern Inclusive Framework Embedded in the Social Model of Disability Theory Contextualized for Guyana
Authors: Lidon Lashley
Abstract:
This paper presents how the social model of disability can be used to reshape inclusive education practices in Guyana. Inclusive education in Guyana is metamorphosing but is still firmly held in the tenets of the medical model of disability, which shapes the experiences of children with Special Education Needs and/or Disabilities (SEN/D). An ethnographic approach to data gathering was employed in this study. Qualitative data were gathered from the voices of children with and without SEN/D, as well as from their mainstream teachers, to present the interplay of discourses and subjectivities in the situation. The data were analyzed using Adele Clarke's situational analysis. The data suggest that it is possible, but will be challenging, to fully contextualize and adopt Loreman's synthesis and Booth and Ainscow's Index in the two mainstream schools studied. In addition, the data paved the way for the presentation of the 'Southern Inclusive Education Framework for Guyana' and its support tool, 'The Inclusive Checker', created for Southern mainstream primary classrooms.
Keywords: Social Model of Disability, Medical Model of Disability, subjectivities, metamorphosis, special education needs, postcolonial Guyana, Quasi-inclusion practices, Guyanese cultural challenges, mainstream primary schools, Loreman's Synthesis, Booth and Ainscow's Index.
6781 Using Genetic Programming to Evolve a Team of Data Classifiers
Authors: Gregor A. Morrison, Dominic P. Searson, Mark J. Willis
Abstract:
The purpose of this paper is to demonstrate the ability of a genetic programming (GP) algorithm to evolve a team of data classification models. The GP algorithm used in this work is "multigene" in nature, i.e. there are multiple tree structures (genes) that are used to represent team members. Each team member assigns a data sample to one of a fixed set of output classes. A majority vote, determined using the mode (most frequent occurrence) of the classes predicted by the individual genes, is used to determine the final class prediction. The algorithm is tested on a binary classification problem. For the case study investigated, compact classification models are obtained with accuracy comparable to alternative approaches.
Keywords: classification, genetic programming.
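The majority-vote step is easy to illustrate on its own: given the class predicted by each gene for every sample, take the mode across genes. The prediction matrix below is a toy example; the GP evolution of the genes themselves is not shown.

```python
# Majority vote across team members (genes): the final class is the mode of the
# individual predictions. The prediction matrix below is a toy example.
import numpy as np

# rows = genes (team members), columns = data samples, values = predicted class (0/1)
gene_predictions = np.array([
    [0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
])

votes_for_one = gene_predictions.sum(axis=0)               # how many genes predict class 1
majority = (votes_for_one > gene_predictions.shape[0] / 2).astype(int)
print("final team prediction per sample:", majority)       # -> [0 1 1 0 1]
```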
6780 COTT – A Testability Framework for Object-Oriented Software Testing
Authors: A. Goel, S. C. Gupta, S. K. Wasan
Abstract:
Testable software has two inherent properties: observability and controllability. Observability facilitates observation of the internal behavior of software to the required degree of detail. Controllability allows creation of difficult-to-achieve states prior to the execution of various tests. In this paper, we describe COTT, a Controllability and Observability Testing Tool, for creating testable object-oriented software. COTT provides a framework that helps the user instrument object-oriented software to build in the required controllability and observability. During testing, the tool facilitates the creation of difficult-to-achieve states required for testing difficult-to-test conditions, and the observation of internal details of execution at the unit, integration and system levels. The execution observations are logged in a test log file, which is used for post-analysis and to generate test coverage reports.
Keywords: Controllability, Observability, Test Coverage and Testing Tool.