Search results for: data exchange.
7340 Investigating Student Behavior in Adopting Online Formative Assessment Feedback
Authors: Peter Clutterbuck, Terry Rowlands, Owen Seamons
Abstract:
In this paper we describe one critical research program within a complex, ongoing multi-year project (2010 to 2014 inclusive) with the overall goal of improving the learning outcomes for first year undergraduate commerce/business students within an Information Systems (IS) subject with very large enrolment. The single research program described in this paper is the analysis of student attitudes and decision making in relation to the availability of formative assessment feedback via Web-based real time conferencing and document exchange software (Adobe Connect). The formative assessment feedback between teaching staff and students is in respect of an authentic problem-based, team-completed assignment. The analysis of student attitudes and decision making is investigated via qualitative (first) and quantitative (second) application of the Theory of Planned Behavior (TPB) with two statistically significant and separate trial samples of the enrolled students. The initial qualitative TPB investigation revealed that perceived self-efficacy, improved time-management, and lecturer-student relationship building were the major factors in shaping an overall favorable student attitude to online feedback, whilst some students expressed valid concerns with perceived control limitations identified within the online feedback protocols. The subsequent quantitative TPB investigation then confirmed that attitude towards usage, subjective norms surrounding usage, and perceived behavioral control of usage were all significant in shaping student intention to use the online feedback protocol, with these three variables explaining 63 percent of the variance in the behavioral intention to use the online feedback protocol. The identification in this research of perceived behavioral control as a significant determinant in student usage of a specific technology component within a virtual learning environment (VLE) suggests that VLEs could now be viewed not as a single, atomic entity, but as a spectrum of technology offerings ranging from the mature and simple (e.g., email, Web downloads) to the cutting-edge and challenging (e.g., Web conferencing and real-time document exchange). That is, not all VLEs should be considered the same. The results of this research suggest that tertiary students have the technological sophistication to assess a VLE in this more selective manner.
Keywords: Formative assessment feedback, virtual learning environment, theory of planned behavior, perceived behavioral control.
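As a rough illustration of the reported regression result (attitude, subjective norm and perceived behavioural control jointly explaining 63 percent of the variance in intention), the sketch below fits the same style of multiple regression on synthetic Likert-type scores; the coefficients and data are hypothetical, not the study's survey responses.

```python
# Hypothetical TPB-style regression: behavioural intention regressed on
# attitude, subjective norm and perceived behavioural control.
# All data below are synthetic, not the study's survey responses.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
attitude = rng.normal(5, 1, n)            # 7-point Likert style scores (assumed)
subjective_norm = rng.normal(4.5, 1, n)
control = rng.normal(4, 1, n)
intention = 0.5 * attitude + 0.3 * subjective_norm + 0.4 * control + rng.normal(0, 1, n)

X = np.column_stack([attitude, subjective_norm, control])
model = LinearRegression().fit(X, intention)
print("coefficients:", model.coef_)
print("R^2 (share of variance explained):", model.score(X, intention))
```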
7339 Mobile Phone as a Tool for Data Collection in Field Research
Authors: Sandro Mourão, Karla Okada
Abstract:
The necessity of accurate and timely field data is shared among organizations engaged in fundamentally different activities, from public services to commercial operations. Basically, there are three major components in the process of qualitative research: data collection, interpretation and organization of data, and the analytic process. Representative technological advancements have been made in mobile devices (mobile phones, PDAs, tablets, laptops, etc.), resources that can potentially be applied to the data collection activity of field research in order to improve this process. This paper presents and discusses the main features of a mobile phone based solution for field data collection, composed of three modules: a survey editor, a server web application and a client mobile application. The data gathering process begins with the survey creation module, which enables the production of tailored questionnaires. The field workforce receives the questionnaire(s) on their mobile phones to collect the interview responses and send them back to a server for immediate analysis.
Keywords: Data Gathering, Field Research, Mobile Phone, Survey.
7338 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis
Authors: N. R. N. Idris, S. Baharom
Abstract:
A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on the overall meta-analysis estimates based on IPD-only, AD-only and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square-error (RMSE) and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in terms of accuracy of the estimates. Additionally, combining the IPD and AD has a moderating effect on the bias of the estimates of the treatment effects, as the IPD tends to overestimate the treatment effects, while the AD has the tendency to produce underestimated effect estimates. These results may provide some guidance in deciding if significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
Keywords: Aggregate data, combined-level data, individual patient data, meta-analysis.
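The efficiency measures quoted above (percentage relative bias, RMSE and coverage probability) can be computed from replicate estimates as in the minimal sketch below; the replicate values are simulated stand-ins, not the paper's scenarios.

```python
import numpy as np

def prb_rmse_coverage(estimates, ses, true_effect, z=1.96):
    """Percentage relative bias, RMSE and 95% CI coverage over replicates."""
    estimates = np.asarray(estimates, dtype=float)
    ses = np.asarray(ses, dtype=float)
    prb = 100.0 * (estimates.mean() - true_effect) / true_effect
    rmse = np.sqrt(np.mean((estimates - true_effect) ** 2))
    lower, upper = estimates - z * ses, estimates + z * ses
    coverage = np.mean((lower <= true_effect) & (true_effect <= upper))
    return prb, rmse, coverage

# Synthetic replicate estimates of a treatment effect whose true value is 0.5.
rng = np.random.default_rng(1)
est = rng.normal(0.52, 0.1, 1000)   # slight overestimation, illustrative only
se = np.full(1000, 0.1)
print(prb_rmse_coverage(est, se, true_effect=0.5))
```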
7337 Multivariate Assessment of Mathematics Test Scores of Students in Qatar
Authors: Ali Rashash Alzahrani, Elizabeth Stojanovski
Abstract:
Data on various aspects of education are collected regularly at the institutional and government level. In Australia, for example, students at various levels of schooling undertake examinations in numeracy and literacy as part of NAPLAN testing, enabling longitudinal assessment of such data as well as comparisons between schools and states within Australia. Another source of educational data collected internationally is the PISA study, which collects data from several countries when students are approximately 15 years of age and enables comparisons of performance in science, mathematics and English between countries, as well as ranking of countries based on performance in these standardised tests. As well as student and school outcomes based on the tests taken as part of the PISA study, there is a wealth of other data collected in the study, including parental demographic data and data related to teaching strategies used by educators. Overall, an abundance of educational data is available which has the potential to be used to help improve educational attainment and the teaching of content in order to improve learning outcomes. A multivariate assessment of such data enables multiple variables to be considered simultaneously and will be used in the present study to help develop profiles of students based on performance in mathematics using data obtained from the PISA study.
Keywords: Cluster analysis, education, mathematics, profiles.
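A minimal sketch of the kind of cluster-based profiling described above, using synthetic PISA-like scores rather than the actual PISA microdata:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for PISA-like data: rows are students, columns are two
# mathematics sub-scores plus an illustrative home-resources index.
rng = np.random.default_rng(2)
scores = np.vstack([
    rng.normal([520, 510, 0.5], [40, 40, 0.3], size=(150, 3)),   # higher-performing profile
    rng.normal([420, 430, -0.4], [40, 40, 0.3], size=(150, 3)),  # lower-performing profile
])

X = StandardScaler().fit_transform(scores)                        # standardize before clustering
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for k in range(2):
    print(f"profile {k}: mean raw values =", scores[labels == k].mean(axis=0).round(1))
```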
7336 DIVAD: A Dynamic and Interactive Visual Analytical Dashboard for Exploring and Analyzing Transport Data
Authors: Tin Seong Kam, Ketan Barshikar, Shaun Tan
Abstract:
The advances in location-based data collection technologies such as GPS, RFID, etc. and the rapid reduction of their costs provide us with a huge and continuously increasing amount of data about the movement of vehicles, people and goods in an urban area. This explosive growth of geospatially-referenced data has far outpaced the planner's ability to utilize and transform the data into insightful information, thus creating an adverse impact on the return on the investment made to collect and manage this data. Addressing this pressing need, we designed and developed DIVAD, a dynamic and interactive visual analytics dashboard to allow city planners to explore and analyze the city's transportation data to gain valuable insights about the city's traffic flow and transportation requirements. We demonstrate the potential of DIVAD through the use of interactive choropleth and hexagon binning maps to explore and analyze large taxi-transportation data of Singapore for different geographic and time zones.
Keywords: Geographic Information System (GIS), Movement Data, GeoVisual Analytics, Urban Planning.
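The hexagon binning used by DIVAD can be reproduced with standard plotting tools; the sketch below uses randomly generated points standing in for taxi pickups, with illustrative coordinate ranges rather than the Singapore dataset.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic pickup coordinates standing in for taxi trip origins (illustrative ranges).
rng = np.random.default_rng(3)
lon = rng.normal(103.85, 0.05, 20000)
lat = rng.normal(1.30, 0.04, 20000)

fig, ax = plt.subplots(figsize=(6, 5))
hb = ax.hexbin(lon, lat, gridsize=40, mincnt=1)   # hexagon binning of point density
fig.colorbar(hb, ax=ax, label="pickups per hexagon")
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_title("Hexagon-binned pickup density (synthetic data)")
plt.show()
```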
7335 Gene Expression Data Classification Using Discriminatively Regularized Sparse Subspace Learning
Authors: Chunming Xu
Abstract:
Sparse representation, which can represent high dimensional data effectively, has been successfully used in computer vision and pattern recognition problems. However, it does not consider the label information of data samples. To overcome this limitation, we develop a novel dimensionality reduction algorithm, namely discriminatively regularized sparse subspace learning (DR-SSL), in this paper. The proposed DR-SSL algorithm can not only make use of the sparse representation to model the data, but can also effectively employ the label information to guide the procedure of dimensionality reduction. In addition, the presented algorithm can effectively deal with the out-of-sample problem. The experiments on gene-expression data sets show that the proposed algorithm is an effective tool for dimensionality reduction and gene-expression data classification.
Keywords: sparse representation, dimensionality reduction, label information, sparse subspace learning, gene-expression data classification.
7334 Determining Cluster Boundaries Using Particle Swarm Optimization
Authors: Anurag Sharma, Christian W. Omlin
Abstract:
The self-organizing map (SOM) is a well-known data reduction technique used in data mining. Data visualization can reveal structure in data sets that is otherwise hard to detect from raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by SOMs, but they generally do not take into account the distribution of code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of a generic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOMs. The application of our method to unlabeled call data for a mobile phone operator demonstrates its feasibility. The PSO algorithm utilizes the U-matrix of SOMs to determine cluster boundaries; the results of this novel automatic method correspond well to boundary detection through visual inspection of code vectors and to the k-means algorithm.
Keywords: Particle swarm optimization, self-organizing maps, clustering, data mining.
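A minimal sketch of the generic global-best PSO loop referred to above; it minimizes an arbitrary objective and is not tied to the SOM U-matrix formulation or the call data used in the paper.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    """Minimal global-best particle swarm optimizer (illustrative parameter choices)."""
    rng = np.random.default_rng(4)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function in three dimensions.
print(pso(lambda p: float(np.sum(p ** 2)), dim=3))
```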
7333 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm
Authors: Ameur Abdelkader, Abed Bouarfa Hafida
Abstract:
Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between the explanatory variables and the predicted variables. Past occurrences are exploited to predict and to derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics in order to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis in the case of large amounts of data. In fact, because of their volume, their nature (semi- or unstructured) and their variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of calculation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and then a version of the extended algorithm is defined in order to make it applicable to a huge quantity of data.
Keywords: Predictive analysis, big data, predictive analysis algorithms, CART algorithm.
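The classical CART baseline that the authors extend can be sketched with an off-the-shelf tree implementation, as below on synthetic data; the parallel/distributed extension itself is not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a large predictive-analysis dataset.
rng = np.random.default_rng(5)
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # target driven by two explanatory variables

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
cart = DecisionTreeClassifier(criterion="gini", max_depth=5, random_state=0)  # CART-style tree
cart.fit(X_tr, y_tr)
print("hold-out accuracy:", cart.score(X_te, y_te))
```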
7332 Analysis of Supply Side Factors Affecting Bank Financing of Non-Oil Exports in Nigeria
Authors: Sama’ila Idi Ningi, Abubakar Yusuf Dutse
Abstract:
The banking sector poses a lot of problems in Nigeria in general and for the non-oil export sector in particular. The banks' lack of effectiveness in handling small, medium or long-term credit risk (lack of training of loan officers, lack of information on borrowers and absence of a reliable credit registry) results in non-oil exporters being burdened with high requirements, such as up to three years of financial statements, enough collateral to cover both the loan principal and interest (including a cash deposit that may be up to 30% of the loan's net present value), and providing every detail of the international trade transaction in question. The stated problems triggered this research. Consequently, information on bank financing of non-oil exports was collected from 100 respondents from the 20 Deposit Money Banks (DMBs) in Nigeria. The data were analysed by the use of descriptive statistics, correlation and regression. It is found that Nigerian banks are participants in the financing of non-oil exports. Despite their participation, the rate of interest for credit extended to non-oil exports is usually high, ranging between 15-20%. Small and medium sized non-oil export businesses lack the credit history for banks to judge them as reputable. Banks also consider the non-oil export sector very risky for investment. The banks actually grant less credit than the exporters may require, and the exporters are therefore not properly funded. Banks grant a very low volume of foreign currency loans, in addition to the unfavorable exchange rate at which the Naira is exchanged to the Dollar and other currencies in the country. This makes the importation of inputs costly and negatively impacts non-oil export performance in Nigeria.
Keywords: Supply Side Factors, Bank Financing, Non-Oil Exports.
7331 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on clusters containing multiple Central Processing Units (multi-CPU clusters) or clusters containing multiple Graphics Processing Units (multi-GPU clusters). For example, most attempts to implement the classical conjugate gradient method were at best completed in the same amount of time as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration, instead of two for standard CG (Conjugate Gradient). The standard and pipelined CG methods need the vector entries generated by the current GPU and the other GPUs for the matrix-vector product, so the communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communications between parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU for solving large linear systems allows for scalability. The algorithm is implemented with the combined use of the technologies MPI, OpenMP and CUDA. We show that an almost optimum speedup on 8 CPUs/2 GPUs may be reached (relative to a single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
Keywords: Conjugate Gradient, GPU, parallel programming, pipelined algorithm.
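A serial reference sketch of the pipelined CG recurrences (one fused reduction per iteration, in the style of Ghysels and Vanroose) is given below; GPU offloading, MPI communication and overlap are omitted, so this illustrates only the numerical scheme, not the authors' heterogeneous implementation.

```python
import numpy as np

def pipelined_cg(A, b, tol=1e-8, max_iter=500):
    """Unpreconditioned pipelined CG: the two dot products per iteration form
    a single reduction that can be overlapped with the matrix-vector product."""
    x = np.zeros_like(b)
    r = b - A @ x
    w = A @ r
    z = np.zeros_like(b); s = np.zeros_like(b); p = np.zeros_like(b)
    gamma_old = alpha = 1.0
    for i in range(max_iter):
        gamma = r @ r
        delta = w @ r
        n_vec = A @ w                      # overlappable with the reduction above
        if i == 0:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
        z = n_vec + beta * z               # z tracks A*s
        s = w + beta * s                   # s tracks A*p
        p = r + beta * p                   # search direction
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z
        gamma_old = gamma
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# Small SPD test system.
rng = np.random.default_rng(6)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.normal(size=50)
x = pipelined_cg(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```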
7330 Feedback-Controlled Server for Scheduling Aperiodic Tasks
Authors: Shinpei Kato, Nobuyuki Yamasaki
Abstract:
This paper proposes a scheduling scheme using feedback control to reduce the response time of aperiodic tasks with soft real-time constraints. We design an algorithm based on the proposed scheduling scheme and the Total Bandwidth Server (TBS), a conventional server technique for scheduling aperiodic tasks. We then describe the feedback controller of the algorithm and give the control parameter tuning methods. The simulation study demonstrates that the algorithm can reduce the mean response time by up to 26% compared to TBS, in exchange for slight deadline misses.
Keywords: Real-Time Systems, Aperiodic Task Scheduling, Feedback-Control Scheduling, Total Bandwidth Server.
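The underlying Total Bandwidth Server assigns each aperiodic job k a deadline d_k = max(r_k, d_{k-1}) + C_k / U_s, where U_s is the server utilization; a minimal sketch of that rule is shown below (the feedback controller itself is not reproduced).

```python
def tbs_deadlines(jobs, server_utilization):
    """Assign Total Bandwidth Server deadlines to aperiodic jobs.

    jobs: list of (release_time, execution_time) pairs, sorted by release time.
    Each job k receives d_k = max(r_k, d_{k-1}) + C_k / U_s.
    """
    deadlines = []
    d_prev = 0.0
    for release, exec_time in jobs:
        d = max(release, d_prev) + exec_time / server_utilization
        deadlines.append(d)
        d_prev = d
    return deadlines

# Three aperiodic jobs served by a TBS with utilization 0.25.
print(tbs_deadlines([(0.0, 1.0), (2.0, 0.5), (2.5, 1.0)], server_utilization=0.25))
# -> [4.0, 6.0, 10.0]
```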
7329 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach
Authors: Sarisa Pinkham, Kanyarat Bussaban
Abstract:
The research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with an RMSE of 3.343.
Keywords: Daily rainfall, Image processing, Approximation, Pixel value data.
7328 Automatic Generation of Ontology from Data Source Directed by Meta Models
Authors: Widad Jakjoud, Mohamed Bahaj, Jamal Bakkas
Abstract:
Through this paper we present a method for the automatic generation of an ontological model from any data source using Model Driven Architecture (MDA); this generation is dedicated to the cooperation of knowledge engineering and software engineering. Indeed, reverse engineering of a data source generates a software model (schema of data) that will undergo transformations to generate the ontological model. This method uses meta-models to validate the software and ontological models.
Keywords: Meta model, model, ontology, data source.
7327 Test Data Compression Using a Hybrid of Bitmask Dictionary and 2n Pattern Runlength Coding Methods
Authors: C. Kalamani, K. Paramasivam
Abstract:
In VLSI, testing plays an important role. Major problems in testing are test data volume and test power. An important solution to reducing test data volume and test time is test data compression. The proposed technique combines the bitmask dictionary and 2n pattern run-length coding methods and provides a substantial improvement in the compression efficiency without introducing any additional decompression penalty. This method has been implemented using Matlab and an HDL language to reduce test data volume and memory requirements. The method is applied to various benchmark test sets and the results are compared with other existing methods. The proposed technique can achieve a compression ratio of up to 86%.
Keywords: Bitmask dictionary, 2n pattern run-length code, system-on-chip, SOC, test data compression.
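The run-length component of such a hybrid scheme can be illustrated with a toy encoder for runs in a binary test vector, as below; this does not reproduce the bitmask dictionary or the exact 2n pattern code of the paper.

```python
def run_length_encode(bits):
    """Encode a binary string as (symbol, run length) pairs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

test_vector = "000000001111000000000011"
runs = run_length_encode(test_vector)
print(runs)                                              # [('0', 8), ('1', 4), ('0', 10), ('1', 2)]
original_bits = len(test_vector)
encoded_bits = sum(1 + r.bit_length() for _, r in runs)  # crude size estimate of the encoded stream
print("compression ratio estimate:", 1 - encoded_bits / original_bits)
```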
7326 A Hybrid Data Mining Method for the Medical Classification of Chest Pain
Authors: Sung Ho Ha, Seong Hyeon Joo
Abstract:
Data mining techniques have been used in medical research for many years and are known to be effective. In order to solve such problems as long waiting times, congestion, and delayed patient care faced by emergency departments, this study concentrates on building a hybrid methodology, combining data mining techniques such as association rules and classification trees. The methodology is applied to real-world emergency data collected from a hospital and is evaluated by comparing it with other techniques. The methodology is expected to help physicians make a faster and more accurate classification of chest pain diseases.
Keywords: Data mining, medical decisions, medical domain knowledge, chest pain.
7325 Dynamic Programming Based Algorithm for the Unit Commitment of the Transmission-Constrained Multi-Site Combined Heat and Power System
Authors: A. Rong, P. B. Luh, R. Lahdelma
Abstract:
High penetration of intermittent renewable energy sources (RES), such as solar power and wind power, into the energy system has caused temporal and spatial imbalance between electric power supply and demand in some countries and regions. This brings about the critical need for coordinating power production and power exchange between different regions. As compared with power-only systems, combined heat and power (CHP) systems can provide additional flexibility for utilizing RES by exploiting the interdependence of power and heat production in the CHP plant. In the CHP system, power production can be influenced by adjusting the heat production level, and electric power can be used to satisfy heat demand by electric boilers or heat pumps in conjunction with heat storage, which is much cheaper than electric storage. This paper addresses multi-site CHP systems without considering RES, which lays the foundation for handling the penetration of RES. The problem under study is the unit commitment (UC) of transmission-constrained multi-site CHP systems. We solve the problem by combining linear relaxation of ON/OFF states and sequential dynamic programming (DP) techniques, where relaxed states are used to reduce the dimension of the UC problem and DP is used to improve the solution quality. Numerical results for daily scheduling with realistic models and data show that the DP-based algorithm is from a few to a few hundred times faster than CPLEX (standard commercial optimization software) with good solution accuracy (less than 1% relative gap from the optimal solution on average).
Keywords: Dynamic programming, multi-site combined heat and power system, relaxed states, transmission-constrained generation unit commitment.
7324 Knowledge Discovery and Data Mining Techniques in Textile Industry
Authors: Filiz Ersoz, Taner Ersoz, Erkin Guler
Abstract:
This paper addresses the issues and techniques for the textile industry using data mining. Data mining has been applied to the stitching of garment products obtained from a textile company. The data mining techniques applied to the obtained data were the CHAID algorithm, the CART algorithm, regression analysis and artificial neural networks. Classification-based analyses were used during data mining, and a decision model about production per person and the variables affecting production was found by this method. In the study, the results show that as the daily working time increases, the production per person decreases. In addition, the relationship between total daily working time and production per person is negative, and it is the strongest negative relationship observed.
Keywords: Data mining, textile production, decision trees, classification.
7323 Application and Limitation of Parallel Modeling in Multidimensional Sequential Pattern
Authors: Mahdi Esmaeili, Mansour Tarafdar
Abstract:
The goal of data mining algorithms is to discover useful information embedded in large databases. One of the most important data mining problems is the discovery of frequently occurring patterns in sequential data. In a multidimensional sequence, each event depends on more than one dimension. The search space is quite large, and the serial algorithms are not scalable for very large datasets. To address this, it is necessary to study scalable parallel implementations of sequence mining algorithms. In this paper, we present a model for multidimensional sequences and describe a parallel algorithm based on data parallelism. Simulation experiments show good load balancing and scalable and acceptable speedup over different processors and problem sizes, and demonstrate that our approach can work efficiently in a real parallel computing environment.
Keywords: Sequential Patterns, Data Mining, Parallel Algorithm, Multidimensional Sequence Data.
7322 Generator of Hypotheses: An Approach of Data Mining Based on Monotone Systems Theory
Authors: Rein Kuusik, Grete Lind
Abstract:
Generator of hypotheses is a new method for data mining. It makes it possible to classify the source data automatically and produces a particular enumeration of patterns. A pattern is an expression (in a certain language) describing facts in a subset of facts. The goal is to describe the source data via patterns and/or IF...THEN rules. The evaluation criteria used are deterministic (not probabilistic). The search results are trees, a form that is easy to comprehend and interpret. Generator of hypotheses uses a very effective algorithm based on the theory of monotone systems (MS), named MONSA (MONotone System Algorithm).
Keywords: data mining, monotone systems, pattern, rule.
7321 Categorical Data Modeling: Logistic Regression Software
Authors: Abdellatif Tchantchane
Abstract:
Matlab-based software for logistic regression is developed to enhance the process of teaching quantitative topics and assist researchers with analyzing the wide area of applications where categorical data are involved. The software offers the option of performing stepwise logistic regression to select the most significant predictors. The software includes a feature to detect influential observations in the data, and investigates the effect of dropping or misclassifying an observation on a predictor variable. The input data may consist either of a set of individual responses (yes/no) with the predictor variables or of grouped records summarizing various categories for each unique set of predictor variable values. Graphical displays are used to output various statistical results and to assess the goodness of fit of the logistic regression model. The software recognizes possible convergence constraints when present in the data, and the user is notified accordingly.
Keywords: Logistic regression, Matlab, Categorical data, Influential observation.
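The core fit performed by such a tool can be illustrated with a generic library implementation on synthetic yes/no responses, as in the sketch below; the stepwise selection and influence diagnostics described above are not reproduced, and the tool itself is Matlab-based.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic categorical (yes/no) responses driven by two predictors.
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 2))
logits = 1.2 * X[:, 0] - 0.8 * X[:, 1]
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
print("estimated coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted probability for a new observation:", model.predict_proba([[0.5, -0.2]])[0, 1])
```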
7320 Role of Association Rule Mining in Numerical Data Analysis
Authors: Sudhir Jagtap, Kodge B. G., Shinde G. N., Devshette P. M.
Abstract:
Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century the life sciences and even the arts have adopted elements of scientific computation. Numerical data analysis has become a key process in research and development in all of these fields [6]. In this paper we have made an attempt to analyze the specified numerical patterns with reference to association rule mining techniques, using minimum confidence and minimum support as mining criteria. The extracted rules and analyzed results are graphically demonstrated. Association rules are a simple but very useful form of data mining that describe the probabilistic co-occurrence of certain events within a database [7]. They were originally designed to analyze market-basket data, in which the likelihood of items being purchased together within the same transaction is analyzed.
Keywords: Numerical data analysis, Data Mining, Association Rule Mining.
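Minimum support and minimum confidence, the mining criteria mentioned above, can be made concrete on a toy set of transactions; the sketch below uses invented items and thresholds, not the paper's numerical patterns.

```python
from itertools import combinations

# Toy market-basket style transactions (illustrative only).
transactions = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "C"},
    {"B", "C"},
    {"A", "B", "C"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Conditional frequency of the consequent given the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

# Enumerate simple one-to-one rules that meet the minimum support and confidence.
items = sorted(set().union(*transactions))
for x, y in combinations(items, 2):
    rule_supp = support({x, y})
    rule_conf = confidence({x}, {y})
    if rule_supp >= 0.4 and rule_conf >= 0.6:
        print(f"{x} -> {y}: support={rule_supp:.2f}, confidence={rule_conf:.2f}")
```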
7319 Nonstationarity Modeling of Economic and Financial Time Series
Authors: C. Slim
Abstract:
Traditional techniques for analyzing time series are based on the notion of stationarity of the phenomena under study, but in reality most economic and financial series do not verify this hypothesis, which implies the implementation of specific tools for the detection of such behavior. In this paper, we study nonstationary, non-seasonal time series tests in a non-exhaustive manner. We formalize the problem of nonstationary processes with numerical simulations and take stock of their statistical characteristics. The theoretical aspects of some of the most common unit root tests will be discussed. We detail the specification of the tests, showing the advantages and disadvantages of each. The empirical study focuses on the application of these tests to the exchange rate (USD/TND) and the Consumer Price Index (CPI) in Tunisia, in order to compare the power of these tests with the characteristics of the series.
Keywords: Stationarity, unit root tests, economic time series.
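One of the most common unit root tests discussed above, the augmented Dickey-Fuller test, is available in standard statistical libraries; the sketch below applies it to a simulated random walk rather than the USD/TND or CPI series.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Simulated random walk: a nonstationary series with a unit root.
rng = np.random.default_rng(8)
series = np.cumsum(rng.normal(size=500))

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(series)
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
print("critical values:", crit)
# A large p-value means the unit-root (nonstationarity) hypothesis cannot be rejected.
```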
7318 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from the access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped into four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the former techniques.
Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.
7317 Correction of Infrared Data for Electrical Components on a Board
Authors: Seong-Ho Song, Ki-Seob Kim, Seop-Hyeong Park, Seon-Woo Lee
Abstract:
In this paper, a data correction algorithm is suggested for when the environmental air temperature varies. To correct the infrared data, the initial temperature or the initial infrared image data is used, so that a target source system may not be necessary. The temperature data obtained from an infrared detector show a nonlinear property depending on the surface temperature. In order to handle this nonlinear property, a Taylor series approach is adopted. It is shown that the proposed algorithm can reduce the influence of environmental temperature on the components on the board. The main advantage of this algorithm is that it uses only the initial temperature of the components on the board, rather than using another reference device such as black body sources to obtain reference temperatures.
Keywords: Infrared camera, Temperature Data compensation, Environmental Ambient Temperature, Electric Component.
7316 A Generalised Relational Data Model
Authors: Georgia Garani
Abstract:
A generalised relational data model is formalised for the representation of data with nested structure of arbitrary depth. A recursive algebra for the proposed model is presented. All the operations are formally defined. The proposed model is proved to be a superset of the conventional relational model (CRM). The functionality and validity of the model are shown by a prototype implementation that has been undertaken in the functional programming language Miranda.
Keywords: nested relations, recursive algebra, recursive nested operations, relational data model.
7315 WiFi Data Offloading: Bundling Method in a Canvas Business Model
Authors: Majid Mokhtarnia, Alireza Amini
Abstract:
Mobile operators deal with increasing data traffic as a critical issue. As a result, a vital responsibility of the operators is to deal with such a trend in order to create added value. This paper addresses a bundling method in a Canvas business model within a WiFi Data Offloading (WDO) strategy, by which some elements of the model may be affected. In the proposed method, a number of data packages are offered to subscribers, some of which include a complimentary volume of WiFi-offloaded data. The paper at hand analyses this method from the viewpoints of attractiveness and profitability. The results demonstrate that the quality of implementation of the WDO strongly affects the final result and helps the decision maker to make the best decision.
Keywords: Bundling, canvas business model, telecommunication, WiFi Data Offloading.
7314 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption
Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Moses Noel Dogonyaro
Abstract:
This paper describes the problem of building secure computational services for encrypted information in Cloud Computing without decrypting the encrypted data; it therefore meets the aspiration for a computational encryption model that could enhance the security of big data for the privacy, confidentiality and availability of its users. The cryptographic model applied for the computational processing of the encrypted data is the Fully Homomorphic Encryption Scheme. We contribute theoretical presentations of high-level computational processes, based on number theory and algebra, that can easily be integrated and leveraged in Cloud computing, with detailed theoretic mathematical concepts for the fully homomorphic encryption models. This contribution enhances the full implementation of a big data analytics based cryptographic security algorithm.
Keywords: Data Analytics, Security, Privacy, Bootstrapping, and Fully Homomorphic Encryption Scheme.
7313 Relationship between Financial Reporting Transparency and Investment Efficiency: Evidence from Iran
Authors: Bita Mashayekhi, Hamid Kalhornia
Abstract:
One of the most important roles of financial reporting is improving firms' investment decisions; however, there is not much supporting evidence for this claim in emerging markets like Iran. In this study, the effect of financial reporting transparency on the investment efficiency of Iranian firms has been investigated. In order to do this, 336 companies listed on the Tehran Stock Exchange (TSE) were selected for the time period 2012 to 2015 as the research sample. For testing our main hypothesis, we classified the sample firms into two groups based on their deviation from expected investment: under-investment and over-investment cases. The results indicate that there is a positive significant relationship between financial transparency and investment efficiency. In other words, transparency can mitigate both underinvestment and overinvestment situations.
Keywords: Corporate governance, disclosure, investment decisions, investment efficiency, transparency.
7312 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions
Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang
Abstract:
Fluidization at vacuum pressure has been a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum pressure fluidization. Particularly, the fine chemical industry requires processing under safe conditions for thermolabile substances, and reduced pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials where the reduced gas pressure maintains an operational environment outside of flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization. The different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated, to the authors' best knowledge. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. The slip flow model, implemented via Ansys Fluent User Defined Functions (UDF), was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied by a uniform inlet at 1.5Umf and 2Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and its hydrodynamics. Furthermore, they also show that the existing Gorosko correlation that predicts bed expansion is not applicable under reduced pressure conditions.
Keywords: Computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity.
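The flow-regime classification via the Knudsen number mentioned above can be illustrated by computing the gas mean free path at reduced pressure, as in the sketch below; the molecular diameter, temperature and particle size are generic air-like assumptions, not the paper's simulation settings.

```python
import numpy as np

K_B = 1.380649e-23        # Boltzmann constant, J/K
D_MOLECULE = 3.7e-10      # effective molecular diameter of air, m (generic assumption)

def knudsen_number(pressure_pa, temperature_k, length_scale_m):
    """Kn = mean free path / characteristic length (e.g. particle diameter)."""
    mean_free_path = K_B * temperature_k / (np.sqrt(2) * np.pi * D_MOLECULE**2 * pressure_pa)
    return mean_free_path / length_scale_m

def regime(kn):
    if kn < 0.01:
        return "continuum"
    if kn < 0.1:
        return "slip flow"
    if kn < 10:
        return "transitional"
    return "free molecular"

# Illustrative 100-micron particle at the operating pressures listed in the abstract.
for p_bar in (1.01, 0.5, 0.25, 0.1, 0.03):
    kn = knudsen_number(p_bar * 1e5, 300.0, 100e-6)
    print(f"{p_bar:>5} bar: Kn = {kn:.2e} ({regime(kn)})")
```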
7311 Hierarchical Clustering Algorithms in Data Mining
Authors: Z. Abdullah, A. R. Hamdan
Abstract:
Clustering is the process of grouping objects and data into clusters so that data objects from the same cluster are similar to each other. Clustering algorithms are one of the main areas of data mining, and they can be classified into partition-based, hierarchical, density-based and grid-based methods. Therefore, in this paper we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON and BIRCH. The obtained state of the art of these algorithms will help in eliminating the current problems as well as in deriving more robust and scalable algorithms for clustering.
Keywords: Clustering, method, algorithm, hierarchical, survey.
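A baseline agglomerative hierarchical clustering, of the family that the surveyed algorithms (CURE, ROCK, CHAMELEON and BIRCH) improve upon, can be sketched with SciPy on synthetic points:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two synthetic groups of points to be recovered by hierarchical clustering.
rng = np.random.default_rng(9)
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
])

Z = linkage(points, method="ward")                # agglomerative merge tree (dendrogram data)
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into two clusters
print("cluster sizes:", np.bincount(labels)[1:])
```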