Search results for: Large Data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8997


8667 Metadata Update Mechanism Improvements in Data Grid

Authors: S. Farokhzad, M. Reza Salehnamadi

Abstract:

Grid environments aggregate geographically distributed resources. Grids are generally put forward in three types: computational, data, and storage grids. This paper presents research on the data grid. A data grid is used to provide and secure access to data drawn from many heterogeneous sources; users need not worry about where the data are located, provided they can access them. Metadata are used to access data in a data grid. At present, application metadata catalogues and the SRB middleware package are used in data grids for metadata management. In this paper, simultaneous and rapid updating, streamlining, and searching are made possible by classifying the table that preserves the metadata and converting each table into numerous smaller tables. In addition, the most appropriate division is determined with regard to the specific application. As a result of this technique, some requests can be executed concurrently and pipelined execution becomes possible.

Keywords: Grids, data grid, metadata, update.

Downloads: 1699
8666 End-to-End Process to Automate Batch Application

Authors: Nagmani Lnu

Abstract:

Often, quality engineering refers to testing applications that have either a User Interface (UI) or an Application Programming Interface (API), and mature test practices, standards, and automation exist for UI and API testing. However, another kind of application is present in almost all industries that deal with data in bulk: the batch application. This is primarily an offline application that companies develop to process large data sets, often governed by multiple business rules. The challenge becomes more prominent when we try to automate batch testing. This paper describes the approaches taken to test a batch application from the financial industry covering the payment settlement process (a critical use case in all kinds of FinTech companies), resulting in 100% test automation in both test creation and test execution. One can follow this approach for other batch use cases to achieve higher efficiency in the testing process.
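A minimal sketch of the kind of end-to-end batch check described above, assuming a hypothetical batch entry point settle_batch.py that reads payments.csv and writes settlements.csv; the command, file layout, and the "settlement equals sum of its payments" rule are illustrative stand-ins, not the authors' actual pipeline.

```python
# Illustrative end-to-end check for a settlement batch job. The entry point
# settle_batch.py, the CSV layout, and the business rule are hypothetical.
import csv
import subprocess
from collections import defaultdict

def run_batch():
    # Hypothetical CLI entry point of the batch application under test.
    subprocess.run(["python", "settle_batch.py",
                    "--in", "payments.csv", "--out", "settlements.csv"],
                   check=True)

def test_settlement_totals():
    run_batch()
    expected = defaultdict(float)
    with open("payments.csv", newline="") as f:
        for row in csv.DictReader(f):
            expected[row["settlement_id"]] += float(row["amount"])
    with open("settlements.csv", newline="") as f:
        actual = {row["settlement_id"]: float(row["amount"])
                  for row in csv.DictReader(f)}
    assert actual.keys() == expected.keys()
    for sid, total in expected.items():
        assert abs(actual[sid] - total) < 0.01, f"settlement {sid} mismatch"

if __name__ == "__main__":
    test_settlement_totals()
```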

Keywords: Batch testing, batch test automation, batch test strategy, payments testing, payments settlement testing.

Downloads: 64
8665 Graph Codes - 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval

Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje

Abstract:

Multimedia Indexing and Retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based algorithms for similarity are quite inefficient and computation intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a graph projection into a 2D space (Graph Code) as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to a simpler processing model and a high level of parallelisation. In consequence, we prove that the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for Multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.
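A toy sketch of the Graph Code idea, assuming a shared node vocabulary: a small feature graph is projected onto a 2D matrix (node codes on the diagonal, edge-type codes off the diagonal) and two codes are compared with element-wise matrix operations instead of graph traversal. The encoding values and the similarity measure are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

def graph_code(nodes, edges, vocabulary):
    """Project a feature graph onto a 2D matrix over a fixed node vocabulary.

    nodes: {node_name: feature_code}, edges: {(src, dst): edge_type_code}.
    """
    idx = {name: i for i, name in enumerate(vocabulary)}
    m = np.zeros((len(vocabulary), len(vocabulary)), dtype=int)
    for name, code in nodes.items():
        m[idx[name], idx[name]] = code
    for (src, dst), code in edges.items():
        m[idx[src], idx[dst]] = code
    return m

def similarity(code_a, code_b):
    # Fraction of matching non-zero entries: a simple, parallelisable
    # element-wise comparison that needs no graph traversal.
    mask = (code_a != 0) | (code_b != 0)
    if not mask.any():
        return 1.0
    return float(((code_a == code_b) & mask).sum()) / mask.sum()

vocab = ["image", "person", "dog", "beach"]
g1 = graph_code({"image": 1, "person": 2, "dog": 3},
                {("image", "person"): 7, ("image", "dog"): 7}, vocab)
g2 = graph_code({"image": 1, "person": 2, "beach": 4},
                {("image", "person"): 7, ("image", "beach"): 7}, vocab)
print(similarity(g1, g2))
```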

Keywords: indexing, retrieval, multimedia, graph code, graph algorithm

Downloads: 443
8664 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers

Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice

Abstract:

In recent years, the telecommunications sector has been changing and developing continuously in the global market. In this sector, churn analysis techniques are commonly used for analysing why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector since it causes substantial business loss, and many companies carry out various studies in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed with the aim of obtaining the feature reducts used to predict customer churn. The framework is an optional cost-based pre-processing stage that removes redundant features for churn management. This cost-based feature selection algorithm is applied to data from a telecommunication company in Turkey, and the results obtained with the algorithm are reported.

Keywords: Churn prediction, data mining, decision-theoretic rough set, feature selection.

Downloads: 1763
8663 Fiber Lens Structure for Large Distance Measurement

Authors: Jaemyoung Lee

Abstract:

We propose a new fiber lens structure for large distance measurement in which a polymer layer is added to a conventional fiber lens. The proposed fiber lens can adjust the working distance by properly choosing the refractive index and thickness of the polymer layer. In our numerical analysis for a fiber lens radius of 120 μm, the working distance of the proposed fiber lens is about 10 mm, which is about 30 times larger than that of a conventional fiber lens.

Keywords: fiber lens, distance measurement, collimation.

Downloads: 1485
8662 A Novel In-Place Sorting Algorithm with O(n log z) Comparisons and O(n log z) Moves

Authors: Hanan Ahmed-Hosni Mahmoud, Nadia Al-Ghreimil

Abstract:

In-place sorting algorithms play an important role in many fields such as very large database systems, data warehouses, and data mining. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases. The first phase rearranges the unsorted input array in place, producing segments that are ordered relative to each other but whose elements are yet to be sorted; it requires linear time. In the second phase, the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. In the worst case, for an array of size n, the algorithm performs O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Beyond these theoretical achievements, the algorithm is of practical interest because of its simplicity, and experimental results show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and of the required number of moves is presented, along with the auxiliary storage requirements of the proposed algorithm.

Keywords: Auxiliary storage sorting, in-place sorting, sorting.

Downloads: 1909
8661 An Evolutionary Algorithm for Optimal Fuel-Type Configurations in Car Lines

Authors: Charalampos Saridakis, Stelios Tsafarakis

Abstract:

Although environmental concern is on the rise across Europe, current market data indicate that adoption rates of environmentally friendly vehicles remain extremely low. Against this background, the aim of this paper is to a) assess the preferences of European consumers for clean-fuel cars and their characteristics and b) design car lines that optimize the combination of fuel types among the models in the line-up. In this direction, the authors introduce a new evolutionary mechanism and apply it to stated-preference data derived from a large-scale choice-based conjoint experiment that measures consumer preferences for various factors affecting clean-fuel vehicle (CFV) adoption. The proposed two-step methodology provides interesting insights into how new and existing fuel types can be combined in a car line that maximizes customer satisfaction.
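A hedged sketch of the optimization step, assuming toy conjoint-style utilities: differential evolution (here SciPy's implementation, not the authors' own evolutionary mechanism) searches over fuel-type assignments for a small car line, maximizing a simple satisfaction objective.

```python
# Sketch of optimising fuel-type assignments for a car line with differential
# evolution. The utility matrix and the objective are toy assumptions; the
# paper's own mechanism and conjoint-derived part-worths differ.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_segments, n_models, n_fuels = 200, 4, 5   # consumers x car line size x fuel types
utility = rng.normal(size=(n_segments, n_fuels))  # stand-in part-worths per fuel type

def negative_satisfaction(x):
    # Round the continuous decision vector to one fuel type per model.
    fuels = np.clip(np.round(x), 0, n_fuels - 1).astype(int)
    # Each consumer picks the best model in the line; maximise total utility.
    best = utility[:, fuels].max(axis=1)
    return -best.sum()

result = differential_evolution(negative_satisfaction,
                                bounds=[(0, n_fuels - 1)] * n_models, seed=1)
print(np.round(result.x).astype(int), -result.fun)
```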

Keywords: Clean-fuel vehicles, product line design, conjoint analysis, choice experiment, differential evolution.

Downloads: 999
8660 Development of a Pipeline Monitoring System by Bio-mimetic Robots

Authors: Seung You Na, Daejung Shin, Jin Young Kim, Joo Hyun Jung, Yong-Gwan Won

Abstract:

Exploring pipelines is one of many bio-mimetic robot applications. The robot may work in common buildings, such as between ceilings and ducts, in addition to the complicated and massive pipeline systems of large industrial plants. The bio-mimetic robot finds any troubled area or malfunction and then reports its data. Importantly, it can not only prepare for but also react to any abnormal routes in the pipeline. The pipeline monitoring tasks require special types of mobile robots: for effective movement along a pipeline, the robot should move similarly to insects or crawling animals. During its movement along the pipelines, a pipeline monitoring robot has the important task of identifying the shape of the approaching path on the pipes. In this paper, we propose an effective solution to pipeline pattern recognition, based on fuzzy classification rules applied to the measured IR distance data.

Keywords: Bio-mimetic robots, Plant pipes monitoring, Pipe pattern recognition.

Downloads: 1649
8659 A Comparison of Bias Among Relaxed Divisor Methods Using 3 Bias Measurements

Authors: Sumachaya Harnsukworapanich, Tetsuo Ichimori

Abstract:

The apportionment method is used by many countries to calculate the distribution of seats in political bodies. For example, this method is used in the United States (U.S.) to distribute House seats proportionally based on the population of each electoral district. Famous apportionment methods include the divisor methods called the Adams, Dean, Hill, Jefferson, and Webster methods. Sometimes the results from the implementation of these divisor methods are unfair and include errors, so it is important to examine the optimization of the method by using a bias measurement to obtain precise and fair results. In this research, we investigate the bias of divisor methods in the U.S. House of Representatives toward large and small states by applying the Stolarsky Mean Method. We compare the bias of the apportionment method by using two famous bias measurements, the Balinski and Young measurement and the Ernst measurement, both of which have a formula distinguishing large and small states. The third measurement, however, which was created by the researchers, does not factor the distinction between large and small states into the formula. All three measurements are compared, and the results show that our measurement produces results similar to those of the other two famous measurements.
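A small sketch of how a divisor method allocates seats, assuming standard rounding rules (round for Webster, floor for Jefferson) and a bisection search for the common divisor; the bias measurements themselves are not reproduced here, and the populations are made up.

```python
import math

def divisor_apportionment(populations, house_size, rounding=round):
    """Allocate seats with a divisor method (round -> Webster, math.floor -> Jefferson).

    A common divisor d is searched so that the rounded quotients sum to the
    house size. Ties at the rounding boundary are ignored in this sketch.
    """
    lo, hi = 1e-9, float(sum(populations))
    seats = []
    for _ in range(200):              # bisection on the divisor
        d = (lo + hi) / 2
        seats = [int(rounding(p / d)) for p in populations]
        total = sum(seats)
        if total > house_size:
            lo = d                    # divisor too small -> too many seats
        elif total < house_size:
            hi = d                    # divisor too large -> too few seats
        else:
            return seats
    return seats

pops = [9_500_000, 4_200_000, 1_100_000, 480_000]
print(divisor_apportionment(pops, 20))                       # Webster
print(divisor_apportionment(pops, 20, rounding=math.floor))  # Jefferson
```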

Keywords: Apportionment, Bias, Divisor, Fair, Simulation

Downloads: 1796
8658 The Sizes of Large Hierarchical Long-Range Percolation Clusters

Authors: Yilun Shang

Abstract:

We study a long-range percolation model on the hierarchical lattice Ω_N of order N, where the probability of connection between two nodes separated by hierarchical distance k is of the form min{αβ^(-k), 1}, with α ≥ 0 and β > 0. The parameter α is the percolation parameter, while β describes the long-range nature of the model. The lattice Ω_N is an example of a so-called ultrametric space, which differs qualitatively from Euclidean-type lattices in remarkable ways. In this paper, we characterize the sizes of large clusters for this model along the lines of some prior work. The proof involves a stationary embedding of Ω_N into Z. The phase diagram of this long-range percolation is well understood.
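A toy simulation, under the assumption that nodes are leaves of an N-ary tree of depth L and the hierarchical distance is the number of levels up to the most recent common ancestor: pairs are connected with probability min(αβ^(-k), 1) and cluster sizes are read off with union-find. Parameter values are illustrative only.

```python
import itertools
import random

def hierarchical_distance(u, v):
    # u, v are digit tuples of equal length (leaf addresses in the N-ary tree).
    for level, (a, b) in enumerate(zip(u, v)):
        if a != b:
            return len(u) - level   # diverging near the root -> larger distance
    return 0

def cluster_sizes(N=3, L=4, alpha=1.2, beta=2.0, seed=7):
    random.seed(seed)
    nodes = list(itertools.product(range(N), repeat=L))
    parent = {x: x for x in nodes}
    def find(x):                     # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in itertools.combinations(nodes, 2):
        k = hierarchical_distance(u, v)
        if random.random() < min(alpha * beta ** (-k), 1.0):
            parent[find(u)] = find(v)
    sizes = {}
    for x in nodes:
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

print(cluster_sizes()[:5])   # a few largest cluster sizes
```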

Keywords: percolation, component, hierarchical lattice, phase transition.

Downloads: 1271
8657 Dynamic Clustering using Particle Swarm Optimization with Application in Unsupervised Image Classification

Authors: Mahamed G.H. Omran, Andries P Engelbrecht, Ayed Salman

Abstract:

A new dynamic clustering approach (DCPSO), based on Particle Swarm Optimization, is proposed and applied to unsupervised image classification. The proposed approach automatically determines the "optimum" number of clusters and simultaneously clusters the data set with minimal user interference. The algorithm starts by partitioning the data set into a relatively large number of clusters to reduce the effects of initial conditions. Using binary particle swarm optimization, the "best" number of clusters is selected. The centers of the chosen clusters are then refined via the K-means clustering algorithm. The experiments conducted show that the proposed approach generally finds the "optimum" number of clusters on the tested images.

Keywords: Clustering Validation, Particle Swarm Optimization, Unsupervised Clustering, Unsupervised Image Classification.

Downloads: 2454
8656 Using Data Clustering in Oral Medicine

Authors: Fahad Shahbaz Khan, Rao Muhammad Anwer, Olof Torgersson

Abstract:

The vast amount of information hidden in huge databases has created tremendous interest in the field of data mining. This paper examines the possibility of using data clustering techniques in oral medicine to identify functional relationships between different attributes and to classify similar patient examinations. Commonly used data clustering algorithms have been reviewed, and several interesting results have been gathered.

Keywords: Oral Medicine, Cluto, Data Clustering, Data Mining.

Downloads: 1977
8655 The Analysis of the Software Industry in Thailand

Authors: Danuvasin Charoen

Abstract:

The software industry has been considered a critical infrastructure for any nation. Several studies have indicated that national competitiveness increasingly depends upon Information and Communication Technology (ICT), and software is one of the major components of ICT, important for both large and small enterprises. Even though there has been strong growth in the software industry in Thailand, the industry has faced many challenges and problems that need to be resolved. For example, the amount of pirated software has been rising, Thailand still faces a large digital divide, and adoption among SMEs has been slow. This paper investigates various issues in the software industry in Thailand, using information acquired through analysis of secondary sources, observation, and focus groups. The results of this study can be used as "lessons learned" for the development of the software industry in any developing country.

Keywords: Software industry, developing nations.

Downloads: 4474
8654 Up Scaling of Highly Transparent Quasi-Solid State, Dye-Sensitized Solar Devices Composed of Nanocomposite Materials

Authors: Dimitra Sygkridou, Andreas Rapsomanikis, Elias Stathatos, Polycarpos Falaras, Evangelos Vitoratos

Abstract:

In the present work, highly transparent strip-type quasi-solid state dye-sensitized solar cells (DSSCs) were fabricated through inkjet printing using nanocomposite TiO2 inks as raw materials and tested under outdoor illumination conditions. The cells, which can be considered the structural units of large-area modules, were fully characterized electrically and electrochemically, and after evaluation of the results obtained, a large-area DSSC module was manufactured. The module design was a sandwich Z-interconnection in which the working electrode is deposited on one conductive glass and the counter electrode on a second glass. Silver current-collecting fingers were printed on the conductive glasses to make the internal electrical connections, and the adjacent cells were connected in series and finally insulated using a UV-curing resin to protect them from the corrosive (I-/I3-) redox couple of the electrolyte. Finally, outdoor tests were carried out on the fabricated dye-sensitized solar module, and its performance data were collected and assessed.

Keywords: Dye-sensitized solar devices, inkjet printing, quasi-solid state electrolyte, transparency, up scaling.

Downloads: 1642
8653 A New Cellular Automata Model of Cardiac Action Potential Propagation Based on Summation of Excited Neighbors

Authors: F. Pourhasanzade, S. H. Sabzpoushan

Abstract:

Heart tissue is an excitable medium. A cellular automaton is a type of model that can be used to simulate cardiac action potential propagation. One of the advantages of this approach over methods based on differential equations is its high speed in large-scale simulations. Recent cellular automata models either cannot avoid flat edges in the resulting patterns or require large neighborhoods. In this paper, we present a new model that eliminates flat edges with a minimum number of neighbors.
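A minimal cellular-automaton sketch in the spirit of the summation-of-excited-neighbors rule: a resting cell fires once the count of excited Moore neighbors reaches a threshold, then ages through excited and refractory states. The state counts, neighborhood, and threshold are assumptions, not the authors' calibrated model.

```python
import numpy as np

REST, N_EXC, N_REF = 0, 3, 5          # state 0 = rest, 1..3 excited, 4..8 refractory

def step(grid, threshold=2):
    excited = ((grid >= 1) & (grid <= N_EXC)).astype(int)
    # Sum of excited cells in the 8-cell Moore neighbourhood (periodic borders).
    s = sum(np.roll(np.roll(excited, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    new = grid.copy()
    new[grid > 0] += 1                               # excited/refractory states age
    new[new > N_EXC + N_REF] = REST                  # back to rest after refractory
    new[(grid == REST) & (s >= threshold)] = 1       # resting cell fires
    return new

grid = np.zeros((50, 50), dtype=int)
grid[24:26, 24:26] = 1                               # small initial stimulus
for _ in range(30):
    grid = step(grid)
print(int(((grid >= 1) & (grid <= N_EXC)).sum()), "cells excited after 30 steps")
```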

Keywords: Cellular Automata, Action Potential Simulation, Isotropic Pattern.

Downloads: 1949
8652 A Cumulative Learning Approach to Data Mining Employing Censored Production Rules (CPRs)

Authors: Rekha Kandwal, Kamal K. Bharadwaj

Abstract:

Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for data mining activity is the generation of a large number of potential rules as a result of the mining process; in fact, the result size is sometimes comparable to that of the original data. Traditional data mining pruning activities such as support do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, thereby making knowledge voluminous with each change. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. Michalski and Winston proposed Censored Production Rules (CPRs) as an extension of production rules that exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type, we are free to ignore the exception condition when the resources needed to establish its presence are tight or when there is simply no information available as to whether it holds or not. Thus the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. In this paper, a scheme based on the Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from the discovered flat PRs. The discovery of CPRs from flat rules would result in a considerable reduction of the already discovered rules. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested cumulative learning scheme would be useful in mining data streams.
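A small sketch of the If P Then D Unless C semantics, with hypothetical attribute names: when the premise holds and the censor is absent or left unchecked, D is concluded; if the censor is established, the polarity of D flips.

```python
# Sketch of a censored production rule "If P Then D Unless C". The rule names
# and facts below are hypothetical; the DST-based discovery scheme of the paper
# is not reproduced here.
class CensoredProductionRule:
    def __init__(self, premise, decision, censor):
        self.premise, self.decision, self.censor = premise, decision, censor

    def conclude(self, facts, check_censor=True):
        """Return the (possibly negated) decision, or None when the premise fails.

        facts maps attribute names to booleans; a missing censor is treated as
        not established, mirroring the cheap 'If P Then D' reading of the rule.
        """
        if not facts.get(self.premise, False):
            return None
        if check_censor and facts.get(self.censor, False):
            return f"not {self.decision}"      # censor holds: polarity of D flips
        return self.decision

rule = CensoredProductionRule("is_bird", "can_fly", "is_penguin")
print(rule.conclude({"is_bird": True}))                        # can_fly
print(rule.conclude({"is_bird": True, "is_penguin": True}))    # not can_fly
```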

Keywords: Censored production rules, cumulative learning, data mining, machine learning.

Downloads: 1485
8651 Investigating Solar Cycles and Media Sentiment Through Advanced NLP Techniques

Authors: Aghamusa Azizov

Abstract:

This study investigates the correlation between solar activity and sentiment in news media coverage, using a large-scale dataset of solar activity since 1750 and over 15 million articles from "The New York Times" dating from 1851 onwards. Employing Pearson's correlation coefficient and multiple Natural Language Processing (NLP) tools (TextBlob, Vader, and DistilBERT), the research examines the extent to which fluctuations in solar phenomena are reflected in the sentiment of historical news narratives. The findings reveal that the correlation between solar activity and media sentiment is generally negligible, suggesting a weak influence of solar patterns on the portrayal of events in news media. Notably, a moderate positive correlation was observed between the sentiments derived from TextBlob and Vader, indicating consistency across NLP tools. The analysis provides insights into the historical impact of solar activity on human affairs and highlights the importance of using multiple analytical methods to understand complex relationships in large datasets. The study contributes to the broader understanding of how extraterrestrial factors may intersect with media-reported events and underlines the intricate nature of interdisciplinary research in the data science and historical domains.
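A hedged sketch of the correlation step, assuming the sunspot record and article texts are already at hand (the tiny lists below are stand-ins for the real data): article sentiment is scored with TextBlob, averaged per year, and correlated with yearly solar activity using Pearson's coefficient.

```python
from collections import defaultdict
from textblob import TextBlob
from scipy.stats import pearsonr

articles = [  # (year, text) -- tiny hypothetical stand-ins for the NYT corpus
    (1957, "Record sunspot activity delights astronomers worldwide"),
    (1957, "Markets rally on strong industrial output"),
    (1964, "Quiet sun brings dull skies and faint auroras"),
    (1964, "City budget dispute drags on with no resolution"),
    (1979, "Brilliant auroras thrill observers as the sun grows restless"),
    (1986, "A calm year for the sun, and a quiet one for sky watchers"),
]
sunspots = {1957: 190.0, 1964: 10.0, 1979: 155.0, 1986: 13.0}  # illustrative yearly means

by_year = defaultdict(list)
for year, text in articles:
    by_year[year].append(TextBlob(text).sentiment.polarity)  # polarity in [-1, 1]

years = sorted(by_year)
mean_sentiment = [sum(by_year[y]) / len(by_year[y]) for y in years]
activity = [sunspots[y] for y in years]
r, p = pearsonr(activity, mean_sentiment)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```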

Keywords: Solar Activity Correlation, Media Sentiment Analysis, Natural Language Processing, NLP, Historical Event Patterns.

Downloads: 71
8650 The Application of Data Mining Technology in Building Energy Consumption Data Analysis

Authors: Liang Zhao, Jili Zhang, Chongquan Zhong

Abstract:

Energy consumption data, in particular those involving public buildings, are impacted by many factors: the building structure, climate/environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods for data analysis are insufficient. This paper delves into data mining technology to determine its application in the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and research points for future studies are given.

Keywords: Data mining, data analysis, prediction, optimization, building operational performance.

Downloads: 3709
8649 Cluster Algorithm for Genetic Diversity

Authors: Manpreet Singh, Keerat Kaur, Bhavdeep Singh

Abstract:

With hardware technology advancing, the cost of storage is decreasing, so there is an urgent need for new techniques and tools that can intelligently and automatically assist us in transforming this data into useful knowledge. Different data mining techniques have been developed that are helpful for handling these large databases [7]. Data mining is also finding its role in the field of biotechnology. Pedigree means the associated ancestry of a crop variety. Genetic diversity is the variation in the genetic composition of individuals within or among species, and it depends upon the pedigree information of the varieties. Parents at lower hierarchic levels carry more weight for predicting genetic diversity than those at upper hierarchic levels, and this weight decreases as the level increases. For crossbreeding, the two varieties should be as genetically diverse as possible so as to incorporate the useful characters of both varieties in the newly developed variety. This paper discusses searching and analyzing different possible pairs of varieties, selected on the basis of morphological characters, climatic conditions, and nutrients, so as to obtain the optimal pair that can produce the required crossbreed variety. An algorithm was developed to determine the genetic diversity between the selected wheat varieties, and a cluster analysis technique is used for retrieving the results.

Keywords: Genetic diversity, pedigree, nutrients.

Downloads: 1802
8648 Query Algebra for Semistructured Data

Authors: Ei Ei Myat, Ni Lar Thein

Abstract:

With the tremendous growth of World Wide Web (WWW) data, there is an emerging need for effective information retrieval at the document level. Several query languages such as XML-QL, XPath, XQL, Quilt and XQuery have been proposed in recent years to provide faster ways of querying XML data, but they still lack generality and efficiency. Our approach towards evolving a framework for querying semistructured documents is based on a formal query algebra. Two elements are introduced in the proposed framework: first, a generic and flexible data model for logical representation of semistructured data and, second, a set of operators for the manipulation of objects defined in the data model. In addition to accommodating several peculiarities of semistructured data, our model offers novel features such as bidirectional paths for navigational querying and partitions for data transformation that are not available in other proposals.

Keywords: Algebra, Semistructured data, Query Algebra.

Downloads: 1375
8647 Applying Gibbs Sampler for Multivariate Hierarchical Linear Model

Authors: Satoshi Usami

Abstract:

Among various HLM techniques, the Multivariate Hierarchical Linear Model (MHLM) is desirable to use, particularly when multivariate criterion variables are collected and the covariance structure has information valuable for data analysis. In order to reflect prior information or to obtain stable results when the sample size and the number of groups are not sufficiently large, the Bayes method has often been employed in hierarchical data analysis. In these cases, although the Markov Chain Monte Carlo (MCMC) method is a rather powerful tool for parameter estimation, procedures regarding MCMC have not been formulated for the MHLM. For this reason, this research presents concrete procedures for parameter estimation through the use of the Gibbs sampler. Lastly, several future topics for the use of the MCMC approach for HLM are discussed.
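A compact illustration of Gibbs updates for a univariate random-intercept model with conjugate priors; the multivariate HLM of the paper adds covariance matrices (and inverse-Wishart updates) on top of this basic scheme, so this is a sketch of the flavor of the sampler, not the paper's procedure.

```python
# Gibbs sampler for y_ij ~ N(theta_j, sigma2), theta_j ~ N(mu, tau2),
# with a flat prior on mu and vague inverse-gamma priors on sigma2 and tau2.
import numpy as np

rng = np.random.default_rng(0)
J, n = 8, 25
true_theta = rng.normal(50, 5, J)
y = [rng.normal(t, 10, n) for t in true_theta]            # simulated grouped data

mu, tau2, sigma2 = 50.0, 25.0, 100.0
a0, b0 = 0.001, 0.001                                     # IG(a0, b0) hyperparameters
draws = []
for it in range(3000):
    # theta_j | rest : precision-weighted combination of group mean and mu
    prec = n / sigma2 + 1 / tau2
    mean = (np.array([yj.mean() for yj in y]) * n / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec))
    # mu | rest (flat prior)
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
    # variance components | rest (inverse-gamma, drawn via 1/Gamma)
    sse = sum(((yj - t) ** 2).sum() for yj, t in zip(y, theta))
    sigma2 = 1 / rng.gamma(a0 + J * n / 2, 1 / (b0 + sse / 2))
    tau2 = 1 / rng.gamma(a0 + J / 2, 1 / (b0 + ((theta - mu) ** 2).sum() / 2))
    if it >= 1000:                                        # discard burn-in
        draws.append((mu, sigma2, tau2))
post = np.array(draws)
print("posterior means (mu, sigma2, tau2):", post.mean(axis=0).round(2))
```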

Keywords: Gibbs sampler, Hierarchical Linear Model, Markov Chain Monte Carlo, Multivariate Hierarchical Linear Model

Downloads: 1867
8646 Study of the Particle Size Effect on Bubble Rise Velocities in a Three-Phase Bubble Column

Authors: Weiling Li, Wenqi Zhong, Baosheng Jin, Rui Xiao, Yong Lu, Tingting He

Abstract:

Experiments were performed in a three-phase bubble column to study variations in bubble rise velocities. The dynamic gas disengagement (DGD) technique and fast-response pressure transducers were utilized to investigate the bubble rise in the column. The superficial gas velocities of large and small bubbles, and the rise velocities of the large and small bubble fractions, were studied considering the effect of particle size. The results show that the superficial gas velocity associated with large bubbles increases linearly as the overall superficial gas velocity increases, and that particle size has little effect on both the large and small bubble superficial gas velocities. The rise velocities of the large bubble fractions are larger than those of the small bubble fractions and show different tendencies at low and high superficial gas velocities when the particle size is changed. The rise velocities of the small bubble fractions first increased and then tended to decrease as the particle size became larger.

Keywords: Bubble rise velocity, gas-liquid-solid, particle size effect, three-phase bubble column.

Downloads: 3403
8645 Study on Cross-flow Heat Transfer in Fixed Bed

Authors: Hong-fang Ma, Hai-tao Zhang, Wei-yong Ying, Ding-ye Fang

Abstract:

This work focuses on the radial flow reactor for large-scale methanol synthesis, in which the heat transfer type is cross-flow. The effects of operating conditions, including the reactor inlet air temperature, the heating pipe temperature, and the air flow rate, on the cross-flow heat transfer were investigated. The results showed that the temperature profile of the area in front of the heating pipe was slightly affected by all the operating conditions, the main area whose temperature profile was influenced was the area behind the heating pipe, and the heat transfer direction followed the air flow direction. In order to provide the basis for radial flow reactor design calculations, the dimensionless number group method was used to fit the bed effective thermal conductivity and the wall heat transfer coefficient, calculated from the mathematical model, as functions of the product of the Reynolds and Prandtl numbers. The comparison of experimental data and calculated values showed that the calculated values fit the experimental data very well and that the formulas can be used for reactor design calculations.
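A hedged sketch of the dimensionless-group fitting step, assuming a correlation of the illustrative form Nu_w = a + b(Re·Pr)^c and made-up data points (not the correlation or measurements reported in the paper): SciPy's non-linear least squares recovers the coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

def correlation(re_pr, a, b, c):
    # Assumed wall heat-transfer correlation in the Re*Pr dimensionless group.
    return a + b * re_pr ** c

# Stand-in "experimental" points: Re*Pr group vs. measured wall Nusselt number.
re_pr = np.array([120, 250, 480, 900, 1500, 2600], dtype=float)
nu = np.array([9.8, 14.7, 21.3, 30.6, 41.3, 57.0])

params, cov = curve_fit(correlation, re_pr, nu, p0=[1.0, 0.5, 0.6])
a, b, c = params
print(f"Nu_w = {a:.2f} + {b:.3f} * (Re*Pr)^{c:.3f}")
print("max relative error:",
      float(np.max(np.abs(correlation(re_pr, *params) - nu) / nu)))
```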

Keywords: Cross-flow, Heat transfer, Fixed bed, Mathematical model

Downloads: 1874
8644 Energy Budget Equation of Superfluid HVBK Model: LES Simulation

Authors: M. Bakhtaoui, L. Merahi

Abstract:

The reliability of the filtered HVBK model is investigated via large eddy simulations (LES) of freely decaying isotropic superfluid turbulence. For homogeneous turbulence at very high Reynolds numbers, comparison of the terms in the spectral kinetic energy budget equation indicates that, in the energy-containing range, the production and energy transfer effects become significant, whereas dissipation does not. In the inertial range, where the two fluids are perfectly locked, the mutual friction may be neglected with respect to the other terms. The LES results for the other terms of the energy balance are also presented.

Keywords: Superfluid turbulence, HVBK, Energy budget, Large Eddy Simulation.

Downloads: 2013
8643 Joint Use of Factor Analysis (FA) and Data Envelopment Analysis (DEA) for Ranking of Data Envelopment Analysis

Authors: Reza Nadimi, Fariborz Jolai

Abstract:

This article combines two techniques, data envelopment analysis (DEA) and factor analysis (FA), for data reduction in decision making units (DMUs). DEA, a popular linear programming technique, is useful for comparatively rating the operational efficiency of DMUs based on their deterministic (not necessarily stochastic) input-output data, while factor analysis has been proposed as a data reduction and classification technique that can be applied within DEA to reduce the input-output data. Numerical results reveal that the new approach shows good consistency in ranking with DEA.
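A sketch of the DEA building block, assuming small positive input-output matrices that stand in for the factor-analysis-reduced data: the input-oriented CCR efficiency of each DMU is obtained from a linear program solved with SciPy.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0], [6.0, 3.0], [8.0, 5.0], [5.0, 4.0]])   # inputs  (DMU x m)
Y = np.array([[60.0], [70.0], [95.0], [55.0]])                    # outputs (DMU x s)

def ccr_efficiency(k):
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1..lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j * x_ij <= theta * x_ik  ->  -x_ik*theta + X[:, i].lambda <= 0
    A_in = np.c_[-X[k].reshape(-1, 1), X.T]
    b_in = np.zeros(m)
    # sum_j lambda_j * y_rj >= y_rk  ->  -Y[:, r].lambda <= -y_rk
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

for k in range(len(X)):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```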

Keywords: Effectiveness, Decision Making, Data Envelopment Analysis, Factor Analysis

Downloads: 2425
8642 Novelty as a Measure of Interestingness in Knowledge Discovery

Authors: Vasudha Bhatnagar, Ahmed Sultan Al-Hegami, Naveen Kumar

Abstract:

Rule discovery is an important technique for mining knowledge from large databases. The use of objective measures for discovering interesting rules leads to another data mining problem, although of reduced complexity. Data mining researchers have studied subjective measures of interestingness to reduce the volume of discovered rules and ultimately improve the overall efficiency of the KDD process. In this paper we study the novelty of the discovered rules as a subjective measure of interestingness. We propose a hybrid approach based on both objective and subjective measures to quantify the novelty of the discovered rules in terms of their deviations from the known rules (knowledge). We analyze the types of deviation that can arise between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework and experiment with some public datasets. The experimental results are promising.

Keywords: Knowledge Discovery in Databases (KDD), Interestingness, Subjective Measures, Novelty Index.

Downloads: 1807
8641 Adaptive Few-Shot Deep Metric Learning

Authors: Wentian Shi, Daming Shi, Maysam Orouskhani, Feng Tian

Abstract:

Currently, the most prevalent deep learning methods require a large amount of data for training, whereas few-shot learning tries to learn a model from limited data without extensive retraining. In this paper, we present a loss function based on triplet loss for solving the few-shot problem using metric-based learning. Instead of setting the margin distance in the triplet loss to a constant number chosen empirically, we propose an adaptive margin distance strategy to obtain the appropriate margin distance automatically. We implement the strategy in a deep Siamese network for deep metric embedding, utilizing an optimization approach that penalizes the worst case and rewards the best. Our experiments on image recognition and a co-segmentation model demonstrate that using our proposed triplet loss with adaptive margin distance can significantly improve performance.
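A PyTorch sketch of a triplet loss whose margin adapts to the batch rather than being a fixed constant; the specific adaptation rule below (derived from the best and worst anchor-positive versus anchor-negative gaps) is an assumption that illustrates the mechanism, not the paper's exact strategy.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_triplet_loss(anchor, positive, negative, scale=0.5):
    d_ap = F.pairwise_distance(anchor, positive)   # anchor-positive distances
    d_an = F.pairwise_distance(anchor, negative)   # anchor-negative distances
    gap = (d_an - d_ap).detach()
    # Margin set from the batch: midway between the best and worst observed gap,
    # clipped at zero, so hard batches are not asked for an unreachable separation.
    margin = scale * torch.clamp((gap.max() + gap.min()) / 2, min=0.0)
    return F.relu(d_ap - d_an + margin).mean()

embed = lambda k: torch.randn(k, 128, requires_grad=True)   # stand-in embeddings
anc, pos, neg = embed(32), embed(32), embed(32)
loss = adaptive_margin_triplet_loss(anc, pos, neg)
loss.backward()
print(float(loss))
```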

Keywords: Few-shot learning, triplet network, adaptive margin, deep learning.

Downloads: 908
8640 Asymptotic Analysis of Instant Messaging Service with Relay Nodes

Authors: Muhammad T. Alam, Zheng Da Wu

Abstract:

In this paper, we provide complete end-to-end delay analyses, including the relay nodes, for instant messages. The Message Session Relay Protocol (MSRP) is used to provide congestion control for large messages in the Instant Messaging (IM) service. Large messages are broken into several chunks, and these chunks may traverse a maximum of two relay nodes before reaching the destination, according to the IETF specification of the MSRP relay extensions. We discuss the current solutions for sending large instant messages and introduce a proposal to reduce message flows in the IM service. We consider a virtual traffic parameter, i.e., the relay nodes are assumed to be stateless and non-blocking for scalability purposes, and to have an input rate at a constant bit rate. We provide a new scheduling policy that schedules chunks according to their previous node's delivery time stamp tags. Validation and analysis are shown for this scheduling policy. The performance analysis with the model introduced in this paper is simple and straightforward, and leads to reduced message flows in the IM service.
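A small sketch of the proposed scheduling policy, with hypothetical field names: each chunk carries the delivery time stamp tag assigned by the previous node, and the relay forwards the pending chunk with the earliest tag. MSRP itself frames chunks differently on the wire; this only illustrates the queueing discipline.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Chunk:
    prev_delivery_ts: float                 # time stamp tag from the previous node
    message_id: str = field(compare=False)
    seq: int = field(compare=False)

class RelayScheduler:
    def __init__(self):
        self._queue = []

    def enqueue(self, chunk):
        heapq.heappush(self._queue, chunk)  # ordered by the previous node's tag

    def next_chunk(self):
        return heapq.heappop(self._queue) if self._queue else None

relay = RelayScheduler()
relay.enqueue(Chunk(10.7, "msg-A", 2))
relay.enqueue(Chunk(10.1, "msg-B", 1))
relay.enqueue(Chunk(10.4, "msg-A", 1))
while (c := relay.next_chunk()) is not None:
    print(c.message_id, c.seq, c.prev_delivery_ts)
```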

Keywords: Instant messaging, stateless, chunking, MSRP.

Downloads: 1619
8639 Comparisons of Surveying with Terrestrial Laser Scanner and Total Station for Volume Determination of Overburden and Coal Excavations in Large Open-Pit Mine

Authors: B. Keawaram, P. Dumrongchai

Abstract:

The volume of overburden and coal excavations in an open-pit mine is generally determined by conventional survey methods such as the total station. This study aimed to evaluate the accuracy of a terrestrial laser scanner (TLS) used to measure overburden and coal excavations, and to compare the TLS survey data sets with the data of the total station. Results revealed that the reference points measured with the total station showed 0.2 mm precision for both horizontal and vertical coordinates. When using TLS on the same points, standard deviations of 4.93 cm and 0.53 cm for horizontal and vertical coordinates, respectively, were achieved. For volume measurements covering the mining areas of 79,844 m2, TLS yielded a mean difference of about 1% and a surface error margin of 6 cm at the 95% confidence level when compared to the volume obtained by the total station.

Keywords: Mine, survey, terrestrial laser scanner, total station.

Downloads: 1663
8638 Empirical Roughness Progression Models of Heavy Duty Rural Pavements

Authors: Nahla H. Alaswadko, Rayya A. Hassan, Bayar N. Mohammed

Abstract:

Empirical deterministic models have been developed to predict the roughness progression of heavy duty spray sealed pavements for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time series data for many pavement sections has been collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter, and traffic loading, time, initial pavement strength, reactivity level of the subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that can account for the correlation among time series data of the same section and capture the effect of unobserved variables. Study results show that the models fit the data very well. The contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability, as shown by their good fit to the validation data.
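A minimal sketch of the multilevel set-up, assuming a synthetic data frame and illustrative variable names: a random intercept per pavement section accounts for the correlation among repeated roughness observations of the same section, which is the essence of the multilevel approach described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
sections, years = 40, 12
rows = []
for s in range(sections):
    section_effect = rng.normal(0, 0.3)              # unobserved section-level shift
    traffic = rng.uniform(0.2, 2.0)                  # illustrative annual loading
    for t in range(years):
        iri = 1.8 + section_effect + 0.05 * t + 0.08 * traffic * t + rng.normal(0, 0.1)
        rows.append({"section": s, "age": t, "traffic": traffic, "iri": iri})
data = pd.DataFrame(rows)

# Random intercept per section; roughness grows with age and age-by-traffic.
model = smf.mixedlm("iri ~ age + age:traffic", data, groups=data["section"])
fit = model.fit()
print(fit.summary())
```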

Keywords: Roughness progression, empirical model, pavement performance, heavy duty pavement.

Downloads: 802