Search results for: real time data.


9106 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

Authors: Thuraya M. Qaradaghi, Newroz N. Abdulrazaq

Abstract:

The McEliece cryptosystem is an asymmetric cryptosystem based on error-correcting codes. The classical McEliece scheme uses an irreducible binary Goppa code, which is still considered unbroken, especially with the parameters [1024, 524, 101], but it suffers from a large public-key matrix that makes it difficult to use in practice. In this work, both irreducible and separable Goppa codes are used, with flexible parameters and dynamic error vectors, and the two variants are compared within the McEliece cryptosystem. For the encryption stage, two types of tests were chosen to obtain a fair comparison: in the first, the random message is kept constant while the Goppa code parameters are varied; in the second, the Goppa code parameters are kept constant (m = 8 and t = 10) while the random message is varied. The results show that the time needed to compute the parity check matrix is higher for the separable code than for the irreducible one, which is expected because an extra parity check matrix for g2(z) must be computed in the decryption process for the separable type, whereas the time needed to evaluate the error locator in the decryption stage is lower for the separable type than for the irreducible one. The implementation was done in Visual Studio C#.

Keywords: McEliece cryptosystem, Goppa code, separable, irreducible.
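
For illustration, here is a minimal sketch of the McEliece encryption structure, c = m·G_hat + e over GF(2) with G_hat = S·G·P. Small random binary matrices stand in for the scrambler, code generator and permutation; a real instance would derive G from an irreducible or separable Goppa code (e.g. the [1024, 524] code with t = 50 mentioned above), and the Goppa decoding with the error locator polynomial is omitted. This is not the authors' C# implementation.

```python
import numpy as np

# Toy parameters; a real McEliece instance uses e.g. a [1024, 524] Goppa code with t = 50.
n, k, t = 16, 8, 2
rng = np.random.default_rng(0)

def random_invertible(k):
    """Random k x k binary matrix invertible over GF(2) (det must be odd)."""
    while True:
        S = rng.integers(0, 2, (k, k))
        if round(np.linalg.det(S)) % 2 == 1:
            return S

# Stand-ins for the private key: scrambler S, code generator G, permutation P.
S = random_invertible(k)
G = np.hstack([np.eye(k, dtype=int), rng.integers(0, 2, (k, n - k))])  # systematic form
P = np.eye(n, dtype=int)[rng.permutation(n)]

# Public key: G_hat = S * G * P (mod 2).
G_hat = S @ G @ P % 2

# Encryption: c = m * G_hat + e (mod 2), e a random error vector of weight t.
m = rng.integers(0, 2, k)
e = np.zeros(n, dtype=int)
e[rng.choice(n, size=t, replace=False)] = 1
c = (m @ G_hat + e) % 2
print("ciphertext:", c)
```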

9105 DWM-CDD: Dynamic Weighted Majority Concept Drift Detection for Spam Mail Filtering

Authors: Leili Nosrati, Alireza Nemaney Pour

Abstract:

Although e-mail is the most efficient and popular communication method, unwanted and mass unsolicited e-mails, also called spam, endanger the existence of the mail system. This paper proposes a new algorithm called Dynamic Weighted Majority Concept Drift Detection (DWM-CDD) for content-based filtering. The design goals of DWM-CDD are, first, to improve the accuracy of previously proposed algorithms and, second, to reduce the time needed to construct the model. The results show that DWM-CDD can detect both sudden and gradual changes quickly and accurately, and that its model construction time is shorter than that of previously proposed algorithms.

Keywords: Concept drift, Content-based filtering, E-mail, Spam mail.
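
As background, a minimal sketch of the generic Dynamic Weighted Majority ensemble scheme on which DWM-CDD builds: experts are penalized when they err, weak experts are removed, and a new expert is added when the ensemble itself errs. The beta, theta and period values and the naive Bayes base learner are illustrative assumptions, not the authors' DWM-CDD settings.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB  # base learner; an illustrative choice

class DWM:
    """Generic Dynamic Weighted Majority for a binary stream (spam=1 / ham=0)."""

    def __init__(self, beta=0.5, theta=0.01, period=50):
        self.beta, self.theta, self.period = beta, theta, period
        self.experts, self.weights = [], []

    def _new_expert(self, x, y):
        e = MultinomialNB()
        e.partial_fit(x, y, classes=[0, 1])
        self.experts.append(e)
        self.weights.append(1.0)

    def update(self, step, x, y):
        """x: shape (1, n_features) count vector, y: length-1 label array."""
        if not self.experts:
            self._new_expert(x, y)
            return
        votes = np.zeros(2)
        for i, e in enumerate(self.experts):
            pred = int(e.predict(x)[0])
            if pred != y[0] and step % self.period == 0:
                self.weights[i] *= self.beta            # penalize mistaken experts
            votes[pred] += self.weights[i]
        global_pred = int(np.argmax(votes))
        if step % self.period == 0:
            w = np.array(self.weights)
            self.weights = list(w / w.max())            # normalize weights
            keep = [i for i, wi in enumerate(self.weights) if wi >= self.theta]
            self.experts = [self.experts[i] for i in keep]   # drop weak experts
            self.weights = [self.weights[i] for i in keep]
            if global_pred != y[0]:
                self._new_expert(x, y)                  # add a fresh expert after drift
        for e in self.experts:
            e.partial_fit(x, y)                         # incremental training of all experts
```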

9104 Hybrid Intelligent Intrusion Detection System

Authors: Norbik Bashah, Idris Bharanidharan Shanmugam, Abdul Manan Ahmed

Abstract:

Intrusion Detection Systems are increasingly a key part of system defense. Various approaches to intrusion detection are currently in use, but they are relatively ineffective. Artificial intelligence plays a driving role in security services. This paper proposes a dynamic model of an intelligent intrusion detection system based on a specific AI approach for intrusion detection. The techniques investigated include neural networks and fuzzy logic with network profiling, which uses simple data mining techniques to process the network data. The proposed system is a hybrid that combines anomaly, misuse and host-based detection. Simple fuzzy rules allow us to construct if-then rules that reflect common ways of describing security attacks. For host-based intrusion detection we use neural networks along with self-organizing maps. A suspicious intrusion can be traced back to its original source path, and any future traffic from that source can be redirected back to it. Both network traffic and system audit data are used as inputs.

Keywords: Intrusion Detection, Network Security, Data mining, Fuzzy Logic.
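
To make the if-then fuzzy rule idea concrete, here is a small sketch with two rules over hypothetical traffic features (connection rate and failed-login ratio), triangular membership functions and a simple weighted-average defuzzification. The feature names, fuzzy sets and rule base are illustrative assumptions, not the paper's rule set.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def attack_suspicion(conn_rate, failed_login_ratio):
    # Fuzzification (illustrative fuzzy sets).
    rate_high = tri(conn_rate, 50, 200, 1000)
    rate_low = tri(conn_rate, -1, 0, 60)
    fail_high = tri(failed_login_ratio, 0.2, 0.6, 1.01)

    # Rule 1: IF conn_rate is HIGH AND failed_login_ratio is HIGH THEN suspicion is HIGH
    r1 = min(rate_high, fail_high)
    # Rule 2: IF conn_rate is LOW THEN suspicion is LOW
    r2 = rate_low

    # Crisp output as a weighted average of rule consequents (HIGH=1.0, LOW=0.0).
    total = r1 + r2
    return (r1 * 1.0 + r2 * 0.0) / total if total > 0 else 0.0

print(attack_suspicion(conn_rate=300, failed_login_ratio=0.7))  # high suspicion
```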

9103 Monitoring of Water Pollution and Its Consequences: An Overview

Authors: N. Singh, N. Sharma, J. K. Katnoria

Abstract:

Water, a vital component for all forms of life, is derived from a variety of sources, including surface water (rivers, lakes, reservoirs and ponds) and ground water (aquifers). Over time, water bodies have been subjected to regular human interference, resulting in deterioration of water quality; pollution of water bodies has therefore become a matter of global concern. As water quality is closely related to human health, analysis of water before use is of immense importance. Improper management of water bodies can cause serious problems for the availability and quality of water. The quality of water may be described according to its physico-chemical and microbiological characteristics. For effective maintenance of water quality through appropriate control measures, continuous monitoring of metals and of physico-chemical and biological parameters is essential to establish baseline data for water quality in any study area. The present study focuses on the status of water pollution in various areas and on estimating the magnitude of its toxicity using different bioassays.

Keywords: Genotoxicity, Heavy metals, Mutagenicity, Physico-chemical analysis.

9102 String Matching using Inverted Lists

Authors: Chouvalit Khancome, Veera Boonjing

Abstract:

This paper proposes a new solution to the string matching problem. The solution constructs an inverted list representing the string pattern to be searched for, and then uses a new algorithm to process an input string in a single pass. The preprocessing phase takes O(m) time and O(1) space, where m is the length of the pattern. The searching phase takes O(m + α) time in the average case, O(n/m) in the best case and O(n) in the worst case, where α is the number of comparisons leading to a mismatch and n is the length of the input text.

Keywords: String matching, inverted list, inverted index, pattern, algorithm.
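
A minimal sketch of the inverted-list idea: the pattern is preprocessed into a map from character to its positions, and the text is scanned once, with each occurrence of a pattern character casting a "vote" for the alignment it would imply; a counter reaching m signals a match. This illustrates the data structure only and is not a reconstruction of the authors' exact algorithm or its complexity proof.

```python
from collections import defaultdict

def build_inverted_list(pattern):
    """Map each character of the pattern to the list of positions where it occurs."""
    inv = defaultdict(list)
    for j, ch in enumerate(pattern):
        inv[ch].append(j)
    return inv

def search(text, pattern):
    m = len(pattern)
    inv = build_inverted_list(pattern)          # preprocessing phase
    votes = defaultdict(int)                    # candidate start -> matched characters
    matches = []
    for i, ch in enumerate(text):               # single pass over the text
        for j in inv.get(ch, ()):
            start = i - j
            if start >= 0:
                votes[start] += 1
                if votes[start] == m:
                    matches.append(start)
    return matches

print(search("abracadabra", "abra"))  # [0, 7]
```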

9101 Prediction of Phenolic Compound Migration Process through Soil Media using Artificial Neural Network Approach

Authors: Supriya Pal, Kalyan Adhikari, Somnath Mukherjee, Sudipta Ghosh

Abstract:

This study presents the application of an artificial neural network for modeling phenolic compound migration through a vertical soil column. A three-layered feed-forward neural network with a back-propagation training algorithm was developed using forty-eight experimental data sets obtained from laboratory fixed-bed vertical column tests. The input parameters of the model were the influent concentration of phenol (mg/L) at the top end of the soil column, the depth of the soil column (cm), the elapsed time after phenol injection (hr), and the percentages of clay and silt in the soil (%). The output of the ANN was the effluent phenol concentration (mg/L) at the bottom end of the soil column. The ANN predictions were compared with the experimental results of the laboratory tests, and the accuracy of the ANN model was evaluated.

Keywords: Modeling, Neural Networks, Phenol, Soil media
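
A minimal sketch of such a three-layer feed-forward regression network using scikit-learn's MLPRegressor. The synthetic data, hidden-layer size and other hyperparameters are illustrative assumptions; the paper's 48 laboratory data sets and network settings are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 48
# Columns: influent phenol (mg/L), column depth (cm), elapsed time (hr), clay (%), silt (%)
X = np.column_stack([
    rng.uniform(10, 100, n),   # influent concentration
    rng.uniform(10, 60, n),    # depth
    rng.uniform(1, 72, n),     # elapsed time
    rng.uniform(5, 40, n),     # clay %
    rng.uniform(5, 40, n),     # silt %
])
# Synthetic effluent concentration standing in for the laboratory measurements.
y = X[:, 0] * np.exp(-0.02 * X[:, 1]) * (1 - np.exp(-0.05 * X[:, 2])) + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# One hidden layer, trained with gradient-based (back-propagation style) updates.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```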

9100 Color Constancy using Superpixel

Authors: Xingsheng Yuan, Zhengzhi Wang

Abstract:

Color constancy algorithms are generally based on simplified assumptions about the spectral distribution of the illuminant or the reflection attributes of the scene surface. In reality, however, these assumptions are too restrictive. We propose a methodology that extends existing algorithms by applying color constancy locally to image patches rather than globally to the entire image. In this paper, a method based on low-level image features using superpixels is proposed. Superpixel segmentation partitions an image into regions that are approximately uniform in size and shape. Instead of using the entire pixel set for estimating the illuminant, only the superpixels with the most valuable information are used. Large-scale experiments on real-world scenes show that the estimate is more accurate when using superpixels than when using the entire image.

Keywords: color constancy, illuminant estimation, superpixel
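
As a rough illustration, a sketch that segments an image into superpixels with SLIC and applies a gray-world illuminant estimate per superpixel, then pools the per-superpixel estimates. The gray-world assumption and the median pooling are stand-ins for the paper's selection of the most informative superpixels from low-level features.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.segmentation import slic

image = img_as_float(data.astronaut())          # example RGB image in [0, 1]
segments = slic(image, n_segments=200, compactness=10, start_label=0)

estimates = []
for label in np.unique(segments):
    mask = segments == label
    # Gray-world estimate for this superpixel: the mean RGB is assumed achromatic.
    est = image[mask].mean(axis=0)
    if est.sum() > 0:
        estimates.append(est / np.linalg.norm(est))

# Pool per-superpixel estimates (median here; the paper instead keeps only the
# most valuable superpixels, which is not reproduced in this sketch).
illuminant = np.median(np.vstack(estimates), axis=0)
illuminant /= np.linalg.norm(illuminant)
print("estimated illuminant direction (RGB):", illuminant)
```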

9099 An Efficient and Generic Hybrid Framework for High Dimensional Data Clustering

Authors: Dharmveer Singh Rajput, P. K. Singh, Mahua Bhattacharya

Abstract:

Clustering in high-dimensional space is a difficult problem that recurs in many fields of science and engineering, e.g., bioinformatics, image processing, pattern recognition and data mining. In a high-dimensional space some of the dimensions are likely to be irrelevant, thus hiding the possible clustering. In very high dimensions it is common for all the objects in a dataset to be nearly equidistant from each other, completely masking the clusters, so the performance of clustering algorithms degrades. In this paper, we propose an algorithmic framework which combines the reduct concept of rough set theory with the k-means algorithm to remove the irrelevant dimensions of a high-dimensional space and obtain appropriate clusters. Our experiments on test data show that this framework increases the efficiency of the clustering process and the accuracy of the results.

Keywords: High dimensional clustering, sub-space, k-means, rough set, discernibility matrix.
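
A minimal sketch of the overall pipeline: discard (approximately) irrelevant dimensions and then run k-means on the reduced space. A simple variance-based filter is used here purely as a stand-in for the rough-set reduct computed from the discernibility matrix, which is the paper's actual contribution.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.feature_selection import VarianceThreshold
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Synthetic high-dimensional data: 10 informative dimensions plus 90 noisy ones.
X_info, y = make_blobs(n_samples=300, centers=4, n_features=10, random_state=0)
X = np.hstack([X_info, np.random.default_rng(0).normal(0, 0.1, (300, 90))])

# Stand-in for the reduct: keep only dimensions whose variance exceeds a threshold.
selector = VarianceThreshold(threshold=1.0)
X_red = selector.fit_transform(X)

labels_full = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
labels_red = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_red)

print("kept dimensions:", X_red.shape[1])
print("ARI, all dimensions:    ", adjusted_rand_score(y, labels_full))
print("ARI, reduced dimensions:", adjusted_rand_score(y, labels_red))
```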

9098 Creep Transition in a Thin Rotating Disc Having Variable Density with Inclusion

Authors: Pankaj, Sonia R. Bansal

Abstract:

Creep stresses and strain rates have been obtained for a thin rotating disc of variable density with inclusion by using Seth's transition theory. The density of the disc is assumed to vary radially as ρ = ρ0 (r/b)^(-m), where ρ0 and m are real positive constants. It has been observed that a disc whose density increases radially rotates at a higher angular speed, thus decreasing the possibility of a fracture at the bore, whereas for a disc whose density decreases radially the possibility of a fracture at the bore increases.

Keywords: Elastic-Plastic, Inclusion, Rotating disc, Stress, Strain rates, Transition, variable density.

9097 Information and Communication Technologies vs. Education and Training: Contribution to Understand the Millennials’ Generational Effect

Authors: Fauquet-Alekhine Philippe

Abstract:

Information and Communication Technologies (ICT) have been growing in importance every day, especially since the 1990s (the last decade of birth of the Millennial generation). While social interactions involving the Millennial generation have been studied, little investigation has addressed this generation's use of ICT and its impact on outcomes in education and professional training. By observing and interviewing students preparing an MSc, we aimed at characterizing the interaction between students and ICT during courses. We found that up to 50% of the students (mainly female) used ICT during courses, at a rate of 0.84 occurrences per minute for some of them, and they thought this did not disturb learning and was even helpful. As recent research shows that multitasking leads people to think they perform much better than they actually do, further observations with assessments are needed to conclude whether or not the use of ICT by students during courses is a real strength.

Keywords: Education, ICT, generational effect, training.

9096 Accessible Facilities in Home Environment for Elderly Family Members in Sri Lanka

Authors: M. A. N. Rasanjalee Perera

Abstract:

The world is facing several problems due to the increasing elderly population. In Sri Lanka, along with the complexity of modern society and structural and functional changes in the family, caring for elders is emerging as a social problem, and this situation may intensify as the country moves into middle-income status. Seeking higher education and related career opportunities, and urban living in modern housing, are new trends through which several problems are generated. Among the many issues related to elders, the lack of accessible and appropriate facilities in their houses as well as in public buildings can be identified as a major problem. This study argues that the welfare facilities provided for elderly people in the country, particularly in the home environment, are not adequate. It is questionable whether modern housing features such as bathrooms, pantries, lobbies and leisure areas match elders' physical and mental needs; consequently, elders face domestic accidents and many other difficulties within their living environments, a fact also borne out by hospital records in the country. This study therefore tries to identify how far modern houses are suited to elders' needs, and further questions whether aging is considered at all when people buy, plan and renovate houses. A randomly selected sample of 50 houses was observed and 50 persons were interviewed around the Maharagama urban area in Colombo district to obtain primary data, while relevant secondary data and information were used for a deeper analysis. The study clearly found that none of the houses in the sample considered elders' needs in planning, renovating or arranging the home. Instead, most of the families gave priority to a rich and elegant appearance and to modern facilities, particularly bathrooms, pantries, large sitting areas, balconies, parking slots for two vehicles, and parapet walls with roller gates. A significant finding is that even though many children of the aged are now middle-aged and approaching their own old age, they do not plan their future living within a safe and comfortable home, despite hoping to spend the latter part of their lives in their current homes. This highlights that not only other responsible parts of society but also those who are approaching old age ignore the problems of the aged. At the same time, it was found that more than 80% of old parents do not like to stay in their children's homes, as the living environments of such modern homes are neither familiar nor convenient for them. In this context, the aged in Sri Lanka may have to live alone in their own homes due to the current trend of migrating to urban living in modern houses, while current urban families living in modern houses may later have to add accessible facilities to their home environment, as present modern housing facilities may not be appropriate for a better life in the latter part of their lives.

Keywords: Aging population, elderly care, home environment, housing facilities.

9095 A Rule-based Approach for Anomaly Detection in Subscriber Usage Pattern

Authors: Rupesh K. Gopal, Saroj K. Meher

Abstract:

In this report we present a rule-based approach to detect anomalous telephone calls. The method uses subscriber usage CDR (call detail record) data sampled over two observation periods: a study period and a test period. The study period contains call records of customers' non-anomalous behaviour. Customers are first grouped according to similar usage behaviour (e.g., average number of local calls per week). For the customers in each group, we develop a probabilistic model to describe their usage and use maximum likelihood estimation (MLE) to estimate the parameters of the calling behaviour. We then determine thresholds by calculating the acceptable change within a group. MLE is applied to the data in the test period to estimate the parameters of the calling behaviour, and these parameters are compared against the thresholds; any deviation beyond a threshold raises an alarm. This method has the advantage of identifying local anomalies, in contrast to techniques which identify global anomalies. The method was tested on 90 days of study data and 10 days of test data from telecom customers. For medium to large deviations in the test window, the method identifies 90% of anomalous usage with less than a 1% false alarm rate.

Keywords: Subscription fraud, fraud detection, anomaly detection, maximum likelihood estimation, rule-based systems.
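
A minimal sketch of the per-group modelling step, assuming (as an illustrative choice, since the report does not name the distribution) that weekly call counts are Poisson: the rate is fitted by MLE on the study period, a threshold is derived from the acceptable within-group variation, and a test-period estimate beyond the threshold raises an alarm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weekly local-call counts for one customer group (synthetic stand-in for CDR data).
study_weeks = rng.poisson(lam=20, size=12)        # ~90 days of study data
test_weeks = rng.poisson(lam=35, size=2)          # ~10 days of test data, usage has drifted

# The MLE of a Poisson rate is simply the sample mean.
lam_study = study_weeks.mean()
lam_test = test_weeks.mean()

# Acceptable change within the group: k standard deviations of the weekly count.
k = 3.0
threshold = k * np.sqrt(lam_study)

if abs(lam_test - lam_study) > threshold:
    print(f"ALARM: estimated rate moved from {lam_study:.1f} to {lam_test:.1f} calls/week")
else:
    print("usage within acceptable range")
```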

9094 Influence of the Entropic Parameter on the Flow Geometry and Morphology

Authors: D. Mirauda, M. Greco, A. Volpe Plantamura

Abstract:

The need to update the inputs of numerical models, because of geometric and resistive variations in rivers subject to sediment transport phenomena, requires detailed control and monitoring activities. The human and financial resources these activities demand move the research towards the development of expeditious methodologies able to evaluate the outflows through the measurement of more easily acquired quantities. Recent studies have highlighted the dependence of the entropic parameter on the kinematic and geometric flow conditions, showing a meaningful variability according to the section shape, dimension and slope. Such dependences, even if not yet well defined, could reduce the difficulties of field activities and the data elaboration time. On the basis of this evidence, the relationships between the entropic parameter and the geometric and resistive quantities, obtained through a large and detailed laboratory campaign on steady free-surface flows in conditions of macro and intermediate homogeneous roughness, are analyzed and discussed.

Keywords: Froude number, entropic parameter, roughness, water discharge.
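
For context, the entropic parameter referred to here is usually the parameter M of Chiu's entropy-based velocity distribution, which links the cross-sectional mean and maximum velocities. The relation below is the commonly cited form and is quoted as background under that assumption; it is not taken from this abstract.

```latex
% Entropy-based relation between mean and maximum velocity (Chiu-type formulation):
% a single value of M characterizes the velocity distribution of the cross-section.
\[
  \frac{\bar{u}}{u_{\max}} \;=\; \Phi(M) \;=\; \frac{e^{M}}{e^{M}-1} \;-\; \frac{1}{M}
\]
```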

9093 The Effect of the Hourly Compensation on the Unemployment Rate: Comparative Analysis of United States, Canada and the United Kingdom Using Panel Data Regression Analysis

Authors: Ashiquer Rahman, Hares Mohammad, Ummey Salma

Abstract:

A country's hourly compensation and unemployment rates are two of its most crucial indicators. They are not merely statistics: they have profound effects on individuals, families, the country, and the economy, and they are inversely related to one another. Increased hourly compensation in the manufacturing sector can have a favorable effect on job-changing decisions. Moreover, the relationship between hourly compensation and unemployment is complex and influenced by broader economic factors. In this paper, in order to determine the effect of hourly compensation on the unemployment rate, we use panel data regression models and evaluate the expected link between the two variables. We estimate the fixed effects model (FEM), evaluate the error components model (ECM), and determine which of the two is better by pooling all 60 observations. We then analyze and review the data by comparing countries (the United States, Canada and the United Kingdom) using panel data regression models. Finally, we provide the results, analysis and a summary of this research on how hourly compensation affects the unemployment rate. Additionally, this paper offers relevant and useful guidelines for government and the academic community on using an econometric and social approach to the effect of hourly compensation on the unemployment rate.

Keywords: Hourly compensation, unemployment rate, panel data regression models, dummy variables, random effects model, fixed effects model, the linear regression model.
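
A minimal sketch of the fixed effects estimation via least-squares dummy variables (entity dummies) with statsmodels. The tiny synthetic panel stands in for the 60 country-year observations, so the coefficient values mean nothing; the error components (random effects) model is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
countries = ["US", "Canada", "UK"]
years = range(2000, 2020)          # 3 countries x 20 years = 60 observations

rows = []
for c in countries:
    for t in years:
        comp = 20 + 0.8 * (t - 2000) + rng.normal(0, 2)        # hourly compensation
        unemp = 8 - 0.15 * comp + rng.normal(0, 0.5)           # unemployment rate
        rows.append({"country": c, "year": t, "compensation": comp, "unemployment": unemp})
panel = pd.DataFrame(rows)

# Fixed effects via least-squares dummy variables: C(country) adds an intercept per entity.
fem = smf.ols("unemployment ~ compensation + C(country)", data=panel).fit()
print("slope on compensation:", fem.params["compensation"], "  R^2:", fem.rsquared)
```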

9092 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes

Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani

Abstract:

Developing a method to estimate gene functions is an important task in bioinformatics. One approach to annotation is the identification of the metabolic pathway that a gene is involved in. Since gene expression data reflect various intracellular phenomena, these data are considered to be related to genes' functions. However, it has been difficult to estimate gene function with high accuracy, which is thought to be caused by the difficulty of measuring gene expression accurately: even when measured under the same condition, gene expressions usually vary. In this study, we propose a feature extraction method that focuses on the variability of gene expressions to estimate a gene's metabolic pathway accurately. First, we estimated the distribution of each gene's expression from replicate data. Next, we calculated the similarity between all gene pairs by KL divergence, a measure of the similarity between distributions. Finally, we used the similarity vectors as feature vectors and trained a multiclass SVM to identify the genes' metabolic pathways. To evaluate the method, we applied it to budding yeast and trained a multiclass SVM to identify seven metabolic pathways. As a result, the accuracy obtained with our method was higher than that obtained from the raw gene expression data. Thus, our method combined with KL divergence is useful for identifying genes' metabolic pathways.

Keywords: Metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning.
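
A minimal sketch of the feature construction: fit a Gaussian to each gene's replicate expressions, compute pairwise KL divergences in closed form, and use each gene's row of divergences as its feature vector for a multiclass SVM. The Gaussian assumption and the synthetic data are illustrative; the paper estimates the distributions from real microarray replicates.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_genes, n_replicates, n_pathways = 60, 5, 3

# Synthetic replicate expression data: genes in the same pathway share similar distributions.
pathways = rng.integers(0, n_pathways, n_genes)
means = 5.0 * pathways + rng.normal(0, 0.5, n_genes)
sds = 0.5 + 0.1 * pathways
expr = rng.normal(means[:, None], sds[:, None], (n_genes, n_replicates))

# Per-gene Gaussian estimates.
mu = expr.mean(axis=1)
sigma = expr.std(axis=1, ddof=1) + 1e-6

def kl_gauss(m1, s1, m2, s2):
    """KL divergence D(N(m1, s1^2) || N(m2, s2^2)) in closed form."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# Feature vector of gene i = its KL divergence to every other gene.
features = np.array([[kl_gauss(mu[i], sigma[i], mu[j], sigma[j])
                      for j in range(n_genes)] for i in range(n_genes)])

svm = SVC(kernel="rbf", gamma="scale")
print("CV accuracy:", cross_val_score(svm, features, pathways, cv=5).mean())
```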

9091 An Analysis of Compression Methods and Implementation of Medical Images in Wireless Network

Authors: C. Rajan, K. Geetha, S. Geetha

Abstract:

The motivation for image compression is to reduce the irrelevance and redundancy of image data in order to store or transmit it efficiently from one place to another. Several types of compression methods are available. Without compression, file sizes are noticeably larger, usually several megabytes, but with compression it is possible to reduce the file size to as little as 10% of the original without a noticeable loss in quality. Image compression can be lossless or lossy, and compression techniques can be applied to image, audio, video and text data. This research work mainly concentrates on encoding methods, the DCT, compression methods, security, etc. Different methodologies and network simulations have been analyzed, and various compression methodologies and their performance metrics have been investigated and presented in tabular form.

Keywords: Image compression techniques, encoding, DCT, lossy compression, lossless compression, JPEG.
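
A minimal sketch of the DCT step at the heart of JPEG-style lossy compression: an 8x8 block is transformed with a 2-D DCT, quantized (discarding small high-frequency coefficients), and reconstructed with the inverse transform. The uniform quantization step size is an arbitrary illustrative value rather than a JPEG quantization table.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block.T, norm="ortho").T, norm="ortho")

def idct2(block):
    return idct(idct(block.T, norm="ortho").T, norm="ortho")

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)   # one 8x8 image block

coeffs = dct2(block - 128)            # level-shift then forward 2-D DCT
q = 24.0                              # uniform quantization step (illustrative)
quantized = np.round(coeffs / q)      # most high-frequency coefficients become zero
reconstructed = idct2(quantized * q) + 128

print("non-zero coefficients kept:", int(np.count_nonzero(quantized)), "of 64")
print("max reconstruction error:", float(np.abs(block - reconstructed).max()))
```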

9090 Convergence Analysis of an Alternative Gradient Algorithm for Non-Negative Matrix Factorization

Authors: Chenxue Yang, Mao Ye, Zijian Liu, Tao Li, Jiao Bao

Abstract:

Non-negative matrix factorization (NMF) is a useful computational method for finding the basis information of multivariate non-negative data. A popular approach to solving the NMF problem is the multiplicative update (MU) algorithm, but it has some defects, so the column-wisely alternating gradient (cAG) algorithm was proposed. In this paper, we analyze the convergence of the cAG algorithm and show its advantages over the MU algorithm. The stability of the equilibrium point is used to prove the convergence of the cAG algorithm: a classic model is used to obtain the equilibrium point, and invariant sets are constructed to guarantee the integrity of the stability. Finally, the convergence conditions of the cAG algorithm are obtained, which help reduce the evaluation time and are confirmed in the experiments. Using the same method, we verify that the MU algorithm has a zero divisor and is convergent at zero, and that its convergence conditions at zero are similar to those of the cAG algorithm at non-zero points. However, it is meaningless to discuss convergence at zero, which is not the result we usually want from NMF. Thus, we theoretically illustrate the advantages of the cAG algorithm.

Keywords: Non-negative matrix factorizations, convergence, cAG algorithm, equilibrium point, stability.
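
For reference, a minimal sketch of the baseline multiplicative update (MU) rules discussed here; the cAG algorithm itself is not reproduced. The small epsilon added to the denominators guards against the zero-divisor issue mentioned in the abstract.

```python
import numpy as np

def nmf_mu(V, r, n_iter=500, eps=1e-9, seed=0):
    """Factor V (m x n, non-negative) as W @ H with W m x r and H r x n, using MU updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update of H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update of W
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_mu(V, r=4)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```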

9089 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

Aerial photogrammetry of shallow-water bottoms has the potential to be an efficient high-resolution survey technique for shallow-water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from a systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor, which is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited, and it suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors such as the location and the image acquisition and GPS measurement conditions, and the most effective method can be selected by statistical selection (e.g. leave-one-out cross validation).

Keywords: Bottom elevation, multi-view stereo, river, structure-from-motion.
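
A minimal sketch of the empirical calibration step: regress measured true depths on SfM-MVS apparent depths at calibration points to obtain a correction factor, then apply it to new apparent depths. The through-the-origin form and the synthetic numbers are assumptions for illustration; 1.34 appears only as the physics-based reference value mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Calibration points: the apparent (SfM-MVS) depth underestimates the true depth.
true_depth = rng.uniform(0.2, 1.5, 30)                      # m, field-measured
apparent_depth = true_depth / 1.34 + rng.normal(0, 0.02, 30)

# Empirical correction factor from a through-the-origin least-squares fit.
factor = (apparent_depth @ true_depth) / (apparent_depth @ apparent_depth)
print("empirical correction factor:", round(factor, 3), "(refractive index of water ~1.34)")

# Apply to new apparent depths to obtain refraction-corrected depths.
new_apparent = np.array([0.30, 0.55, 0.90])
corrected = factor * new_apparent
print("corrected depths (m):", corrected.round(3))
```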

9088 On Convergence Property of MINRES Method for Solving a Complex Shifted Hermitian Linear System

Authors: Guiding Gu, Guo Liu

Abstract:

We discuss the convergence of the minimum residual (MINRES) method for the solution of the complex shifted Hermitian system (αI + H)x = f. Our convergence analysis shows that the method converges faster than it does for the real shifted Hermitian system (Re(α)I + H)x = f under the condition Re(α) + λmin(H) > 0, and that a larger imaginary part of the shift α gives better convergence. Numerical experiments confirm these convergence properties.

Keywords: complex shifted linear system, Hermitian matrix, MINRES method.

9087 Frequency- and Content-Based Tag Cloud Font Distribution Algorithm

Authors: Ágnes Bogárdi-Mészöly, Takeshi Hashimoto, Shohei Yokoyama, Hiroshi Ishikawa

Abstract:

The spread of Web 2.0 has caused an explosion of user-generated content. Users can tag resources to describe and organize them, and tag clouds provide a rough impression of the relative importance of each tag within the overall cloud in order to facilitate browsing among numerous tags and resources. The goal of our paper is to enrich the visualization of tag clouds. A font distribution algorithm is proposed that calculates a novel metric based on frequency and content and classifies tags into classes from this metric based on a power-law distribution and percentages. The suggested algorithm has been validated and verified on the tag cloud of a real-world thesis portal.

Keywords: Tag cloud, font distribution algorithm, frequency-based, content-based, power law.
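
A minimal sketch of the classification step: a per-tag metric is computed and tags are binned into font-size classes. Here the metric is plain frequency and the class boundaries are percentile-based, which only stands in for the paper's combined frequency-and-content metric and its power-law-based thresholds.

```python
import numpy as np
from collections import Counter

tags = ["data", "web", "cloud", "data", "ai", "data", "web", "sensor", "ai", "data"]
freq = Counter(tags)

metric = {t: float(f) for t, f in freq.items()}          # stand-in for the combined metric
values = np.array(list(metric.values()))

# Five font-size classes from percentile boundaries of the metric distribution.
font_sizes = [10, 13, 16, 20, 26]
boundaries = np.percentile(values, [20, 40, 60, 80])

def font_class(v):
    return int(np.searchsorted(boundaries, v, side="right"))

for tag, v in sorted(metric.items(), key=lambda kv: -kv[1]):
    print(f"{tag:>8s}: metric={v:.0f}  font={font_sizes[font_class(v)]}px")
```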

9086 Negative Temperature Dependence of a Gravity - A Reality

Authors: Alexander L. Dmitriev, Sophia A. Bulgakova

Abstract:

The temperature dependence of the force of gravitation is one of the fundamental problems of physics. The problem has gained special significance because the general theory of relativity, which predicts only an extremely weak positive influence of a body's temperature on its weight, effectively rejects any possibility of measuring a negative influence of temperature on gravity under laboratory conditions. Indeed, recognizing a negative temperature dependence of gravitation would, for example, imply the fundamental impossibility of reaching a singularity ("a black hole") in a gravitational collapse. Laboratory experiments with precise weighing of heated metal samples, indicating a negative influence of body temperature on physical weight, are described, and the influence of measurement errors is analyzed. Calculations of the temperature distribution in the volume of the bar, consistent with the experimental time dependence of the samples' weight, are carried out. A physical substantiation of the negative temperature dependence of body weight, based on the correlation between the acceleration of the thermal motion of a body's micro-particles and its absolute temperature, is given.

Keywords: Gravitation, temperature, weight.

9085 A Meta-Analytic Path Analysis of e-Learning Acceptance Model

Authors: David W.S. Tai, Ren-Cheng Zhang, Sheng-Hung Chang, Chin-Pin Chen, Jia-Ling Chen

Abstract:

This study reports the results of a meta-analytic path analysis of an e-learning acceptance model with k = 27 studies; the databases searched included the Information Sciences Institute (ISI) website. The variables recorded were perceived usefulness, perceived ease of use, attitude toward behavior, and behavioral intention to use e-learning. A correlation matrix of these variables was derived from the meta-analytic data and then analyzed using structural path analysis to test the fit of the e-learning acceptance model to the observed aggregated data. The results showed the revised hypothesized model to be a reasonably good fit to the aggregated data. Discussion and implications are given in the article.

Keywords: E-learning, Meta Analytic Path Analysis, Technology Acceptance Model

9084 Agent-Based Simulation for Supply Chain Transport Corridors

Authors: Kamalendu Pal

Abstract:

Supply chains are the backbone of trade and commerce, and their logistics use different transport corridors on a regular basis for operational purposes. International supply chain transport corridors include different infrastructure elements (e.g. weighbridges, package handling equipment, border clearance authorities, and so on). This paper presents the use of multi-agent systems (MAS) to model and simulate some aspects of transport corridors, and in particular weighbridge resource optimization for operational profit. An underlying multi-agent model provides a means of modeling the relationships among stakeholders in order to enable coordination in a transport corridor environment. Simulations of the costs of container unloading and reloading and of the waiting time of queued trucks have been carried out using data sets. The results of the simulation provide potential guidance in making decisions about optimal service resource allocation in a trade corridor.

Keywords: Multi-agent systems, simulation, supply chain, transport corridor, weighbridge.
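
A minimal discrete-event sketch of trucks queuing at a single weighbridge, written with SimPy rather than a full multi-agent framework; the arrival and service rates are invented purely to show how waiting time at the weighbridge can be measured in such a simulation.

```python
import random
import simpy

RANDOM_SEED, SIM_MINUTES = 42, 480          # one 8-hour shift
MEAN_ARRIVAL, MEAN_WEIGHING = 6.0, 5.0      # minutes (illustrative rates)
waits = []

def truck(env, weighbridge):
    arrived = env.now
    with weighbridge.request() as req:       # queue for the weighbridge
        yield req
        waits.append(env.now - arrived)      # record time spent waiting
        yield env.timeout(random.expovariate(1.0 / MEAN_WEIGHING))

def arrivals(env, weighbridge):
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_ARRIVAL))
        env.process(truck(env, weighbridge))

random.seed(RANDOM_SEED)
env = simpy.Environment()
weighbridge = simpy.Resource(env, capacity=1)
env.process(arrivals(env, weighbridge))
env.run(until=SIM_MINUTES)

print(f"trucks served: {len(waits)}, mean wait: {sum(waits)/len(waits):.1f} min")
```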

9083 Biogas Potentiality of Agro-wastes Jatropha Fruit Coat

Authors: M.S. Dhanya, N. Gupta, H.C. Joshi, Lata

Abstract:

The present investigation was undertaken to explore the biogas potential of Jatropha (Jatropha curcas, Euphorbiaceae) fruit coat (JFC), alone and in combination with cattle dung (CD) in various proportions at 15 per cent total solids, by batch-phase anaerobic digestion over a ten-week hydraulic retention time (HRT) at a temperature of 35°C ± 1°C. The maximum biogas production was observed for cattle dung and Jatropha fruit coat in the 2:1 ratio, with 403.84 L/kg dry matter, followed by the 3:1, 1:2, 1:1 and 1:3 ratios with 329.66, 219.77, 217.79 and 203.64 L/kg dm respectively, as compared to 178.49 L/kg dm for CD alone. JFC alone was found to produce 91 per cent of the total biogas obtained from cattle dung. The methane content of the biogas in all treatments was on par with that of cattle dung.

Keywords: Jatropha Fruit Coat, Cattle dung, Hydraulic Retention Time, Dry matter

9082 Destination Port Detection for Vessels: An Analytic Tool for Optimizing Port Authorities Resources

Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin

Abstract:

Port authorities in congested ports face many challenges in allocating their resources to provide a safe and secure loading/unloading procedure for cargo vessels. Selecting a destination port is the decision of a vessel master, based on many factors such as weather, wavelength and changes of priorities. Having access to a tool which leverages Automatic Identification System (AIS) messages to monitor vessels' movements and accurately predict their next destination port promotes an effective resource allocation process for port authorities. In this research, we propose a method, named Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring AIS messages. Our RRoT method creates reference routes from historical AIS messages and uses some of the best trajectory similarity measures to identify the destination of a vessel from its recent movement. We evaluated five different similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), Area between two curves (Area) and Curve Length (CL). Our experiments show that the method identifies the destination port with an accuracy of 98.97% and an F-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.

Keywords: Spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization.
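
A minimal sketch of the Dynamic Time Warping distance used as the best-performing similarity measure, comparing a vessel's recent track against candidate reference routes given as sequences of (lat, lon) points; the coordinates and the plain Euclidean ground distance are illustrative simplifications of the RRoT pipeline.

```python
import math

def dtw(a, b):
    """Dynamic Time Warping distance between two point sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Reference routes to two hypothetical destination ports (lat, lon waypoints).
routes = {
    "Port A": [(44.60, -63.55), (44.40, -63.20), (44.20, -62.80)],
    "Port B": [(44.60, -63.55), (44.80, -63.30), (45.00, -63.00)],
}
recent_track = [(44.58, -63.50), (44.42, -63.25), (44.25, -62.90)]

predicted = min(routes, key=lambda name: dtw(recent_track, routes[name]))
print("predicted destination:", predicted)
```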

9081 An Efficient Ant Colony Optimization Algorithm for Multiobjective Flow Shop Scheduling Problem

Authors: Ahmad Rabanimotlagh

Abstract:

In this paper an ant colony optimization algorithm is developed to solve the permutation flow shop scheduling problem. In this problem, which has been studied extensively in the literature, there are a set of m machines and a set of n jobs; all jobs are processed on all machines, and the sequence of jobs being processed is the same on every machine. Here the problem is optimized with respect to two criteria, makespan and total flow time. The results are compared with those obtained by previously developed algorithms, and the proposed approach is shown to perform best among the algorithms from the literature considered.

Keywords: Scheduling, Flow shop, Ant colony optimization, Makespan, Flow time
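
For concreteness, a short sketch of how the two optimization criteria are evaluated for a given job permutation in a permutation flow shop, using the standard completion-time recurrence; the processing-time matrix is a made-up example, and the ant colony construction and pheromone update are not reproduced here.

```python
import numpy as np

def evaluate(permutation, p):
    """Return (makespan, total flow time) for a job permutation.

    p[j][k] is the processing time of job j on machine k; every job visits the
    machines in the same order, as in the permutation flow shop problem.
    """
    n, m = len(permutation), p.shape[1]
    C = np.zeros((n, m))
    for i, job in enumerate(permutation):
        for k in range(m):
            ready = max(C[i - 1][k] if i > 0 else 0.0,
                        C[i][k - 1] if k > 0 else 0.0)
            C[i][k] = ready + p[job][k]
    makespan = C[-1][-1]
    total_flow_time = C[:, -1].sum()
    return makespan, total_flow_time

p = np.array([[3, 2, 4],     # job 0 on machines 0..2
              [2, 5, 1],     # job 1
              [4, 1, 3]])    # job 2
print(evaluate([0, 1, 2], p))
print(evaluate([2, 0, 1], p))
```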

9080 An Efficient Algorithm for Reliability Lower Bound of Distributed Systems

Authors: Mohamed H. S. Mohamed, Yang Xiao-zong, Liu Hong-wei, Wu Zhi-bo

Abstract:

The reliability of distributed systems and computer networks has been modeled by a probabilistic network, or graph G. Computing the residual connectedness reliability (RCR), denoted by R(G), under the node fault model is very useful but is an NP-hard problem. Since computing the exact value of R(G) may require time exponential in the network size, it is important to calculate a tight approximate value, especially a lower bound, within a moderate computation time. In this paper, we propose an efficient algorithm for a reliability lower bound of distributed systems with unreliable nodes. We also apply the algorithm to several typical classes of networks to evaluate the lower bounds and show its effectiveness.

Keywords: Distributed systems, probabilistic network, residual connectedness reliability, lower bound.
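
To make the quantity concrete, here is a brute-force Monte Carlo estimate of residual connectedness reliability under the node fault model: nodes survive independently with probability p, and the network is considered operational if the surviving nodes induce a connected subgraph. This only illustrates the definition of R(G); it is not the authors' analytic lower-bound algorithm.

```python
import random
import networkx as nx

def rcr_monte_carlo(G, p, trials=20000, seed=0):
    """Estimate Pr[the subgraph induced by surviving nodes is connected]."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        alive = [v for v in G.nodes if rng.random() < p]
        if not alive:
            continue                               # no surviving nodes: counted as failed
        sub = G.subgraph(alive)
        if nx.is_connected(sub):
            ok += 1
    return ok / trials

G = nx.petersen_graph()                            # a small 10-node test network
for p in (0.7, 0.8, 0.9):
    print(f"p = {p}: R(G) ~ {rcr_monte_carlo(G, p):.3f}")
```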

9079 Model Order Reduction of Linear Time Variant High Speed VLSI Interconnects using Frequency Shift Technique

Authors: J. V. R. Ravindra, M. B. Srinivas

Abstract:

Accurate modeling of high-speed RLC interconnects has become a necessity to address signal integrity issues in current VLSI design. To accurately model a dispersive system of interconnects at higher frequencies, a full-wave analysis is required; however, conventional circuit simulation of interconnects with full-wave models is extremely CPU-expensive. We present an algorithm for reducing large VLSI circuits to much smaller ones with similar input-output behavior. A key feature of our method, called the Frequency Shift Technique, is that it is capable of reducing linear time-varying systems. This enables it to capture frequency-translation and sampling behavior, which is important in communication subsystems such as mixers, RF components and switched-capacitor filters. Reduction is obtained by projecting the original system, described by linear differential equations, onto a lower-dimensional subspace. Experiments carried out using the Cadence design simulator indicate that the proposed technique achieves a greater percentage reduction with less CPU time than other model order reduction techniques in the literature. We also present applications to RF circuit subsystems, obtaining size reductions and evaluation speedups of orders of magnitude with insignificant loss of accuracy.

Keywords: Model order Reduction, RLC, crosstalk
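
As background, a minimal sketch of generic projection-based model order reduction for a linear time-invariant state-space model: an orthonormal basis V of a Krylov subspace is built and the system matrices are projected as Ar = V^T A V, and so on. This common backbone is shown only for orientation; the paper's Frequency Shift Technique for linear time-varying systems is not reproduced here.

```python
import numpy as np

def reduce_lti(A, B, C, k):
    """Project x' = Ax + Bu, y = Cx onto the k-dimensional Krylov subspace span{B, AB, ...}."""
    K = np.column_stack([np.linalg.matrix_power(A, i) @ B for i in range(k)])
    V, _ = np.linalg.qr(K)              # orthonormal basis of the Krylov subspace
    Ar = V.T @ A @ V
    Br = V.T @ B
    Cr = C @ V
    return Ar, Br, Cr, V

rng = np.random.default_rng(0)
n, k = 50, 8
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # a stable-ish test system
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Ar, Br, Cr, V = reduce_lti(A, B, C, k)

# Compare transfer functions H(s) = C (sI - A)^-1 B at one frequency point.
s = 1j * 2.0
H_full = (C @ np.linalg.solve(s * np.eye(n) - A, B)).item()
H_red = (Cr @ np.linalg.solve(s * np.eye(k) - Ar, Br)).item()
print("transfer-function error at s = 2j:", abs(H_full - H_red))
```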

9078 The Integration of Patient Health Record Generated from Wearable and Internet of Things Devices into Health Information Exchanges

Authors: Dalvin D. Hill, Hector M. Castro Garcia

Abstract:

A growing number of individuals use wearable devices on a daily basis, and the usage and functionality of these devices vary from user to user. One popular use of such devices is to track health-related activities, which are typically stored in the device's memory or uploaded to an account in the cloud; following the current trend, the data accumulated from the wearable device are kept in a standalone location. In many of these cases, this health-related data is not considered in the holistic view of a user's health lifestyle or record. Yet the health-related data generated from wearable and Internet of Things (IoT) devices can serve as empirical information for a medical provider, since these standalone data can add value to a patient's holistic health record. This paper proposes a solution for incorporating the data gathered from wearable and IoT devices into a patient's Personal Health Record (PHR) stored within a Health Information Exchange (HIE).

Keywords: Electronic health record, health information exchanges, Internet of Things, personal health records, wearable devices, wearables.

9077 Application of De Novo Programming Approach for Optimizing the Business Process

Authors: Z. Babic, I. Veza, A. Balic, M. Crnjac

Abstract:

The linear programming model is sometimes difficult to apply in real business situations due to its assumption of proportionality. This paper shows an example of how to use the De Novo programming approach instead of linear programming. In De Novo programming, resources are not fixed as in linear programming; resource quantities depend only on the available budget, which is a new and important element of the De Novo approach. Two different production situations are presented: increasing costs and quantity discounts for raw materials. The focus of this paper is on the advantages of the De Novo approach in optimizing the production plan of a company that produces souvenirs made from the famous stone of the island of Brac, one of the largest islands in Croatia.

Keywords: De Novo Programming, production plan, stone souvenirs, variable prices.
