Search results for: HTML
32 An Optimal Algorithm for HTML Page Building Process
Authors: Maryam Jasim Abdullah, Bassim. H. Graimed, Jalal. S. Hameed
Abstract:
Demand for web services grows with the increasing number of Web users. Web services are delivered through Web applications, and the size of a Web application is driven by its users' requirements and interests; differences in these requirements and interests cause Web applications to grow. An efficient way to save storage space for more data and information is to implement algorithms that compress the contents of Web application documents. This paper introduces an algorithm to reduce Web application size based on reducing the contents of HTML files. It removes unimportant content regardless of the HTML file size, without discarding any character that is needed in the HTML page building process.
Keywords: HTML code, HTML tag, WEB applications, Document compression, DOM tree.
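The abstract does not spell out the removal rules, so the following is only a minimal Python sketch of the general idea: dropping HTML content that browsers do not need when building the page, such as comments and redundant inter-tag whitespace. The function name and regular expressions are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch (not the paper's algorithm): strip HTML comments and collapse
# inter-tag whitespace that the browser ignores when building the page.
import re

def reduce_html(html: str) -> str:
    # Remove comments (conditional comments are kept, to stay on the safe side).
    html = re.sub(r"<!--(?!\[if).*?-->", "", html, flags=re.DOTALL)
    # Collapse runs of whitespace between tags to a single space.
    html = re.sub(r">\s+<", "> <", html)
    return html

if __name__ == "__main__":
    sample = "<html>  <!-- banner -->\n  <body>\n    <p>Hello</p>\n  </body>\n</html>"
    print(reduce_html(sample))
```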
31 A Formatting Method for Transforming XML Data into HTML
Authors: Zhe JIN, Motomichi TOYAMA
Abstract:
In this paper, we propose a fixed formatting method for PPX (Pretty Printer for XML). PPX is a query language for XML databases with extensive formatting capability, producing HTML as the result of a query. The fixed formatting method completely specifies the combination of variables and layout specification operators within the layout expression of the GENERATE clause of PPX. In the experiment, a quick comparison shows that PPX requires far less description than XSLT or XQuery programs performing the same tasks.
Keywords: PPX, XML, HTML, XSLT, XQuery, fixed formatting method.
30 Approximately Similarity Measurement of Web Sites Using Genetic Algorithms and Binary Trees
Authors: Doru Anastasiu Popescu, Dan Rădulescu
Abstract:
In this paper, we determine the similarity of two HTML web applications. A genetic algorithm is used to select the most significant web pages of each application, so that not every web page of a site has to be considered. Using these significant web pages, we then compute the similarity value between the two applications. The algorithm is efficient because it compares only a reduced number of web pages, at the cost of returning an approximate similarity value. Binary trees are used to store the tags of the significant pages. The algorithm was implemented in the Java language.
Keywords: Tag, HTML, web page, genetic algorithm, similarity value, binary tree.
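As a rough illustration of the comparison step, the sketch below measures how similar two pages are by the HTML tags they use (Jaccard similarity over tag sets). The genetic selection of significant pages and the binary-tree storage described in the abstract are not reproduced; the class and function names are assumptions.

```python
# Hedged sketch: compare two pages by the set of tags they contain.
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = set()
    def handle_starttag(self, tag, attrs):
        self.tags.add(tag)

def tag_set(html: str) -> set:
    collector = TagCollector()
    collector.feed(html)
    return collector.tags

def page_similarity(html_a: str, html_b: str) -> float:
    a, b = tag_set(html_a), tag_set(html_b)
    return len(a & b) / len(a | b) if a | b else 1.0

print(page_similarity("<html><body><p>x</p></body></html>",
                      "<html><body><div><p>y</p></div></body></html>"))
```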
29 A Proposal of an Automatic Formatting Method for Transforming XML Data
Authors: Zhe JIN, Motomichi TOYAMA
Abstract:
PPX (Pretty Printer for XML) is a query language that offers a concise way of formatting XML data into HTML. In this paper, we propose a simple formatting specification that combines automatic layout operators and variables in the layout expression of the GENERATE clause of PPX. This method can automatically format irregular XML data contained in part of an XML document, using a layout decision rule that refers to the DTD. In the experiment, a quick comparison shows that PPX requires far less description than XSLT or XQuery programs performing the same tasks.
Keywords: PPX, Irregular XML data, Layout decision rule, HTML.
28 HTML5 Online Learning Application with Offline Web, Location Based, Animated Web, Multithread, and Real-Time Features
Authors: Sheetal R. Jadhwani, Daisy Sang, Chang-Shyh Peng
Abstract:
Web applications are an integral part of modern life. They are mostly based upon the HyperText Markup Language (HTML). While HTML meets the basic needs, it has some shortcomings. For example, applications can cease to work once the user goes offline, real-time updates may lag, and the user interface can freeze on computationally intensive tasks. The latest language specification, HTML5, attempts to rectify the situation with new tools and protocols. This paper studies the new Web Storage, Geolocation, Web Worker, Canvas, and Web Socket APIs, and presents applications to test their features and efficiency.
Keywords: HTML5, Web Worker, Canvas, Web Socket.
27 Compression of Semistructured Documents
Authors: Leo Galambos, Jan Lansky, Katsiaryna Chernik
Abstract:
EGOTHOR is a search engine that indexes the Web and allows us to search Web documents. Its hit list contains the URL and title of each hit, together with a snippet that briefly shows the match. The snippet can almost always be assembled by an algorithm that has full knowledge of the original document (mostly an HTML page). This implies that the search engine must store the full text of the documents as part of the index. Such a requirement leads us to pick an appropriate compression algorithm to reduce the space demand. One solution could be to use common compression methods, for instance gzip or bzip2, but it may be preferable to develop a new method that takes advantage of the document structure, or rather, the textual character of the documents. Special text compression algorithms, as well as methods for compressing XML documents, already exist. The aim of this paper is to integrate the two approaches to achieve an optimal compression ratio.
Keywords: Compression, search engine, HTML, XML.
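For context, the sketch below measures the general-purpose baseline the abstract mentions (gzip and bzip2 sizes for an HTML document); the structure-aware method the paper proposes is not shown.

```python
# Quick sketch: compare general-purpose compressors on an HTML document, the
# baseline a structure-aware method would aim to beat.
import bz2, gzip

def compression_report(html_bytes: bytes) -> dict:
    return {
        "original": len(html_bytes),
        "gzip": len(gzip.compress(html_bytes)),
        "bzip2": len(bz2.compress(html_bytes)),
    }

if __name__ == "__main__":
    doc = b"<html><body>" + b"<p>sample paragraph</p>" * 200 + b"</body></html>"
    print(compression_report(doc))
```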
26 An Evaluation of the Usability of IT Faculty Educational Portal at University of Benghazi
Authors: Nasser M. Amaitik, Mohammed J. El-Sahli
Abstract:
Evaluation of educational portals is an important subject area that needs more attention from researchers. A university whose educational portal is difficult for teachers, students or management staff to use and interact with can see its position and reputation diminished. It is therefore important to be able to evaluate the quality of the e-services the university provides, so that they can be improved over time. The present study evaluates the usability of the Information Technology Faculty portal at the University of Benghazi. Two evaluation methods were used: a questionnaire-based method and an online automated tool-based method. The first method was used to measure the portal's external attributes of usability (information, content and organization of the portal; navigation, links and accessibility; aesthetic and visual appeal; performance and effectiveness; and educational purpose) from the users' perspective, while the second method was used to measure the portal's internal attributes of usability (number and size of HTML files, number and size of images, load time, HTML check errors, browser compatibility problems, number of bad and broken links), which cannot be perceived by the users. The study showed that some of the usability aspects reach an acceptable level of performance and quality, while others do not. In general, it was concluded that the usability of the IT faculty educational portal is generally acceptable. Recommendations and suggestions to address the weaknesses and improve the portal's usability are presented in this study.
Keywords: Automated tools-based evaluation, Educational portals, Evaluation criteria, Questionnaire-based evaluation, Usability evaluation.
25 Restructuring of XML Documents in the Form of Ontologies
Authors: Jamal Bakkas, Mohamed Bahaj, Abdellatif Soklabi
Abstract:
The intense use of the web has made it a very changeable environment whose content is in permanent evolution to adapt to demand. Standards have accompanied this evolution, moving from standards that mix data with its presentation without any structuring, such as HTML, to standards that separate the two and give more importance to the structural aspect of the content, such as XML and its derivatives. Currently, with the appearance of the Semantic Web, ontologies are increasingly present on the web, and standards that allow their representation, such as OWL and RDF/RDFS, are gaining momentum. This paper provides an automatic method that converts an XML Schema document into an ontology represented in OWL.
Keywords: XML Schema, OWL, RDB, Mapping, Ontology.
24 The Usefulness of Logical Structure in Flexible Document Categorization
Authors: Jebari Chaker, Ounalli Habib
Abstract:
This paper presents a new approach for automatic document categorization. Exploiting the logical structure of the document, our approach assigns an HTML document to one or more categories (thesis, paper, call for papers, email, ...). Using a set of training documents, our approach generates a set of rules used to categorize new documents. The approach's flexibility comes from associating a weight with each rule, representing its importance in discriminating between the possible categories. This weight is dynamically modified after each new document categorization. Experiments with the proposed approach provide satisfactory results.
Keywords: categorization rule, document categorization, flexible categorization, logical structure.
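The abstract describes weighted categorization rules whose weights evolve with each categorization. The sketch below is a hedged toy version of that idea; the feature names, categories, weights and update rule are invented for illustration and are not the paper's rule set.

```python
# Toy weighted-rule categorizer: rules map structural features to categories,
# and rule weights can be reinforced after a confirmed categorization.
rules = [
    # (feature found in the document's logical structure, category, weight)
    ("has_abstract_section", "paper", 0.9),
    ("has_chapter_headings", "thesis", 0.8),
    ("has_submission_deadline", "call_for_papers", 0.7),
]

def categorize(doc_features: set, threshold: float = 0.5) -> list:
    scores = {}
    for feature, category, weight in rules:
        if feature in doc_features:
            scores[category] = scores.get(category, 0.0) + weight
    return [c for c, s in scores.items() if s >= threshold]

def reinforce(feature: str, category: str, delta: float = 0.05) -> None:
    # Strengthen the rule that fired for a confirmed categorization.
    for i, (f, c, w) in enumerate(rules):
        if f == feature and c == category:
            rules[i] = (f, c, min(1.0, w + delta))

print(categorize({"has_abstract_section", "has_submission_deadline"}))
```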
23 An Empirical Analysis of Arabic WebPages Classification using Fuzzy Operators
Authors: Ahmad T. Al-Taani, Noor Aldeen K. Al-Awad
Abstract:
In this study, a fuzzy similarity approach for Arabic web page classification is presented. The approach uses a fuzzy term-category relation, manipulating the membership degrees of the training data and the degree value of a test web page. Six measures are used and compared in this study: the Einstein, Algebraic, Hamacher, MinMax, Special case fuzzy and Bounded Difference approaches. These measures are applied and compared using 50 different Arabic web pages. The Einstein measure gave the best performance among the measures. An analysis of these measures and concluding remarks are drawn in this study.
Keywords: Text classification, HTML documents, Web pages, Machine learning, Fuzzy logic, Arabic Web pages.
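For reference, the standard textbook forms of the named fuzzy conjunction operators are sketched below in Python; the "Special case fuzzy" measure is paper-specific and therefore omitted, and the way the paper combines these operators with the term-category relation is not reproduced.

```python
# Standard fuzzy conjunction (t-norm) operators; a and b are membership
# degrees in [0, 1].
def algebraic(a, b):            # algebraic product
    return a * b

def einstein(a, b):             # Einstein product
    return (a * b) / (2 - (a + b - a * b))

def hamacher(a, b):             # Hamacher product (a = b = 0 handled separately)
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def min_max(a, b):              # minimum t-norm
    return min(a, b)

def bounded_difference(a, b):   # bounded difference (Lukasiewicz t-norm)
    return max(0.0, a + b - 1)

print(einstein(0.6, 0.8), hamacher(0.6, 0.8), bounded_difference(0.6, 0.8))
```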
22 A Comparative Analysis of Different Web Content Mining Tools
Authors: T. Suresh Kumar, M. Arthanari, N. Shanthi
Abstract:
Nowadays, the Web has become one of the most pervasive platforms for information exchange and retrieval, collecting the suitable and well-fitting information one requires from websites. Data mining is the process of extracting data available on the internet, and Web mining is one element of data mining techniques, related to various research communities such as information retrieval, database management systems and artificial intelligence. In this paper we discuss the concepts of Web mining, focusing mainly on one of its categories, namely Web Content Mining and its various tasks. Mining tools are essential for scanning the many images, text and HTML documents, and their results are then used by the various search engines. We conclude by presenting a comparative table of these tools based on some pertinent criteria.
Keywords: Data Mining, Web Mining, Web Content Mining, Mining Tools, Information retrieval.
21 The Overall Aspects of E-Leaning Issues, Developments, Opportunities and Challenges
Authors: Bala Dhandayuthapani Veerasamy
Abstract:
Rapid advances in the field of Information and Communication Technology (ICT) have facilitated the development of teaching and learning methods and prepared them to serve the needs of diverse educational institutions. In other words, the information age has redefined the fundamentals and transformed institutions and their methods of service delivery forever. The vision is that the desired transformation of teaching and learning methods can proceed through e-learning, which is commonly understood as the use of networked information and communication technology in teaching and learning practice. This paper deals with the general aspects of e-learning, together with its issues, developments, opportunities and challenges, which higher institutions can make their own.
Keywords: 2D & 3D Animations, challenges, E-learning, Flash, HTML, issues, Multimedia, opportunities, VRML
20 A Comparative Study of Web-pages Classification Methods using Fuzzy Operators Applied to Arabic Web-pages
Authors: Ahmad T. Al-Taani, Noor Aldeen K. Al-Awad
Abstract:
In this study, a fuzzy similarity approach for Arabic web page classification is presented. The approach uses a fuzzy term-category relation, manipulating the membership degrees of the training data and the degree value of a test web page. Six measures are used and compared in this study: the Einstein, Algebraic, Hamacher, MinMax, Special case fuzzy and Bounded Difference approaches. These measures are applied and compared using 50 different Arabic web pages. The Einstein measure gave the best performance among the measures. An analysis of these measures and concluding remarks are drawn in this study.
Keywords: Text classification, HTML, web pages, machine learning, fuzzy logic, Arabic web pages.
19 SIP Authentication Scheme using ECDH
Authors: Aytunc Durlanik, Ibrahim Sogukpinar
Abstract:
SIP (Session Initiation Protocol), with its simple and efficient HTML-based call control messaging, is increasingly being preferred for VoIP networks. For authentication and authorization purposes there are many approaches and considerations for securing SIP and eliminating forgery of SIP messages. Elliptic Curve Cryptography (ECC), meanwhile, has significant advantages over other Public Key Cryptography (PKC) systems, such as smaller key sizes and faster computations, which make data transmission more secure and efficient. In this work a new approach is proposed for secure SIP authentication using a public key exchange mechanism based on ECC. The total execution time and memory requirements of the proposed scheme are improved in comparison with non-elliptic approaches by adopting an elliptic-curve-based key exchange mechanism.
Keywords: SIP, Elliptic Curve Cryptography, voice over IP.
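To make the key-exchange idea concrete, here is an illustrative ECDH key agreement using the Python "cryptography" package; the SIP message flow that would carry the public keys, and the paper's exact scheme, are not shown.

```python
# Illustrative ECDH key agreement between two endpoints; both sides derive the
# same session key from their private key and the peer's public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_session_key(own_private, peer_public) -> bytes:
    shared_secret = own_private.exchange(ec.ECDH(), peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"sip-auth").derive(shared_secret)

# Each SIP endpoint generates its own key pair and exchanges public keys.
ua_key = ec.generate_private_key(ec.SECP256R1())
proxy_key = ec.generate_private_key(ec.SECP256R1())
assert derive_session_key(ua_key, proxy_key.public_key()) == \
       derive_session_key(proxy_key, ua_key.public_key())
```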
18 Web Application Security, Attacks and Mitigation
Authors: Ayush Chugh, Gaurav Gupta
Abstract:
Today's technology is heavily dependent on web applications, which are being adopted by users at a very rapid pace and have made our work more efficient. They include webmail, online retail, online gaming, wikis, train and flight departure and arrival boards, and the list is very long. They are developed in different languages such as PHP, Python, C# and ASP.NET, together with HTML and scripts such as JavaScript. Attackers develop tools and techniques to exploit web applications and legitimate websites, which has led to the rise of web application security, broadly classified into declarative security and program security. The most common attacks on applications are SQL injection and XSS, which give access to unauthorized users and can severely damage or destroy the system. This paper presents a detailed literature description and analysis of web application security, examples of attacks, and steps to mitigate the vulnerabilities.
Keywords: Attacks, Injection, JavaScript, SQL, Vulnerability, XSS.
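As a minimal illustration of the two standard mitigations such papers discuss, the sketch below shows a parameterized SQL query (against injection) and output escaping (against XSS) in Python; the table and column names are assumptions.

```python
# Minimal sketch of two standard mitigations: parameterized SQL queries
# against injection, and output escaping against XSS.
import html
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input is bound, never concatenated into SQL.
    return conn.execute(
        "SELECT id, display_name FROM users WHERE username = ?", (username,)
    ).fetchone()

def render_comment(comment: str) -> str:
    # Escape before inserting user-supplied text into an HTML page.
    return "<p>" + html.escape(comment) + "</p>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, display_name TEXT, username TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Alice', 'alice')")
print(find_user(conn, "alice' OR '1'='1"))   # None: the payload is not executed
print(render_comment('<script>alert("xss")</script>'))
```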
17 SeqWord Gene Island Sniffer: a Program to Study the Lateral Genetic Exchange among Bacteria
Authors: Bezuidt O., Lima-Mendez G., Reva O. N.
Abstract:
SeqWord Gene Island Sniffer, a new program for the identification of mobile genetic elements in sequences of bacterial chromosomes, is presented. The program is based on the analysis of oligonucleotide usage variations in DNA sequences. 3,518 mobile genetic elements were identified in 637 bacterial genomes and further analyzed by sequence similarity and the functionality of encoded proteins. The results of this study are stored in an open database (http://anjie.bi.up.ac.za/geidb/geidbhome.php). The developed computer program and the database provide information valuable for further investigation of the distribution of mobile genetic elements and virulence factors among bacteria. The program is available for download at www.bi.up.ac.za/SeqWord/sniffer/index.html.
Keywords: mobile genetic elements, virulence, bacterial genomes
16 A Web Text Mining Flexible Architecture
Authors: M. Castellano, G. Mastronardi, A. Aprile, G. Tarricone
Abstract:
Text Mining is an important step of the Knowledge Discovery process. It is used to extract hidden information from non-structured or semi-structured data. This aspect is fundamental because much of the Web's information is semi-structured due to the nested structure of HTML code, much of it is linked, and much of it is redundant. Web Text Mining supports the whole knowledge mining process through the mining, extraction and integration of useful data, information and knowledge from Web page contents. In this paper, we present a Web Text Mining process able to discover knowledge in a distributed and heterogeneous multi-organization environment. The Web Text Mining process is based on a flexible architecture and is implemented in four steps able to examine web content and to extract useful hidden information through mining techniques. Our Web Text Mining prototype starts from the retrieval of Web job offers from which, through a Text Mining process, information useful for their fast classification is drawn out; this information is, essentially, the job offer's location and required skills.
Keywords: Web text mining, flexible architecture, knowledge discovery.
15 Feature Selection for Web Page Classification Using Swarm Optimization
Authors: B. Leela Devi, A. Sankar
Abstract:
The web's increased popularity has brought with it a huge amount of information, which makes automated web page classification systems essential to improve search engines' performance. Web pages have many features, such as HTML or XML tags, hyperlinks, URLs and text contents, which can be considered during an automated classification process. It is known that web page classification is enhanced by hyperlinks, as they reflect web page linkages. The aim of this study is to reduce the number of features used, in order to improve the accuracy of web page classification. In this paper, a novel feature selection method using an improved Particle Swarm Optimization (PSO) based on the principle of evolution is proposed. The extracted features were tested on the WebKB dataset using a parallel neural network to reduce the computational cost.
Keywords: Web page classification, WebKB Dataset, Term Frequency-Inverse Document Frequency (TF-IDF), Particle Swarm Optimization (PSO).
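The following is a hedged sketch of the overall idea: TF-IDF features for a few toy pages and a simple binary-style PSO searching for a small term subset. The fitness function is a stand-in; the paper's improved PSO, the parallel neural network and the full WebKB dataset are not reproduced.

```python
# Toy PSO feature selection over TF-IDF features (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["faculty home page research", "course syllabus homework exam",
        "student projects research group", "course schedule lecture notes"]
labels = np.array([0, 1, 0, 1])

X = TfidfVectorizer().fit_transform(docs).toarray()
n_features = X.shape[1]

def fitness(mask: np.ndarray) -> float:
    if mask.sum() == 0:
        return 0.0
    # Stand-in fitness: class separation on the selected features, minus a
    # small penalty for keeping many features (a real setup would use
    # classifier accuracy, as in the paper).
    sep = np.abs(X[labels == 0][:, mask == 1].mean() -
                 X[labels == 1][:, mask == 1].mean())
    return sep - 0.01 * mask.sum()

rng = np.random.default_rng(0)
positions = rng.random((10, n_features))          # particle positions in [0, 1]
velocities = np.zeros_like(positions)
best_pos = positions.copy()
best_fit = np.array([fitness((p > 0.5).astype(int)) for p in positions])
g_best = best_pos[best_fit.argmax()].copy()

for _ in range(30):
    r1, r2 = rng.random(positions.shape), rng.random(positions.shape)
    velocities = (0.7 * velocities + 1.5 * r1 * (best_pos - positions)
                  + 1.5 * r2 * (g_best - positions))
    positions = np.clip(positions + velocities, 0, 1)
    fits = np.array([fitness((p > 0.5).astype(int)) for p in positions])
    improved = fits > best_fit
    best_pos[improved] = positions[improved]
    best_fit[improved] = fits[improved]
    g_best = best_pos[best_fit.argmax()].copy()

print("selected feature count:", int((g_best > 0.5).sum()))
```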
14 Improving Performance of World Wide Web by Adaptive Web Traffic Reduction
Authors: Achuthsankar S. Nair, J. S. Jayasudha
Abstract:
The ever-increasing use of the World Wide Web over the existing network results in poor performance. Several techniques have been developed for reducing web traffic, such as compressing file sizes, saving web pages at the client side, and smoothing the burst nature of traffic into a constant rate, but no single method is adequate to access documents instantly through the Internet. In this paper, adaptive hybrid algorithms are developed for reducing web traffic. Intelligent agents are used for monitoring the web traffic. Depending upon the bandwidth usage, the user's preferences, and server and browser capabilities, the intelligent agents apply the best techniques to achieve maximum traffic reduction. Web caching, compression, filtering, optimization of HTML tags, and traffic dispersion are incorporated into this adaptive selection. Using this new hybrid technique, latency is reduced to 20-60% and the cache hit ratio is increased to 40-82%.
Keywords: Bandwidth, Congestion, Intelligent Agents, Prefetching, Web Caching.
13 Information Extraction from Unstructured and Ungrammatical Data Sources for Semantic Annotation
Authors: Quratulain N. Rajput, Sajjad Haider, Nasir Touheed
Abstract:
The internet has become an attractive avenue for global e-business, e-learning, knowledge sharing, etc. Due to the continuous increase in the volume of web content, it is not practically possible for a user to extract information by browsing and integrating data from the huge number of web sources retrieved by the existing search engines. Semantic web technology enables advances in information extraction by providing a suite of tools to integrate data from different sources. To take full advantage of the semantic web, it is necessary to annotate existing web pages into semantic web pages. This research develops a tool, named OWIE (Ontology-based Web Information Extraction), for semantic web annotation using domain-specific ontologies. The tool automatically extracts information from HTML pages with the help of pre-defined ontologies and gives it a semantic representation. Two case studies have been conducted to analyze the accuracy of OWIE.
Keywords: Ontology, Semantic Annotation, Wrapper, Information Extraction.
12 The Video Database for Teaching and Learning in Football Refereeing
Authors: M. Armenteros, A. Domínguez, M. Fernández, A. J. Benítez
Abstract:
The following paper describes the video database tool used by the Fédération Internationale de Football Association (FIFA) as part of the research project developed in collaboration with the Carlos III University of Madrid. The database project began in 2012, with the aim of creating an educational tool for the training of instructors, referees and assistant referees, and it has been used in all FUTURO III courses since 2013. The platform now contains 3,135 video clips of different match situations from FIFA competitions and has 1,835 users (FIFA instructors, referees and assistant referees). In this work, the main features of the database are described, such as the use of a search tool and the creation of multimedia presentations and video quizzes. The database has been developed in MySQL, ActionScript, Ruby on Rails and HTML. This tool has been rated by users as "very good" in all courses, which prompts us to introduce it as an ideal tool for any other sport that requires the use of video analysis.
Keywords: Video database, FIFA, refereeing, e-learning.
11 An Intelligent System for Phish Detection, using Dynamic Analysis and Template Matching
Authors: Chinmay Soman, Hrishikesh Pathak, Vishal Shah, Aniket Padhye, Amey Inamdar
Abstract:
Phishing, or the stealing of sensitive information on the web, has dealt a major blow to Internet security in recent times. Most existing anti-phishing solutions fail to handle the fuzziness involved in phish detection, leading to a large number of false positives. This fuzziness is attributed to the use of the highly flexible and, at the same time, highly ambiguous HTML language. We introduce a new perspective on phishing that tries to systematically prove whether a given page is phished or not, using the corresponding original page as the basis of comparison. It analyzes the layout of the pages under consideration to determine the percentage distortion between them, indicative of any form of malicious alteration. The system design represents an intelligent system, employing dynamic assessment, which accurately identifies brand-new phishing attacks and will prove effective in reducing the number of false positives. This framework could potentially be used as a knowledge base for educating internet users against phishing.
Keywords: World Wide Web, Phishing, Internet security, data mining.
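A rough sketch of the layout-comparison idea follows: extract the tag layout of the suspect page and of the known original and report the percentage distortion between the two sequences. The paper's dynamic analysis and template matching are richer than this; the names here are illustrative.

```python
# Rough sketch: percentage distortion between the tag layouts of two pages.
import difflib
from html.parser import HTMLParser

class LayoutExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.layout = []
    def handle_starttag(self, tag, attrs):
        self.layout.append(tag)

def layout(html: str) -> list:
    extractor = LayoutExtractor()
    extractor.feed(html)
    return extractor.layout

def distortion(original_html: str, suspect_html: str) -> float:
    ratio = difflib.SequenceMatcher(None, layout(original_html),
                                    layout(suspect_html)).ratio()
    return (1 - ratio) * 100  # percent distortion between the two layouts

print(distortion("<html><body><form><input></form></body></html>",
                 "<html><body><form><input><script></script></form></body></html>"))
```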
10 A study of Cancer-related MicroRNAs through Expression Data and Literature Search
Authors: Chien-Hung Huang, Chia-Wei Weng, Chang-Chih Chiang, Shih-Hua Wu, Chih-Hsien Huang, Ka-Lok Ng
Abstract:
MicroRNAs (miRNAs) are a class of non-coding RNAs that hybridize to mRNAs and induce either translation repression or mRNA cleavage. Recently, it has been reported that miRNAs could possibly play an important role in human diseases. By integrating miRNA target genes, cancer genes, and miRNA and mRNA expression profiles, a database is developed to link miRNAs to cancer target genes. The database provides experimentally verified human miRNA target gene information, including oncogenes and tumor suppressor genes. In addition, fragile site information for miRNAs and the strength of the correlation between miRNA and target mRNA expression levels for nine tissue types are computed, which serve as indicators suggesting that miRNAs could play a role in human cancer. The database is freely accessible at http://ppi.bioinfo.asia.edu.tw/mirna_target/index.html.
Keywords: MicroRNA, miRNA expression profile, mRNA expression profile, cancer genes, oncogene, tumor suppressor gene
9 Opinion Mining Framework in the Education Domain
Authors: A. M. H. Elyasir, K. S. M. Anbananthen
Abstract:
The internet is growing larger and becoming the most popular platform for people to share their opinions on different interests. We choose the education domain, specifically comparing some Malaysian universities against each other. This comparison produces a benchmark based on different criteria shared by online users in various online resources, including Twitter, Facebook and web pages. The comparison is accomplished using an opinion mining framework to extract and process the unstructured text and classify the result as positive, negative or neutral (polarity). We divide our framework into three main stages: opinion collection (extraction), unstructured text processing and polarity classification. The extraction stage includes web crawling, HTML parsing, sentence segmentation for punctuation classification and Part-of-Speech (POS) tagging; the second stage processes the unstructured text with stemming and stop-word removal and finally prepares the raw text for classification using Named Entity Recognition (NER); the last phase classifies the polarity and presents the overall result of the comparison among the Malaysian universities. The final result is useful for those who are interested in studying in Malaysia, in that our final output declares clear winners based on public opinion from all over the web.
Keywords: Entity Recognition, Education Domain, Opinion Mining, Unstructured Text.
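A hedged sketch of the preprocessing stages named in the abstract (sentence segmentation, POS tagging, stemming, stop-word removal) using NLTK follows; the crawler, the NER step and the polarity classifier itself are not shown.

```python
# Preprocessing sketch with NLTK: sentence segmentation, POS tagging,
# stop-word removal and stemming.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

for pkg in ("punkt", "stopwords", "averaged_perceptron_tagger"):
    nltk.download(pkg, quiet=True)

def preprocess(text: str) -> list:
    stemmer = PorterStemmer()
    stop = set(stopwords.words("english"))
    processed = []
    for sentence in nltk.sent_tokenize(text):
        tagged = nltk.pos_tag(nltk.word_tokenize(sentence))  # POS tagging
        kept = [stemmer.stem(word.lower()) for word, tag in tagged
                if word.isalpha() and word.lower() not in stop]
        processed.append(kept)
    return processed

print(preprocess("The campus facilities are excellent. Registration was slow."))
```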
8 CVOIP-FRU: Comprehensive VoIP Forensics Report Utility
Authors: Alejandro Villegas, Cihan Varol
Abstract:
Voice over Internet Protocol (VoIP) products are an emerging technology that can contain forensically important information about criminal activity. Even without user names and passwords, this forensically important information can still be gathered by investigators. Although there are a few VoIP forensic investigative applications available in the literature, most of them are designed specifically to collect evidence from the Skype product. Therefore, in order to assist law enforcement with collecting forensically important information from a variety of Betamax VoIP tools, the CVOIP-FRU framework is developed. CVOIP-FRU provides a data gathering solution that retrieves usernames, contact lists, and call and SMS logs from Betamax VoIP products. It is a scripting utility that searches for data within the registry, logs and user roaming profiles on the Windows and Mac OS X operating systems, and subsequently parses the output into readable text and HTML formats. One advantage of CVOIP-FRU over other applications is that, thanks to its intelligent data filtering capabilities and cross-platform scripting back end, it is expandable to include other VoIP solutions as well. Overall, this paper reveals the exploratory analysis performed to find the key data paths and locations, the development stages of the framework, and the empirical testing and quality assurance of CVOIP-FRU.
Keywords: Betamax, digital forensics, report utility, VoIP, VoIP Buster, VoIPWise.
7 Ethereum Based Smart Contracts for Trade and Finance
Authors: Rishabh Garg
Abstract:
Traditionally, business parties build trust with a centralized operating mechanism, such as payment by letter of credit. However, the increase in cyber-attacks and malicious hacking has jeopardized business operations and finance practices. Emerging markets, due to their high banking risks and the large presence of digital financing, are looking for technology that enables transparency and traceability of any transaction in trade, finance or supply chain management. Blockchain systems, in the absence of any central authority, enable transactions across the globe with the help of decentralized applications. DApps consist of a front-end, a blockchain back-end, and middleware, that is, the code that connects the two. The front-end can be a sophisticated web app or mobile app, which is used to implement the functions/methods on the smart contract. Web apps can employ technologies such as HTML, CSS, React and Express. In this wake, fintech and blockchain products are popping up in brokerages, digital wallets, exchanges, post-trade clearance, settlement, middleware, infrastructure and base protocols. The present paper provides a technology driven solution, financial inclusion and innovative working paradigm for business and finance.
Keywords: Authentication, blockchain, channel, cryptography, DApps, data portability, Decentralized Public Key Infrastructure, Ethereum, hash function, Hashgraph, Privilege creep, Proof of Work algorithm, revocation, storage variables, Zero Knowledge Proof.
6 A Modification of Wireless and Internet Technologies for Logistics Analysis
Authors: Apiwat Sangnoree
Abstract:
This research is designed to help a WAP-based mobile phone user analyze logistics in a traffic area by applying and designing the processes that connect the mobile user to the server databases. The design comprises a MySQL 4.1.8-nt database system as the server, with three sub-databases: traffic-light timings at intersections for different periods of the day, road distances of the area blocks into which the main sample area is divided, and speeds of sample vehicles (motorcycle, personal car and truck) in different periods of the day. For interconnection between the server and the user, PHP is used to calculate distances and travelling times from the starting point to the destination, while XHTML is applied to receive, send and display data from PHP on the user's mobile. In this research, the main sample area is the Huakwang-Ratchada area of Bangkok, Thailand, which is usually congested; the surrounding 6.25 km2 area is split into 25 blocks of 0.25 km2 each. For simulating the results, the designed server database and all communication models of this research have been uploaded to www.utccengineering.com/m4tg, and a mobile phone supporting a WAP 2.0 XHTML/HTML multimode browser was used for observing values and displayed pictures. According to the simulated results, the user can check the route pictures from the starting point to the destination, along with the analyzed travel times, when sample vehicles travel in various periods of the day.
Keywords: WAP, logistics, XHTML, internet.
5 An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes
Authors: Ritwik Dutta, Marilyn Wolf
Abstract:
This paper describes the tradeoffs and the design from scratch of a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The backend consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script makes use of the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and corresponding health data can be fed automatically, via sensor APIs or manually, as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system is implemented to allow trend analysis of selected health metrics over custom time intervals. Available on the GitHub repository system, the project is free to use for academic purposes of learning and experimenting, or practical purposes by building on it.
Keywords: Flask, Java, JavaScript, health monitoring, long term care, Mongo, Python, smart home, software engineering, webserver.
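In the spirit of the back-end described, the sketch below uses PyMongo to pull a user's stored readings and compare them against target goals; the collection and field names are assumptions, not the project's actual schema.

```python
# Illustrative PyMongo query: a daily snapshot of a user's metrics vs. goals.
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["health_dashboard"]

def daily_snapshot(user_id: str) -> dict:
    goals = db.profiles.find_one({"user_id": user_id}, {"goals": 1})["goals"]
    since = datetime.utcnow() - timedelta(days=1)
    snapshot = {}
    for metric, target in goals.items():
        readings = db.metrics.find({"user_id": user_id, "metric": metric,
                                    "timestamp": {"$gte": since}})
        values = [r["value"] for r in readings]
        latest = values[-1] if values else None
        snapshot[metric] = {"latest": latest, "target": target,
                            "met": latest is not None and latest >= target}
    return snapshot
```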
4 A Hybrid Ontology Based Approach for Ranking Documents
Authors: Sarah Motiee, Azadeh Nematzadeh, Mehrnoush Shamsfard
Abstract:
The increasing growth of information volume on the internet creates an increasing need to develop new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves the precision of ranking without losing speed. Our approach exploits natural language processing techniques to extract phrases from documents and the query and to stem words. An ontology-based conceptual method is then used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done flexibly and in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to do the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model. The test results are included at the end of the paper.
Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.
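To give a feel for spread activation-based query expansion, the toy sketch below propagates activation over a small weighted concept graph; the ontology, relation weights and decay values are invented, and ORank's improved algorithm is not reproduced.

```python
# Toy weighted spread activation for query expansion over a concept graph.
concept_graph = {
    # concept -> [(related concept, relationship weight)]
    "vehicle": [("car", 0.9), ("transport", 0.6)],
    "car": [("engine", 0.8), ("driver", 0.5)],
    "transport": [("logistics", 0.4)],
}

def expand_query(seed_concepts, decay=0.7, threshold=0.2, max_hops=2):
    activation = {c: 1.0 for c in seed_concepts}
    frontier = dict(activation)
    for _ in range(max_hops):
        next_frontier = {}
        for concept, energy in frontier.items():
            for neighbor, weight in concept_graph.get(concept, []):
                spread = energy * weight * decay
                if spread >= threshold and spread > activation.get(neighbor, 0):
                    activation[neighbor] = spread
                    next_frontier[neighbor] = spread
        frontier = next_frontier
    return activation

print(expand_query({"vehicle"}))
```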
3 ORank: An Ontology Based System for Ranking Documents
Authors: Mehrnoush Shamsfard, Azadeh Nematzadeh, Sarah Motiee
Abstract:
The increasing growth of information volume on the internet creates an increasing need to develop new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves the precision of ranking without losing speed. Our approach exploits natural language processing techniques for extracting phrases and stemming words. An ontology-based conceptual method is then used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done in various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to do the expansion based on a weighted combination of different conceptual relationships and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model. The test results are included at the end of the paper.
Keywords: Document ranking, Ontology, Spread activation algorithm, Annotation.