Search results for: Information and Communication Technologies
2769 Fuzzy C-Means Clustering for Biomedical Documents Using Ontology Based Indexing and Semantic Annotation
Authors: S. Logeswari, K. Premalatha
Abstract:
Search is the most obvious application of information retrieval. The variety of widely available biomedical data is enormous and expanding fast. This expansion means that existing techniques are no longer enough to extract the most interesting patterns from the collection according to user requirements. Recent research concentrates more on semantic-based searching than on traditional term-based searches. Algorithms for semantic search are built on the relations that exist between the words of the documents. Ontologies are used as domain knowledge for identifying these semantic relations as well as for structuring the data for effective information retrieval. Annotating data with ontology concepts is one of the widespread practices for clustering documents. In this paper, concept-based indexing and annotation-based indexing are proposed for clustering biomedical documents. The fuzzy c-means (FCM) clustering algorithm is used to cluster the documents. The performance of the proposed methods is compared with traditional term-based clustering on PubMed articles from five different disease communities. The experimental results show that the proposed methods outperform term-based fuzzy clustering.
Keywords: MeSH Ontology, Concept Indexing, Annotation, semantic relations, Fuzzy c-means.
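As a rough illustration of the clustering step described in the abstract above, the following sketch implements the standard fuzzy c-means membership and center updates on a toy document-by-concept matrix; the MeSH-based concept indexing and annotation of the paper are not reproduced, and the toy vectors are invented.

```python
# Minimal fuzzy c-means sketch for document vectors (illustrative only).
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n_docs, n_features) term/concept matrix; returns memberships U and centers V."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)               # each document's memberships sum to 1
    p = 2.0 / (m - 1)
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]    # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-10
        U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return U, V

# toy usage: 6 documents in a 4-dimensional concept space
docs = np.array([[3, 0, 0, 1], [4, 1, 0, 0], [0, 5, 1, 0],
                 [0, 4, 2, 1], [1, 0, 6, 2], [0, 1, 5, 3]], dtype=float)
U, V = fuzzy_c_means(docs, c=3)
print(U.round(2))   # soft cluster memberships per document
```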
2768 Development and Usability Assessment of a Connected Resistance Exercise Band Application for Strength-Monitoring
Authors: J. A. Batsis, G. G. Boateng, L. M. Seo, C. L. Petersen, K. L. Fortuna, E. V. Wechsler, R. J. Peterson, S. B. Cook, D. Pidgeon, R. S. Dokko, R. J. Halter, D. F. Kotz
Abstract:
Resistance exercise bands are a core component of any physical activity strengthening program. Strength training can mitigate the development of sarcopenia, the loss of muscle mass or strength and function with aging. Yet, adherence to such behavioral exercise strategies in a home-based setting is fraught with issues of monitoring and compliance. Our group developed a Bluetooth-enabled resistance exercise band capable of transmitting data to an open-source platform. In this work, we developed an application to capture this information in real time, and conducted three usability studies in two mixed-age groups of participants (n=6 each) and a group of older adults with obesity participating in a weight-loss intervention (n=20). The system was rated favorably and acceptable, and provided iterative information that could assist in future deployment on ubiquitous platforms. Our formative work provides the foundation to deliver home-based monitoring interventions in a high-risk, older adult population.
Keywords: Application, mHealth, older adult, resistance exercise band, sarcopenia.
2767 Applying Multiple Kinect on the Development of a Rapid 3D Mannequin Scan Platform
Authors: Shih-Wen Hsiao, Yi-Cheng Tsao
Abstract:
In the fields of reverse engineering and the creative industries, applying a 3D scanning process to obtain the geometric form of objects is a mature and common technique. For instance, organic objects such as faces and non-organic objects such as products can be scanned to acquire geometric information for further application. However, although the data resolution of 3D scanning devices is increasing and complementary applications are becoming more abundant, adoption of 3D scanning by the public is still limited by the relatively high price of the devices. On the other hand, Kinect, released by Microsoft, is known for its powerful functions, considerably low price, and complete technology and database support. Therefore, related studies can be carried out with Kinect at an acceptable cost and data precision. Because Kinect uses an optical mechanism to extract depth information, it is limited by the straight path of light. Thus, when a single Kinect is used for 3D scanning, various angles must be captured sequentially to obtain the complete 3D information of the object, and an integration process that combines the 3D data from the different angles with certain algorithms is also required. This sequential scanning process costs much time, and the complex integration process often encounters technical problems. Therefore, this paper applies multiple Kinects simultaneously to develop a rapid 3D mannequin scan platform and proposes suggestions on the number and angles of the Kinects. A method of establishing the coordinate system based on the relation between the mannequin and the specifications of the Kinect is proposed, and a suggestion for the angles and number of Kinects is described. An experiment applying multiple Kinects to the scanning of a 3D mannequin is constructed with the Microsoft API, and the results show that the scanning time and technical threshold can be reduced for the fashion and garment design industries.
Keywords: 3D scan, depth sensor, fashion and garment design, mannequin, multiple kinect sensor.
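The abstract describes registering several fixed Kinects into one coordinate system. The sketch below only illustrates the final fusion step, applying an assumed, pre-calibrated 4x4 extrinsic pose per sensor and concatenating the point clouds; capture via the Microsoft API and the calibration itself are not shown, and all poses and points are invented.

```python
# Illustrative sketch of fusing depth points from several fixed sensors into one
# mannequin point cloud, given per-sensor extrinsic poses (assumed already calibrated).
import numpy as np

def to_world(points_cam, extrinsic):
    """points_cam: (N, 3) points in one sensor's frame; extrinsic: 4x4 sensor-to-world pose."""
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (homo @ extrinsic.T)[:, :3]

def merge_clouds(captures):
    """captures: list of (points, extrinsic) pairs, one per sensor placed around the mannequin."""
    return np.vstack([to_world(p, T) for p, T in captures])

# toy usage: two sensors facing each other 2 m apart on the x-axis
front = (np.array([[0.0, 0.0, 1.0], [0.1, 0.2, 1.0]]), np.eye(4))
T_back = np.eye(4); T_back[0, 0] = T_back[2, 2] = -1; T_back[0, 3] = 2.0  # rotated 180 deg, shifted
back = (np.array([[0.0, 0.0, 1.0]]), T_back)
print(merge_clouds([front, back]))
```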
2766 Types of Epilepsies and Findings EEG-LORETA about Epilepsy
Authors: Leila Maleki, Ahmad Esmali Kooraneh, Hossein Taghi Derakhshi
Abstract:
Neural activity in the human brain starts from the early stages of prenatal development. This activity, or the signals generated by the brain, is electrical in nature and represents not only the brain function but also the status of the whole body. At present, three methods can record functional and physiological changes within the brain with high temporal resolution of neuronal interactions at the network level: the electroencephalogram (EEG), the magnetoencephalogram (MEG), and functional magnetic resonance imaging (fMRI); each of these has advantages and shortcomings. EEG recording with a large number of electrodes is now feasible in clinical practice. Multichannel EEG recorded from the scalp surface provides very valuable but indirect information about the source distribution, whereas deep electrode measurements yield more reliable information about the source locations. Intracranial recordings and scalp EEG are used with source imaging techniques to determine the locations and strengths of the epileptic activity. As a source localization method, Low Resolution Electro-Magnetic Tomography (LORETA) is solved for a realistic geometry based on two forward methods, the Boundary Element Method (BEM) and the Finite Difference Method (FDM). In this paper, we review the EEG-LORETA findings on epilepsy.
Keywords: Epilepsy, EEG, EEG-LORETA, LORETA analysis.
2765 Robotic Assistance in Nursing Care: Survey on Challenges and Scenarios
Authors: Pascal Gliesche, Kathrin Seibert, Christian Kowalski, Dominik Domhoff, Max Pfingsthorn, Karin Wolf-Ostermann, Andreas Hein
Abstract:
Robotic assistance in nursing care is an increasingly important area of research and development. Facing a shortage of labor and an increasing number of people in need of care, the German Nursing Care Innovation Center (Pflegeinnovationszentrum, PIZ) aims to address these challenges from the side of technology. Little is known about nurses' experiences with existing robotic assistance systems. Nurses' perspectives on starting points for the development of robotic solutions that target recurring burdensome tasks in everyday nursing care are of particular interest. This paper presents findings focusing on robotics resulting from an explanatory mixed-methods study on nurses' experiences with, and their expectations for, innovative technologies in nursing care in stationary and ambulatory care facilities and hospitals in Germany. Based on the findings, eight scenarios for robotic assistance are identified from the real needs of practitioners. An initial system addressing a single use case is described to show perspectives for the use of robots in nursing care.
Keywords: Robotics and automation, engineering management, engineering in medicine and biology, medical services, public healthcare.
2764 A Multimedia Telemonitoring Network for Healthcare
Authors: Hariton N. Costin, Sorin Puscoci, Cristian Rotariu, Bogdan Dionisie, Marinela C. Cimpoesu
Abstract:
The TELMES project aims to develop a secure multimedia system devoted to medical consultation teleservices. It will be finalized with a pilot system for a regional telecenter network that connects local telecenters, using multimedia platforms as support. This network will enable the implementation of complex medical teleservices (teleconsultations, telemonitoring, home care, emergency medicine, etc.) for a broader range of patients and medical professionals, mainly for family doctors and people living in rural or isolated regions. Thus, a scalable multimedia network based on modern IT&C paradigms will result. It will gather two interconnected regional telecenters, in Iaşi and Piteşti, Romania, each of them also permitting local connections of hospitals, diagnostic and treatment centers, as well as local networks of family doctors, patients, and even educational entities. As communications infrastructure, we aim to develop combined fixed-mobile-internet (broadband) links. Other possible communication environments will be GSM/GPRS/3G and radio waves. Electrocardiogram (ECG) acquisition, internet transmission, and local analysis using embedded technologies have already been carried out successfully for patient telemonitoring.
Keywords: Healthcare, telemedicine, telemonitoring, ECG analysis.
2763 Analysis of Bio-Oil Produced by Pyrolysis of Coconut Shell
Authors: D. S. Fardhyanti, A. Damayanti
Abstract:
The utilization of biomass as a source of new and renewable energy is being pursued. One of the technologies to convert biomass into an energy source is pyrolysis, which converts biomass into more valuable products such as bio-oil. Bio-oil is a liquid produced by a steam condensation process from the pyrolysis of coconut shells. The components of a coconut shell, e.g. hemicellulose, cellulose and lignin, will be oxidized to phenolic compounds as the main components of the bio-oil. The phenolic compounds in bio-oil are corrosive; they cause various difficulties in the combustion system because of high viscosity, low calorific value, corrosiveness, and instability. Phenolic compounds are nevertheless very valuable; phenol is used as the main component for the manufacture of antiseptics, disinfectants (known as Lysol) and deodorizers. The experiments typically occurred at atmospheric pressure in a pyrolysis reactor at temperatures ranging from 300 °C to 350 °C with a heating rate of 10 °C/min and a holding time of 1 hour at the pyrolysis temperature. Gas Chromatography-Mass Spectroscopy (GC-MS) was used to analyze the bio-oil components. The obtained bio-oil has a viscosity of 1.46 cP, a density of 1.50 g/cm³, a calorific value of 16.9 MJ/kg, and a molecular weight of 1996.64. By GC-MS, the analysis of the bio-oil showed that it contained phenol (40.01%), ethyl ester (37.60%), 2-methoxy-phenol (7.02%), furfural (5.45%), formic acid (4.02%), 1-hydroxy-2-butanone (3.89%), and 3-methyl-1,2-cyclopentanedione (2.01%).
Keywords: Bio-oil, pyrolysis, coconut shell, phenol, gas chromatography-mass spectroscopy.
2762 Impact of Modeling Different Fading Channels on Wireless MAN Fixed IEEE 802.16d OFDM System with Diversity Transmission Technique
Authors: Shanar Askar, Shahzad Memon, Lachhman Das, M. S. Kalhoro
Abstract:
WiMAX (Worldwide Interoperability for Microwave Access) is a promising technology which can offer high-speed data, voice and video services to the customer end, a market presently dominated by cable and digital subscriber line (DSL) technologies. This paper deals with the performance assessment of WiMAX systems. The biggest advantage of broadband wireless access (BWA) over its wired competitors is its increased capacity and ease of deployment. The aims of this paper are to model and simulate the fixed OFDM IEEE 802.16d physical layer under various combinations of digital modulation (BPSK, QPSK, and 16-QAM) over diverse combinations of fading channels (AWGN, SUIs). The Stanford University Interim (SUI) channel series was proposed to simulate the fixed broadband wireless access channel environments where IEEE 802.16d is to be deployed. It has six channel models grouped into three categories according to three typical outdoor terrains, in order to give a comprehensive picture of the effect of fading channels on the overall performance of the system.
Keywords: WiMAX, OFDM, Additive White Gaussian Noise, Fading Channel, SUI, Doppler Effect.
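To make the kind of simulation described above concrete, here is a minimal Monte-Carlo bit-error-rate sketch for just one of the listed combinations, QPSK over AWGN; the SUI fading profiles, the diversity transmission technique and the full IEEE 802.16d OFDM chain are not modeled, and all parameters are illustrative assumptions.

```python
# Minimal BER-vs-Eb/N0 sketch for Gray-mapped QPSK over AWGN.
import numpy as np

rng = np.random.default_rng(1)
n_bits = 200_000
bits = rng.integers(0, 2, n_bits)
# pairs of bits -> one complex symbol with unit average energy
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

for ebn0 in range(0, 9, 2):
    n0 = 1.0 / (2 * 10 ** (ebn0 / 10))          # Es = 1 and Es/N0 = 2 * Eb/N0 for QPSK
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(symbols.size)
                               + 1j * rng.standard_normal(symbols.size))
    rx = symbols + noise
    rx_bits = np.empty(n_bits, dtype=int)
    rx_bits[0::2] = (rx.real < 0).astype(int)   # hard decision per quadrature branch
    rx_bits[1::2] = (rx.imag < 0).astype(int)
    print(f"Eb/N0 = {ebn0} dB  BER = {np.mean(rx_bits != bits):.4f}")
```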
2761 Public Transport Planning System by Dijkstra Algorithm: Case Study Bangkok Metropolitan Area
Authors: Pimploi Tirastittam, Phutthiwat Waiyawuththanapoom
Abstract:
Nowadays, promotion of the public transportation system in the Bangkok Metropolitan Area has increased, for example through the "Free Bus for Thai Citizen" campaign and the prospect of several new MRT routes, to improve convenience and comfort for Bangkok Metropolitan Area citizens. But citizens do not make full use of these services because they lack data and information, and also lack confidence in the public transportation system of Thailand, especially in the time and safety aspects. This research is the Public Transport Planning System by Dijkstra Algorithm: Case Study Bangkok Metropolitan Area, focusing on bus, BTS and MRT schedules and routes to give the most information to passengers. They can choose the way and the routes easily by using the Dijkstra STAR algorithm of graph theory, and the system also shows the fare of the trip. The application was evaluated by 30 ordinary users to find the mean and standard deviation of the developed system. Results of the evaluation showed that the system is at a good level of satisfaction (4.20 and 0.40). From these results we can conclude that the system can be used properly and effectively according to the objective.
Keywords: Dijkstra Algorithm, Graph Theory, Shortest Route, Public Transport, Bangkok Metropolitan Area.
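A minimal sketch of the shortest-route core such a planner relies on, using Dijkstra's algorithm with a binary heap; the station names and travel times below are invented placeholders rather than actual Bangkok bus/BTS/MRT data, and fare lookup is omitted.

```python
# Dijkstra shortest paths over a toy transit graph keyed by station name.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]}; returns shortest-distance and predecessor maps."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

graph = {  # edge weights as minutes of travel time (invented)
    "Mo Chit": [("Siam", 15), ("Chatuchak", 3)],
    "Chatuchak": [("Sukhumvit", 16)],
    "Siam": [("Asok", 8)],
    "Sukhumvit": [("Asok", 2)],
    "Asok": [],
}
dist, prev = dijkstra(graph, "Mo Chit")
print(dist["Asok"])   # 21 minutes via Chatuchak and Sukhumvit
```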
2760 Performance Determinants for Convenience Store Suppliers
Authors: Zainah Abdullah, Aznur Hajar Abdullah
Abstract:
This paper examines the impact of information and communication technology (ICT) usage, internal relationship, supplier-retailer relationship, logistics services and inventory management on convenience store suppliers' performance. Data was collected from 275 convenience store managers in Malaysia using a set of questionnaires. The multiple linear regression results indicate that inventory management, supplier-retailer relationship, logistics services and internal relationship are predictors of supplier performance as perceived by convenience store managers. However, ICT usage is not a predictor of supplier performance. The study focuses only on convenience stores and petrol station convenience stores and concentrates only on managers. The results provide insights to suppliers who serve convenience stores, and possibly similar retail formats, on the factors to consider in improving their service to retailers. The results also provide insights to the government, in its aspiration to improve the business operations of convenience stores, on ways to enhance the adoption of ICT by retailers and suppliers.
Keywords: Information and communication technology (ICT), internal relationship, inventory management, logistics services, supplier performance, supplier-retailer relationship.
2759 Multi-Agents Coordination Model in Inter-Organizational Workflow: Applying in E-Government
Authors: E. Karoui Chaabane, S. Hadouaj, K. Ghedira
Abstract:
Inter-organizational Workflow (IOW) is commonly used to support collaboration between the heterogeneous and distributed business processes of different autonomous organizations in order to achieve a common goal. E-government is considered an application field of IOW. The coordination of the different organizations is the fundamental problem in IOW and remains the major cause of failure in e-government projects. In this paper, we introduce a new coordination model for IOW that improves the collaboration between government administrations and that respects the IOW requirements applied to e-government. For this purpose, we adopt a multi-agent approach, which deals more easily with the characteristics of inter-organizational digital government: distribution, heterogeneity and autonomy. Our model also integrates different technologies to deal with semantic and technological interoperability. Moreover, it preserves the existing systems of government administrations by offering a distributed coordination based on interface communication. This is especially applicable in developing countries, where administrations are not necessarily equipped with workflow systems. The use of our coordination techniques allows an easier migration to an e-government solution and at a lower cost. To illustrate the applicability of the proposed model, we present a case study of identity card creation in Tunisia.
Keywords: E-government, Inter-organizational workflow, Multi-agent systems, Semantic web services.
2758 Examining Corporate Tax Evaders: Evidence from the Finalized Audit Cases
Authors: Ming Ling Lai, Zalilawati Yaacob, Normah Omar, Norashikin Abdul Aziz, Bee Wah Yap
Abstract:
This paper aims to (1) analyze the profiles of transgressors (detected evaders); (2) examine the reasons that triggered a tax audit, the causes of tax evasion, the audit timeframe and the tax penalty charged; and (3) assess whether tax auditors followed the guidelines stated in the 'Tax Audit Framework' when conducting tax audits. In 2011, the Inland Revenue Board Malaysia (IRBM) audited and finalized 557 company cases. With official permission, data on all 557 cases were obtained from the IRBM. Of these, a total of 421 cases with complete information were analyzed. About 58.1% were small and medium corporations, and 32.8% were from the construction industry. Selection for tax audit was based on risk analysis (66.8%), information from third parties (11.1%), and firms with low profitability or fluctuating profit patterns (7.8%). The three persistent causes of tax evasion by firms were over-claimed expenses (46.8%), fraudulent reporting of income (38.5%) and overstated purchases (10.5%). These findings are consistent with past literature. Results showed that tax auditors took six to 18 months to close audit cases. More than half of the tax evaders were fined 45% of the additional tax raised during the audit for a first offence. The study found that tax auditors did follow the guidelines in the 'Tax Audit Framework' in audit selection, settlement and penalty imposition.
Keywords: Corporate tax fraud, tax non-compliance, tax evasion, tax audit, fraudulent reporting.
2757 Metaverse as a Form of Reality and the Impact of Metaverse in Higher Education
Authors: Josefina Bengoechea, Alex Bell
Abstract:
In the metaverse as originally conceived, the characters were avatars working in a three-dimensional virtual reality that existed beyond reality. The metaverse is a "post-reality universe": a perpetual and persistent multiuser environment in which physical reality and digital virtuality are merged. The virtual infrastructure needed to build a metaverse (which is in the process of being created) comprises web3 technologies, non-fungible tokens (NFTs), blockchain, smart contracts, and cryptocurrencies. Web3 refers to a new iteration of the current web2, which is dominated by powerful providers like Google, Apple, Amazon, and other corporate tech companies. The vision for web3 is a decentralized, and thus more equitable, version of the web. The aim of this paper is, first, to present the metaverse as a form of reality in which physical reality and digital virtuality combine to provide new experiences to users; and second, to discuss the implications for education, specifically for higher education, and how programs will have to be modified so that the skills obtained by graduates match those demanded by the virtual labour market. This paper builds upon a constructivist approach, combining a literature review and research on key publications.
Keywords: Ethics in technology, cross realities, cryptocurrencies, labour market, metaverse, technology in higher education.
2756 Determinants of Students' Intentions to Use a Mobile Messaging Service in Educational Institutions: A Theoretical Model
Authors: Boonlert Watjatrakul
Abstract:
Mobile marketing through mobile messaging services has shown highly impressive growth, as it enables e-business firms to communicate with their customers effectively. Educational institutions have therefore started using this service to enhance communication with their students. Previous studies, however, offer limited understanding of applying mobile messaging services in education. This study proposes a theoretical model to understand the drivers of students' intentions to use a university's mobile messaging service. The model indicates that social influence, perceived control and attitudes affect students' intention to use the university's mobile messaging service. It also provides five antecedents of students' attitudes: perceived utility (information utility, entertainment utility, and social utility), innovativeness, information seeking, transaction specificity (content specificity, sender specificity, and time specificity) and privacy concern. The proposed model enables universities to understand what students are concerned about in the use of a mobile messaging service and to handle the service more effectively. The paper discusses the model development and concludes with the limitations and implications of the proposed model.
Keywords: Education, intention, mobile marketing, mobile messaging.
2755 Influence of Online Sports Events on Betting among Nigerian Youth
Authors: B. O. Diyaolu
Abstract:
The opportunities provided by advances in technology with regard to sports betting are numerous. Nigerian youth are not left out, especially with the use of phones and visits to sports betting outlets. Today, it is more difficult to identify a true fan, as quite a number of them became fans as a result of betting on live games. This study investigated the influence of online sports events on betting among Nigerian youth. A descriptive survey research design was used, and the population consists of all Nigerian youth who engage in betting and live within the southwest zone of Nigeria. A simple random sampling technique was used to pick three states from the southwest zone of Nigeria. 2,500 respondents, comprising males and females, were sampled from the three states. A structured questionnaire on Online Sports Event Contribution to Sports Betting (OSECSB) was used. The instrument consists of three sections: Section A seeks information on the demographic data of the respondents, Section B seeks information on online sports events, and Section C was used to extract information on sports betting. The modified instrument, which consists of 14 items, has a reliability coefficient of 0.74. The hypothesis was tested at the 0.05 significance level. The completed questionnaires were collated, coded, and analyzed using descriptive statistics of frequency counts, percentages and pie charts, and the inferential statistic of multiple regression. The findings of this study revealed that online sports events are a significant predictor of the increase in sports betting among Nigerian youth. The media and television, as well as globalization and the internet, coupled with social media and various online platforms, have all contributed to the immense increase in sports betting. The increase in advertisements for betting platforms during live matches, especially football, is becoming more alarming. In most organized international events, media attention as well as sponsorship rights are now given to one or two betting platforms. There is a need for all stakeholders to put in place school-based intervention programs to reorient our youth about the consequences of addiction to betting. Such programs must include meta-analyses and emotional control towards sports betting.
Keywords: Betting platform, Nigerian fans, Nigerian youth, sports betting.
2754 A New Traffic Pattern Matching for DDoS Traceback Using Independent Component Analysis
Authors: Yuji Waizumi, Tohru Sato, Yoshiaki Nemoto
Abstract:
Recently, Denial of Service (DoS) attacks and Distributed DoS (DDoS) attacks, which are a stronger form of DoS attack launched from multiple hosts, have become security threats on the Internet. Identifying the attack source and blocking the attack traffic are important countermeasures against these attacks. In general, it is difficult to identify the source because information about it is falsified; therefore, a method of identifying the attack source by tracing the route of the attack traffic is necessary. A traceback method that uses traffic patterns, i.e., changes in the number of packets over time, as the criteria for attack traceback has been proposed. This method can trace the attack by matching the shapes of the input traffic patterns with the shape of the output traffic pattern observed at a network branch point such as a router. A traffic pattern is the shape of the traffic over time and is unfalsifiable information. However, the traceback methods proposed to date cannot achieve sufficient tracing accuracy, because they directly use traffic patterns that are influenced by non-attack traffic. In this paper, a new traffic pattern matching method using Independent Component Analysis (ICA) is proposed.
Keywords: Distributed Denial of Service, Independent Component Analysis, Traffic pattern
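The following sketch conveys the underlying idea on synthetic data: traffic observed at a branch point is treated as a linear mixture of independent flows and unmixed with FastICA; the actual traceback protocol and the traffic model of the paper are not reproduced, and the flows and mixing matrix below are invented.

```python
# Separate a bursty attack-like flow from smooth background traffic with ICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.arange(300)
attack = (t % 60 < 10).astype(float) * 5.0             # bursty attack-like packet counts
background = 2.0 + np.sin(2 * np.pi * t / 100)          # smooth non-attack traffic
sources = np.c_[attack, background]

mixing = np.array([[1.0, 0.6], [0.4, 1.0]])             # two observation points see both flows
observed = sources @ mixing.T + 0.05 * rng.standard_normal((t.size, 2))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)                 # estimated independent traffic patterns
# correlate recovered components with the known attack shape to find the attack-like one
corr = [abs(np.corrcoef(recovered[:, k], attack)[0, 1]) for k in range(2)]
print("component most similar to the attack pattern:", int(np.argmax(corr)))
```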
2753 Multidimensional Performance Tracking
Authors: C. Ardil
Abstract:
In this study, a model, together with a software tool that implements it, has been developed to determine the performance ratings of employees in an organization operating in the information technology sector, using indicators obtained from employees' online study data. The Weighted Sum (WS) method and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), both based on a multidimensional decision making approach, were used in the study. WS and TOPSIS are multidimensional decision making (MDDM) methods that allow all dimensions to be evaluated together with specific weights, allowing the problem of online performance tracking to be evaluated objectively. The application of the WS and TOPSIS mathematical methods, which can combine alternatives with a large number of dimensions and reach a simultaneous solution, has been implemented through online performance tracking software. In applying the WS and TOPSIS methods, objective dimension weights were calculated with the entropy information (EI) and standard deviation (SD) methods from the data obtained by the employees' online performance tracking, a decision matrix was formed from the performance scores of each employee, and a single performance score was calculated for each employee. Based on the calculated performance score, employees were given a performance evaluation decision. The results of the Pareto set evidence and comparative mathematical analysis confirm that the employees' performance preference rankings under the WS and TOPSIS methods are closely related. This suggests the compatibility, applicability, and validity of the proposed method for MDDM problems in which a large number of alternatives and dimension types are taken into account. This study demonstrates an objective, realistic, feasible and understandable mathematical method, together with a software tool that implements it. This is considered preferable because of the subjectivity, limitations and high cost of the methods traditionally used for measurement and performance appraisal in the information technology sector.
Keywords: Weighted sum, entropy information, standard deviation, online performance tracking, performance evaluation, performance management, multidimensional decision making.
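A compact sketch of the entropy-weighted TOPSIS scoring described above, run on an invented decision matrix; it assumes all dimensions are benefit-type and omits the WS method and the online data collection.

```python
# Entropy-weighted TOPSIS on a toy matrix (rows = employees, columns = performance dimensions).
import numpy as np

X = np.array([[80., 7., 120.],     # invented scores for three tracked dimensions
              [65., 9., 150.],
              [90., 6., 100.],
              [70., 8., 140.]])

# entropy-based objective weights
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
w = (1 - E) / (1 - E).sum()

# TOPSIS: normalize, weight, measure distances to the ideal and anti-ideal solutions
R = X / np.linalg.norm(X, axis=0)
V = R * w
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
score = d_minus / (d_plus + d_minus)      # closeness coefficient per employee
print("performance ranking (best first):", np.argsort(-score))
```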
2752 Semantically Enriched Web Usage Mining for Personalization
Authors: Suresh Shirgave, Prakash Kulkarni, José Borges
Abstract:
The continuous growth in the size of the World Wide Web has resulted in intricate Web sites, demanding enhanced user skills and more sophisticated tools to help the Web user to find the desired information. In order to make Web more user friendly, it is necessary to provide personalized services and recommendations to the Web user. For discovering interesting and frequent navigation patterns from Web server logs many Web usage mining techniques have been applied. The recommendation accuracy of usage based techniques can be improved by integrating Web site content and site structure in the personalization process.
Herein, we propose a semantically enriched Web Usage Mining method for Personalization (SWUMP), an extension of the solely usage-based technique. This approach combines the fields of Web Usage Mining and the Semantic Web. In the proposed method, we envisage enriching the undirected graph derived from usage data with rich semantic information extracted from the Web pages and the Web site structure. The experimental results show that SWUMP generates accurate recommendations and is able to achieve 10-20% better accuracy than the solely usage-based model. SWUMP also addresses the new-item problem inherent to solely usage-based techniques.
Keywords: Prediction, Recommendation, Semantic Web Usage Mining, Web Usage Mining.
2751 A Noble Flow Rate Control Based on Leaky Bucket Method for Multi-Media OBS Networks
Authors: Kentaro Miyoko, Yoshihiko Mori, Yugo Ikeda, Yoshihiro Nishino, Yong-Bok Choi, Hiromi Okada
Abstract:
Optical burst switching (OBS) has been proposed to realize the next-generation Internet based on wavelength division multiplexing (WDM) network technologies. In OBS, burst contention is one of the major problems. Deflection routing has been designed to resolve this problem; however, deflection routing finds it increasingly difficult to prevent burst contentions as the network load becomes high. In this paper, we introduce flow rate control methods to reduce burst contentions. We propose new flow rate control methods based on the leaky bucket algorithm and deflection routing, i.e., a separate leaky bucket deflection method and a dynamic leaky bucket deflection method. In the proposed methods, the edge nodes that generate data bursts carry out the flow rate control protocols. In order to verify the effectiveness of flow rate control in OBS networks, we show through computer simulations that the proposed methods improve the network utilization and reduce the burst loss probability.
Keywords: Optical burst switching, OBS, flow rate control.
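A minimal sketch of the leaky-bucket conformance test that such flow rate control builds on: an edge node decides whether a newly assembled burst conforms to the configured rate. The OBS signaling, deflection routing and the paper's specific separate/dynamic variants are not modeled, and the rate and capacity values are placeholders.

```python
# Leaky-bucket shaper: drain at a fixed rate, admit a burst only if the backlog fits.
class LeakyBucket:
    def __init__(self, rate_bytes_per_s, capacity_bytes):
        self.rate = rate_bytes_per_s      # steady drain rate of the bucket
        self.capacity = capacity_bytes    # maximum backlog tolerated
        self.level = 0.0
        self.last = 0.0

    def allow(self, burst_bytes, now):
        """Return True if the burst conforms to the configured rate at time `now` (seconds)."""
        self.level = max(0.0, self.level - (now - self.last) * self.rate)  # drain since last check
        self.last = now
        if self.level + burst_bytes <= self.capacity:
            self.level += burst_bytes
            return True                   # conforming burst: send on the primary route
        return False                      # non-conforming: delay or deflect

bucket = LeakyBucket(rate_bytes_per_s=1_000_000, capacity_bytes=500_000)
for t, size in [(0.0, 400_000), (0.1, 400_000), (0.6, 400_000)]:
    print(t, bucket.allow(size, t))       # True, False, True
```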
2750 Analysis of the Accuracy of Earth Movement with Drone Surveys
Authors: Raúl Pereda García, Julio Manuel de Luis Ruiz, Elena Castillo López, Rubén Pérez Álvarez, Felipe Piña García
Abstract:
New technologies for the capture of point clouds have advanced greatly in recent years. Their use has consequently spread in geomatics, providing measurement solutions that have become popular without, in many cases, a detailed study of their accuracy. This research focuses on the viability of topographic work with drones incorporating different sensors sensitive to the visible spectrum. The fundamentals have been applied to a road, located in Cantabria (Spain), where a platform extension and the reform of a riprap were being constructed. A total of six flights were made over two months, all of them using GPS as part of the photogrammetric process, and the results were contrasted with those measured with a total station. The results obtained show that the choice of camera and the planning of the flight have an important impact on the accuracy. In fact, representations with a level of detail corresponding to a 1/1000 scale are admissible, depending on the existing vegetation, with better results obtained in the area of the riprap. This set of techniques is therefore suitable for the control of earthworks in road construction, but with certain limitations which are exposed in this paper.
Keywords: Drone, earth movement control, global position system, surveying technology.
2749 Automatic Extraction of Roads from High Resolution Aerial and Satellite Images with Heavy Noise
Authors: Yan Li, Ronald Briggs
Abstract:
Aerial and satellite images are information rich. They are also complex to analyze. For GIS systems, many features require fast and reliable extraction of roads and intersections. In this paper, we study efficient and reliable automatic extraction algorithms to address some difficult issues that are commonly seen in high resolution aerial and satellite images but not well addressed in existing solutions, such as blurring, broken or missing road boundaries, lack of road profiles, heavy shadows, and interfering surrounding objects. The new scheme is based on a new method, namely the reference circle, to properly identify the pixels that belong to the same road and use this information to recover the whole road network. This feature is invariant to the shape and direction of roads and tolerates heavy noise and disturbances. Road extraction based on reference circles is much more noise tolerant and flexible than previous edge-detection based algorithms. The scheme is able to extract roads reliably from images with complex contents and heavy obstructions, such as the high resolution aerial/satellite images available from Google Maps.
Keywords: Automatic road extraction, Image processing, Feature extraction, GIS update, Remote sensing, Geo-referencing
2748 The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition
Authors: Fawaz S. Al-Anzi, Dia AbuZeina
Abstract:
Speech recognition makes an important contribution to promoting new technologies in human-computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before obtaining the desired output. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. The feature extraction process aims at approximating the linguistic content that is conveyed by the input speech signal. In the speech processing field, there are several methods to extract speech features; however, Mel Frequency Cepstral Coefficients (MFCC) is the most popular technique. It has long been observed that MFCC is dominantly used in well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice to identify the different speech segments in order to obtain the language phonemes for further training and decoding steps. Due to MFCC's good performance, previous studies show that MFCC dominates Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to obtain these coefficients using the HTK toolkit.
Keywords: Speech recognition, acoustic features, Mel Frequency Cepstral Coefficients.
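To make the intermediate steps concrete, here is a bare-bones MFCC pipeline (framing, power spectrum, mel filterbank, log, DCT) applied to a synthetic tone; HTK and Sphinx implement additional refinements (pre-emphasis, liftering, delta features) that are omitted here, and all frame and filterbank parameters are illustrative.

```python
# Minimal MFCC feature extraction with NumPy/SciPy.
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters, n_fft, sr):
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    inv = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    pts = inv(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge of triangle
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge of triangle
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512, n_filters=26, n_ceps=13):
    frames = np.array([signal[i:i + frame_len] * np.hamming(frame_len)
                       for i in range(0, len(signal) - frame_len, hop)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    energies = power @ mel_filterbank(n_filters, n_fft, sr).T
    return dct(np.log(energies + 1e-10), type=2, axis=1, norm="ortho")[:, :n_ceps]

tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)   # 1 s synthetic test signal
print(mfcc(tone).shape)    # (number_of_frames, 13)
```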
2747 Parkinson's Disease Classification Using Neural Network and Feature Selection
Authors: Anchana Khemphila, Veera Boonjing
Abstract:
In this study, a Multi-Layer Perceptron (MLP) with the back-propagation learning algorithm is used to classify and effectively diagnose Parkinson's disease (PD), a challenging problem for the medical community. Typically characterized by tremor, PD occurs due to the loss of dopamine in the brain's thalamic region, which results in involuntary or oscillatory movement of the body. A feature selection algorithm is used along with biomedical test values to diagnose Parkinson's disease. Clinical diagnosis is done mostly through the doctor's expertise and experience, but cases of wrong diagnosis and treatment are still reported. Patients are asked to take a number of tests for diagnosis, yet in many cases not all the tests contribute towards an effective diagnosis of the disease. Our work is to classify the presence of Parkinson's disease with a reduced number of attributes. Originally, 22 attributes are involved in the classification. We use information gain to determine the attributes to retain, which reduces the number of measurements that need to be taken from patients. Artificial neural networks are used to classify the diagnosis of patients. The twenty-two attributes are reduced to sixteen attributes. The accuracy on the training data set is 82.051% and on the validation data set is 83.333%.
Keywords: Data mining, classification, Parkinson disease, artificial neural networks, feature selection, information gain.
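A sketch of the two-stage idea, ranking attributes by an information-gain-style score and then training an MLP, shown on synthetic data of the same shape (22 attributes reduced to 16); it does not use the actual Parkinson's data set and is not expected to reproduce the reported accuracies.

```python
# Feature selection by mutual information followed by an MLP classifier (scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=195, n_features=22, n_informative=8, random_state=0)
gain = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(gain)[-16:]                      # keep the 16 most informative attributes
X_sel = X[:, keep]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print(f"training accuracy:   {clf.score(X_tr, y_tr):.3f}")
print(f"validation accuracy: {clf.score(X_te, y_te):.3f}")
```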
2746 Influence of Ambiguity Cluster on Quality Improvement in Image Compression
Authors: Safaa Al-Ali, Ahmad Shahin, Fadi Chakik
Abstract:
Image coding based on clustering provides immediate access to targeted features of interest in a high-quality decoded image. This approach is useful for intelligent devices, as well as for multimedia content-based description standards. The result of image clustering cannot be precise in some positions, especially for pixels with edge information, which produce ambiguity among the clusters. Even with a good enhancement operator based on a PDE, the quality of the decoded image will highly depend on the clustering process. In this paper, we introduce an ambiguity cluster in image coding to represent pixels with vagueness properties. The presence of such a cluster allows preserving some details inherent to edges as well as to uncertain pixels. It is also very useful during the decoding phase, in which an anisotropic diffusion operator, such as Perona-Malik, enhances the quality of the restored image. This work also offers a comparative study to demonstrate the effectiveness of a fuzzy clustering technique in detecting the ambiguity cluster without losing much of the essential image information. Several experiments have been carried out to demonstrate the usefulness of the ambiguity concept in image compression. The coding results and the performance of the proposed algorithms are discussed in terms of the peak signal-to-noise ratio and the quantity of ambiguous pixels.
Keywords: Ambiguity Cluster, Anisotropic Diffusion, Fuzzy Clustering, Image Compression.
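A toy sketch of how an ambiguity cluster might be flagged from fuzzy memberships, assigning pixels whose two largest memberships are close to a separate class; the membership margin and the example values are assumptions for illustration, not the paper's criterion.

```python
# Flag pixels with no clearly dominant fuzzy cluster as members of an "ambiguity cluster".
import numpy as np

def ambiguity_mask(memberships, margin=0.15):
    """memberships: (n_pixels, n_clusters) fuzzy memberships summing to 1 per pixel."""
    top2 = np.sort(memberships, axis=1)[:, -2:]
    return (top2[:, 1] - top2[:, 0]) < margin     # True where two clusters compete

U = np.array([[0.80, 0.15, 0.05],    # confident pixel
              [0.45, 0.40, 0.15],    # edge-like pixel: two clusters compete
              [0.34, 0.33, 0.33]])   # highly ambiguous pixel
print(ambiguity_mask(U))             # [False  True  True]
```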
2745 Networking the Biggest Challenge in Hybrid Cloud Deployment
Authors: Aishwarya Shekhar, Devesh Kumar Srivastava
Abstract:
Cloud computing has emerged as a promising direction for cost-efficient and reliable service delivery across data communication networks. The dynamic location of service facilities and the virtualization of hardware and software elements are stressing the communication networks and protocols, especially when data centres are interconnected through the internet. Although the computing aspects of cloud technologies have been largely investigated, less attention has been devoted to the networking services. Cloud computing has enabled elastic and transparent access to infrastructure services without involving IT operating overhead. Virtualization has been a key enabler for cloud computing. While resource virtualization and service abstraction have been widely investigated, networking in the cloud remains a difficult puzzle. Even though the network has a significant role in facilitating hybrid cloud scenarios, it had not received much attention in the research community until recently. We propose Network as a Service (NaaS), which forms the basis of unifying public and private clouds. In this paper, we identify various challenges in the adoption of hybrid cloud and discuss the design and implementation of a cloud platform.
Keywords: Cloud computing, networking, infrastructure, hybrid cloud, OpenStack, NaaS.
2744 Multilevel Activation Functions for True Color Image Segmentation Using a Self Supervised Parallel Self Organizing Neural Network (PSONN) Architecture: A Comparative Study
Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi
Abstract:
The paper describes a self supervised parallel self organizing neural network (PSONN) architecture for true color image segmentation. The proposed architecture is a parallel extension of the standard single self organizing neural network architecture (SONN) and comprises an input (source) layer of image information, three single self organizing neural network architectures for segmentation of the different primary color components in a color image scene and one final output (sink) layer for fusion of the segmented color component images. Responses to the different shades of color components are induced in each of the three single network architectures (meant for component level processing) by applying a multilevel version of the characteristic activation function, which maps the input color information into different shades of color components, thereby yielding a processed component color image segmented on the basis of the different shades of component colors. The number of target classes in the segmented image corresponds to the number of levels in the multilevel activation function. Since the multilevel version of the activation function exhibits several subnormal responses to the input color image scene information, the system errors of the three component network architectures are computed from some subnormal linear index of fuzziness of the component color image scenes at the individual level. Several multilevel activation functions are employed for segmentation of the input color image scene using the proposed network architecture. Results of the application of the multilevel activation functions to the PSONN architecture are reported on three real life true color images. The results are substantiated empirically with the correlation coefficients between the segmented images and the original images.
Keywords: Colour image segmentation, fuzzy set theory, multi-level activation functions, parallel self-organizing neural network.
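The sketch below shows one possible multilevel sigmoidal activation that saturates at several graded responses instead of only two, which is the kind of mapping the abstract describes for shades of a color component; the exact functional form and parameters used in the PSONN paper may differ.

```python
# A multilevel sigmoid: the unit interval is split into K levels and the response
# forms a staircase with K plateaus rather than a single 0/1 switch.
import numpy as np

def multilevel_sigmoid(x, levels=4, slope=30.0):
    """Map activations in [0, 1] to one of `levels` graded responses."""
    out = np.zeros_like(x)
    for k in range(1, levels):
        out += 1.0 / (1.0 + np.exp(-slope * (x - k / levels)))   # one sigmoid step per threshold
    return out / (levels - 1)          # normalized back to [0, 1]

x = np.linspace(0, 1, 11)
print(np.round(multilevel_sigmoid(x), 2))   # staircase-like response with 4 plateaus
```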
2743 Multi-VSS Scheme by Shifting Random Grids
Authors: Joy Jo-Yi Chang, Justie Su-Tzu Juan
Abstract:
Visual secret sharing (VSS) was proposed by Naor and Shamir in 1995. Visual secret sharing schemes encode a secret image into two or more share images, and a single share image reveals no information about the secret image. When the shares are superimposed, the secret can be restored by human vision. Traditional VSS has some problems, such as pixel expansion and the cost of its sophisticated construction, and it can encode only one secret image. Schemes for encrypting more secret images into two shares by random grids were proposed by Chen et al. in 2008. But when the restored secret images have much distortion, those schemes are severely limited in decoding; in other words, if there is too much distortion, not much information can be encrypted. So, if the distortion can be kept very small, more secret images can be encrypted. In this paper, four new algorithms based on the scheme of Chang et al. proposed in 2010 are presented. The first algorithm reduces the distortion to a very small level. The second algorithm distributes the distortion across the two restored secret images. The third algorithm achieves no distortion for special secret images. The fourth algorithm encrypts three secret images, which not only retains the advantages of VSS but also improves on the problems of decoding.
Keywords: Visual cryptography, visual secret sharing, random grids, multiple secret image sharing.
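For grounding, the following sketch encodes a single binary secret with the classic random-grid construction that multi-secret schemes of this kind extend: one share is random, the other copies or complements it per pixel, and stacking the shares reveals the secret. The toy secret image is invented.

```python
# Single-secret random-grid visual secret sharing.
import numpy as np

rng = np.random.default_rng(7)
secret = np.array([[0, 1, 1, 0],      # 1 = black pixel, 0 = white pixel
                   [1, 0, 0, 1],
                   [1, 1, 1, 1]])

R1 = rng.integers(0, 2, secret.shape)
R2 = np.where(secret == 0, R1, 1 - R1)    # equal where white, complementary where black
stacked = R1 | R2                          # superimposition: black wins

print(stacked)   # black secret pixels become fully black; white areas stay ~50% black noise
```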
2742 Palmprint Recognition by Wavelet Transform with Competitive Index and PCA
Authors: Deepti Tamrakar, Pritee Khanna
Abstract:
This manuscript presents palmprint recognition that combines different texture extraction approaches with high accuracy. The Region of Interest (ROI) is decomposed into different frequency-time sub-bands by the wavelet transform up to two levels, and only the approximation image at the second level is selected; this is known as the Approximate Image ROI (AIROI). The AIROI carries the information of the principal lines of the palm. The Competitive Index is used as the palmprint feature, in which six Gabor filters of different orientations are convolved with the palmprint image to extract orientation information. A winner-take-all strategy is used to select the dominant orientation for each pixel, which is known as the Competitive Index. Further, PCA is applied to select highly uncorrelated Competitive Index features, to reduce the dimensions of the feature vector, and to project the features onto the eigenspace. The similarity of two palmprints is measured by the Euclidean distance metric. The algorithm is tested on the Hong Kong PolyU palmprint database. AIROIs from different wavelet filter families are also tested with the Competitive Index and PCA. The AIROI of the db7 wavelet filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm database.
Keywords: DWT, EER, Euclidean Distance, Gabor filter, PCA, ROI.
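A rough sketch of computing a Competitive Index map with six Gabor orientations followed by PCA and Euclidean matching; the filter parameters, the winner-take-all rule used here (argmin of the real response, treating palm lines as dark), and the random stand-in ROIs are assumptions rather than the paper's settings, and the wavelet AIROI step is omitted.

```python
# Competitive-index features from Gabor responses, compressed with PCA.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def competitive_index(roi, n_orient=6):
    responses = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        kernel = cv2.getGaborKernel((17, 17), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(roi.astype(np.float32), cv2.CV_32F, kernel))
    return np.argmin(np.stack(responses, axis=-1), axis=-1)   # winner-take-all per pixel

rng = np.random.default_rng(0)
rois = [rng.random((64, 64)) for _ in range(10)]               # stand-ins for palm ROIs
features = np.array([competitive_index(r).ravel() for r in rois], dtype=float)
reduced = PCA(n_components=5).fit_transform(features)          # compact, decorrelated features
dist = np.linalg.norm(reduced[0] - reduced[1])                 # Euclidean matching score
print(reduced.shape, round(float(dist), 3))
```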
2741 System and Method for Providing Web-Based Remote Application Service
Authors: Shuen-Tai Wang, Yu-Ching Lin, Hsi-Ya Chang
Abstract:
With the development of virtualization technologies, a new type of service, the cloud computing service, has emerged. Cloud users usually face the problem of how to use the virtualized platform easily over the web without requiring the plug-in or installation of special software. The objective of this paper is to develop a system and a method enabling process interfacing within an automation scenario for accessing a remote application through the web browser. To meet this challenge, we have devised a web-based interface that allows the system to shift the GUI application from the traditional local environment to the cloud platform, where it is stored on a remote virtual machine. We designed the sketch of the web interface following the cloud virtualization concept, seeking to enable communication and collaboration among users. We describe the design requirements of the remote application technology and present implementation details of the web application and its associated components. We conclude that this effort has the potential to provide an elastic and resilient environment for several application services. Users no longer have to bear the burden of system maintenance, and the overall cost of software licenses and hardware is reduced. Moreover, this remote application service represents the next step towards the mobile workplace, as it lets users use the remote application virtually from anywhere.
Keywords: Virtualization technology, virtualized platform, web interface, remote application.
2740 Estimation of Vertical Handover Probability in an Integrated UMTS and WLAN Networks
Authors: Diganta Kumar Pathak, Manashjyoti Bhuyan, Vaskar Deka
Abstract:
Vertical handover (VHO) among different communication technologies, ensuring uninterrupted service continuity, is one of the most important performance parameters in a heterogeneous network environment. In an integrated Universal Mobile Telecommunication System (UMTS) and Wireless Local Area Network (WLAN), WLAN is given an inherent priority over UMTS because of its high data rates at low cost. Therefore, mobile users want to be associated with the WLAN for as much of the time as possible while roaming, to enjoy the best possible services at low cost, which encourages reducing the number of VHOs. In this work, the reduction of the number of VHOs with respect to a varying number of WLAN Access Points (APs) in an integrated UMTS and WLAN network is investigated through simulation, to provide the best possible cost-effective service to users. The simulation has been carried out for an area of (7800 × 9006) m², where the COST-231 Hata model and the 3GPP (TR 101 112 V 3.1.0) specified models are used as the WLAN and UMTS path loss models, respectively. The handover decision is triggered based on the received signal level compared to the fade margin. The fade margin gives a probabilistic measure of the reliability of the communication link. A relationship between the number of WLAN APs and the number of VHOs is also established in this work.
Keywords: VHO, UMTS, WLAN, MT, AP, BS.
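As a back-of-the-envelope illustration of the handover trigger described above, the sketch below evaluates a COST-231 Hata path-loss estimate and compares the received signal level against a sensitivity-plus-fade-margin threshold; the transmit power, antenna heights and thresholds are illustrative assumptions, not the paper's simulation parameters.

```python
# COST-231 Hata path loss (medium city / suburban form) feeding a simple VHO trigger.
import math

def cost231_hata(d_km, f_mhz=2000.0, h_base=30.0, h_mobile=1.5, metro=False):
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile - (1.56 * math.log10(f_mhz) - 0.8)
    c = 3.0 if metro else 0.0
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_base) - a_hm
            + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + c)

def trigger_vho(d_km, tx_power_dbm=43.0, sensitivity_dbm=-110.0, fade_margin_db=10.0):
    rx = tx_power_dbm - cost231_hata(d_km)
    return rx < sensitivity_dbm + fade_margin_db   # True -> signal too weak, look for a handover

for d in (0.5, 2.0, 5.0):
    print(f"{d:>4} km  RSS = {43.0 - cost231_hata(d):6.1f} dBm  handover: {trigger_vho(d)}")
```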