Search results for: asymmetric information
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3974

2744 Designing Pictogram for Food Portion Size

Authors: Y.C. Liu, S.J. Lu, Y.C. Weng, H. Su

Abstract:

The objective of this paper is to investigate a new approach to pictograms for food portion size. The approach adopts the model of the United States Pharmacopeia-Drug Information (USP-DI). The representation of each food portion size is composed of three parts: frame, connotation of the dietary portion size, and layout. To investigate users' comprehension of this approach, two experiments were conducted with 122 Taiwanese participants, 60 male and 62 female, aged between 16 and 64 (divided into age groups of 16-30, 31-45 and 46-64). In Experiment 1, the mean correct rate for understanding the food items is 48.54% (S.D. = 95.08) and the mean response time is 2.89 sec (S.D. = 2.14). The difference in correct rates between age groups is significant (p = 0.00 < 0.05). In Experiment 2, the correct rate of selecting the right life-size measurement aid is 65.02% (S.D. = 21.31). The results show the potential of the approach for certain food portion sizes. Issues raised for discussion include comprehension of numerous food varieties in an open environment, the choice between photographs and drawings, and the reasons for the different correct rates with the measurement aid. This research may also serve those interested in the systematic and pictorial representation of dietary portion size information.

Keywords: Comprehension, Food Portion Size, Model of Dietary Information, Pictogram Design, USP-DI.

2743 Effective Collaboration in Product Development via a Common Sharable Ontology

Authors: Sihem Mostefai, Abdelaziz Bouras, Mohamed Batouche

Abstract:

To achieve competitive advantage nowadays, most industrial companies consider that success rests on good product development, that is, managing the product throughout its entire lifetime, from design and manufacture through operation to destruction. Achieving this goal requires tight collaboration between partners from a wide variety of domains, resulting in various product data types and formats as well as different software tools. So far, the lack of a meaningful unified representation for product data semantics has slowed down efficient product development. This paper proposes an ontology-based approach to enable such semantic interoperability. A generic and extendible product ontology is described, gathering the main concepts pertaining to the mechanical field and the relations that hold among them. The ontology is not exhaustive; nevertheless, it shows that such a unified representation is possible and easily exploitable. This is illustrated through a case study with an example product and some semantic requests to which the ontology responds quite easily. The study proves the efficiency of ontologies as a support for product data exchange and information sharing, especially in product development environments where collaboration is not just a choice but a mandatory prerequisite.

Keywords: Information exchange, product lifecycle management, product ontology, semantic interoperability.

2742 Anomaly Detection Using a Neuro-Fuzzy System

Authors: Fatemeh Amiri, Caro Lucas, Nasser Yazdani

Abstract:

As network-based technologies become omnipresent, demands to secure networks and systems against threats increase. One of the effective ways to achieve higher security is through the use of intrusion detection systems (IDS), software tools that detect anomalous activity in a computer or network. In this paper, an IDS has been developed using an improved machine-learning-based algorithm, the Locally Linear Neuro-Fuzzy (LLNF) model, for classification, although this model was originally used for system identification. A key technical challenge in IDS and LLNF learning is the curse of high dimensionality. Therefore, a feature selection phase is proposed which is applicable to any IDS. By investigating the use of three feature selection algorithms in this model, it is shown that adding a feature selection phase reduces the computational complexity of the model. Feature selection algorithms require a feature goodness measure; the use of both a linear measure (the linear correlation coefficient) and a non-linear measure (mutual information) is investigated.
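A minimal sketch of the two feature goodness measures named above (linear correlation coefficient and mutual information), assuming NumPy and scikit-learn with a synthetic feature matrix; it illustrates only the feature scoring step, not the paper's LLNF classifier or its intrusion dataset.

```python
# Sketch (not the paper's code): scoring candidate IDS features with the two
# goodness measures named in the abstract. Feature values and labels are synthetic.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def linear_correlation_scores(X, y):
    """Absolute Pearson correlation between each feature column and the label."""
    y = y.astype(float)
    scores = []
    for j in range(X.shape[1]):
        x = X[:, j].astype(float)
        if x.std() == 0:                       # a constant feature carries no signal
            scores.append(0.0)
        else:
            scores.append(abs(np.corrcoef(x, y)[0, 1]))
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # stand-in for traffic features
y = (X[:, 0] + 0.5 * X[:, 3] ** 2 > 0.5).astype(int)   # synthetic attack label

lin = linear_correlation_scores(X, y)          # linear measure
mi = mutual_info_classif(X, y, random_state=0) # non-linear measure
top_k = np.argsort(mi)[::-1][:5]               # keep the 5 most informative features
print("top features by mutual information:", top_k)
```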

Keywords: Anomaly detection, feature selection, Locally Linear Neuro-Fuzzy (LLNF), mutual information (MI), linear correlation coefficient.

2741 Modeling Uncertainty in Multiple Criteria Decision Making Using the Technique for Order Preference by Similarity to Ideal Solution for the Selection of Stealth Combat Aircraft

Authors: C. Ardil

Abstract:

Uncertainty set theory is a generalization of fuzzy set theory and intuitionistic fuzzy set theory. It serves as an effective tool for dealing with inconsistent, imprecise, and vague information. The technique for order preference by similarity to ideal solution (TOPSIS) is a multiple-attribute method used to identify solutions from a finite set of alternatives; it simultaneously minimizes the distance from an ideal point and maximizes the distance from a nadir point. In this paper, an extension of the TOPSIS method for multiple attribute group decision-making (MAGDM) based on uncertainty sets is presented. In uncertainty decision analysis, decision-makers express information about attribute values and weights using uncertainty numbers in order to select the best stealth combat aircraft.
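A minimal sketch of the classical (crisp) TOPSIS ranking logic described above, assuming NumPy; the criteria, weights, and aircraft scores are invented for illustration, and the paper's extension to uncertainty sets and group decision-making is not reproduced here.

```python
# Classical TOPSIS on crisp numbers only: normalise, weight, then rank by
# relative closeness to the ideal point versus the nadir point.
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # 1. vector-normalise each criterion column, then weight it
    V = w * X / np.linalg.norm(X, axis=0)
    # 2. ideal (best) and nadir (worst) points per criterion
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    nadir = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    # 3. Euclidean distances and relative closeness to the ideal solution
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - nadir, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical aircraft scored on radar cross-section (cost criterion),
# range, payload and reliability (benefit criteria).
scores = topsis([[0.1, 1500, 8, 0.90],
                 [0.3, 1800, 10, 0.85],
                 [0.2, 1600, 9, 0.95]],
                weights=[0.4, 0.2, 0.2, 0.2],
                benefit_mask=[False, True, True, True])
print("closeness coefficients:", scores)   # higher = closer to the ideal aircraft
```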

Keywords: Uncertainty set, stealth combat aircraft selection, multiple criteria decision-making analysis, MCDM, uncertainty decision analysis, TOPSIS.

2740 Individual Differences and Paired Learning in Virtual Environments

Authors: Patricia M. Boechler, Heather M. Gautreau

Abstract:

In this research study, postsecondary students completed an information learning task in an avatar-based 3D virtual learning environment. Three factors were of interest in relation to learning: 1) the influence of collaborative vs. independent conditions, 2) the influence of the spatial arrangement of the virtual environment (linear, random and clustered), and 3) the relationship of individual differences such as spatial skill, general computer experience and video game experience to learning. Students completed pretest measures of prior computer experience and prior spatial skill. Following the pretest administration, students were instructed to move through the virtual environment and study all the material within 10 information stations. In the collaborative condition, students proceeded in randomly assigned pairs, while in the independent condition they proceeded alone. After this learning phase, all students individually completed a multiple choice test to determine information retention. The overall results indicated that students in pairs did not perform any better or worse than independent students. As for individual differences, only spatial ability predicted student performance; general computer experience and video game experience did not. Taking a closer look at the pairs and spatial ability, comparisons were made between pairs matched on high spatial ability, pairs matched on low spatial ability, and pairs mismatched on spatial ability. The results showed that both high-matched pairs and mismatched pairs outperformed low-matched pairs. That is, if a pair had even one individual with strong spatial ability, it performed better than pairs with only low spatial ability individuals. This suggests that, in virtual environments, which individuals are paired together matters for performance outcomes. The paper also includes a discussion of trends within the data that have implications for virtual environment education.

Keywords: Avatar-based, virtual environment, paired learning, individual differences.

2739 Using Linear Quadratic Gaussian Optimal Control for Lateral Motion of Aircraft

Authors: A. Maddi, A. Guessoum, D. Berkani

Abstract:

The purpose of this paper is to provide a practical example of the Linear Quadratic Gaussian (LQG) controller. The method includes a description and some discussion of the discrete Kalman state estimator. One aspect of its optimality is that the estimator incorporates all information that can be provided to it. It processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest, using knowledge of the system and measurement device dynamics, the statistical description of the system noises, measurement errors, and uncertainty in the dynamics models. Since the time of its introduction, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. For example, to determine the velocity or sideslip angle of an aircraft, one could use a Doppler radar, the velocity indications of an inertial navigation system, or the relative wind information in the air data system. Rather than ignore any of these outputs, a Kalman filter can be built to combine all of this data with knowledge of the various systems' dynamics to generate an overall best estimate of velocity and sideslip angle.
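An illustrative sketch of the discrete Kalman predict/update cycle that the LQG estimator relies on, in NumPy; the matrices, noise levels and measurements below are made-up placeholders, not the aircraft lateral-motion model of the paper.

```python
# One predict/update cycle of a discrete Kalman filter.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # predict with the system model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with the new measurement
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 2-state example (e.g. sideslip angle and its rate), one noisy sensor
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2)
R = np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)
for z in [0.12, 0.15, 0.11, 0.14]:           # measurements arriving over time
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print("estimated state:", x)
```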

Keywords: Aircraft motion, Kalman filter, LQG control, Lateral stability, State estimator.

2738 Optimal Path Planning under a Priori Information in Stochastic, Time-Varying Networks

Authors: Siliang Wang, Minghui Wang, Jun Hu

Abstract:

A novel path planning approach is presented for finding optimal paths in stochastic, time-varying networks given a priori traffic information. Most existing studies use dynamic programming to find optimal paths. However, those methods have been shown to be unable to obtain the globally optimal value; moreover, designing efficient algorithms is another challenge. This paper employs a decision-theoretic framework for defining the optimal path: for a given source S and destination D in an urban transit network, we seek an S-D path of lowest expected travel time, where link travel times are discrete random variables. To overcome the deficiencies of dynamic programming methods, such as the curse of dimensionality and violation of the optimality principle, an integer programming model is built to assign the discrete travel time variables to arcs. Pruning techniques are also applied to reduce the computational complexity of the algorithm. The final experiments show the feasibility of the novel approach.

Keywords: pruning method, stochastic, time-varying networks, optimal path planning.

2737 A Wavelet Based Object Watermarking System for Image and Video

Authors: Abdessamad Essaouabi, Ibnelhaj Elhassane

Abstract:

Efficient storage, transmission and use of video information are key requirements in many multimedia applications currently being addressed by MPEG-4. To fulfill these requirements, a new approach for representing video information, which relies on an object-based representation, has been adopted. Object-based watermarking schemes are therefore needed for copyright protection. This paper proposes a novel blind object watermarking scheme for images and video using the in-place lifting shape-adaptive discrete wavelet transform (SA-DWT). In order to make the watermark robust and transparent, it is embedded in the average of wavelet blocks using a visual model based on the human visual system; the n least significant bits (LSBs) of the wavelet coefficients are adjusted in concert with the average. Simulation results show that the proposed watermarking scheme is perceptually invisible and robust against many attacks such as lossy image/video compression (e.g. JPEG, JPEG2000 and MPEG-4), scaling, additive noise, filtering, etc.

Keywords: Watermark, visual model, robustness, in place lifting shape adaptive-discrete wavelet transform.

2736 An Exploratory Study Regarding the Effects of Auditor Switch, Auditee’s Industry, and Auditee’s Location on Audit Fees in Australia

Authors: Ashkan Mirzay Fashami

Abstract:

This study examines the effects of auditor switch, auditee's industry, and auditee's location on audit fees in Australia. It uses fee data for Australian Securities Exchange 500 companies, considering all industry classifications throughout the country from 2006 to 2016. The main findings show that auditor switch does not affect audit fees. However, the auditee's industry does affect audit fees; this effect occurs in the information technology, financials, energy, and materials sectors among the top 500 companies. The financials, energy, and materials sectors face a fee rise, whereas information technology sees a fee cut. The extent of the fee change differs among industries, with the financial sector showing the highest increase. Further, the auditee's location affects audit fees: top 500 companies in Hobart, Perth, and Brisbane face a fee reduction, with the largest cut in Hobart. Further analysis suggests that the Australian audit market is becoming increasingly concentrated in the hands of the Big Four audit firms.

Keywords: Audit fee, auditor switch, Australia, industry, location.

2735 An Information Theoretic Approach to Rescoring Peptides Produced by De Novo Peptide Sequencing

Authors: John R. Rose, James P. Cleveland, Alvin Fox

Abstract:

Tandem mass spectrometry (MS/MS) is the engine driving high-throughput protein identification. Protein mixtures, possibly representing thousands of proteins from multiple species, are treated with proteolytic enzymes, cutting the proteins into smaller peptides that are then analyzed to generate MS/MS spectra. The task of determining the identity of the peptide from its spectrum is currently the weak point in the process. Current approaches to de novo sequencing are able to compute candidate peptides efficiently; the problem lies in the limitations of current scoring functions. In this paper we introduce the concept of the proteome signature. By examining proteins and compiling proteome signatures (amino acid usage), it is possible to characterize likely combinations of amino acids and better distinguish between candidate peptides. Our results strongly support the hypothesis that a scoring function that considers amino acid usage patterns is better able to distinguish between candidate peptides. This in turn leads to higher accuracy in peptide prediction.
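A small sketch of the "proteome signature" idea described above: amino acid usage frequencies compiled from a set of protein sequences and then used to score candidate peptides. The sequences and the log-likelihood score are illustrative assumptions, not the paper's actual rescoring function.

```python
# Compile an amino-acid usage signature and rank candidate peptides by how well
# their composition matches it. Toy sequences, not real proteome data.
from collections import Counter
import math

def proteome_signature(proteins):
    counts = Counter("".join(proteins))
    total = sum(counts.values())
    return {aa: c / total for aa, c in counts.items()}

def usage_score(peptide, signature, floor=1e-6):
    # higher score = peptide composition better matches the proteome's usage
    return sum(math.log(signature.get(aa, floor)) for aa in peptide)

sig = proteome_signature(["MKLVVLACA", "GGKKTALSV", "MMACDEF"])
for candidate in ["MKLAC", "WWWWW"]:
    print(candidate, round(usage_score(candidate, sig), 2))
```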

Keywords: Tandem mass spectrometry, proteomics, scoring, peptide, de novo, mutual information

2734 Automatic Building of an Extensive Arabic FA Terms Dictionary

Authors: El-Sayed Atlam, Masao Fuketa, Kazuhiro Morita, Jun-ichi Aoe

Abstract:

Field Association (FA) terms are a limited set of discriminating terms that identify document fields and are effective in document classification, similar file retrieval and passage retrieval. The problem lies in the lack of an effective method to automatically extract relevant Arabic FA terms to build a comprehensive dictionary. Moreover, all previous studies are based on FA terms in English and Japanese, and extending FA terms to other languages such as Arabic would strengthen further research. This paper presents a new method to extract Arabic FA terms from domain-specific corpora using part-of-speech (POS) pattern rules and corpora comparison. Experimental evaluation is carried out for 14 different fields using 251 MB of domain-specific corpora obtained from Arabic Wikipedia dumps and Alhyah news, selecting an average of 2,825 FA terms (single and compound) per field. From the experimental results, recall and precision are 84% and 79%, respectively. The method therefore selects a higher number of relevant Arabic FA terms at high precision and recall.

Keywords: Arabic Field Association Terms, information extraction, document classification, information retrieval.

2733 Portable Virtual Piano Design

Authors: Yu-Xiang Zhao, Chien-Hsing Chou, Mu-Chun Su, Yi-Zeng Hsieh

Abstract:

The purpose of this study is to design a portable virtual piano. By utilizing optical fiber gloves and the virtual piano software designed by this study, the user can play the piano anywhere at any time. This virtual piano consists of three major parts: finger tapping identification, hand movement and positioning identification, and MIDI software sound effect simulation. To play the virtual piano, the user wears optical fiber gloves and simulates piano key tapping motions. The finger bending information detected by the optical fiber gloves can tell when piano key tapping motions are made. Images captured by a video camera are analyzed, hand locations and moving directions are positioned, and the corresponding scales are found. The system integrates finger tapping identification with information about hand placement in relation to corresponding piano key positions, and generates MIDI piano sound effects based on this data. This experiment shows that the proposed method achieves an accuracy rate of 95% for determining when a piano key is tapped.

Keywords: virtual piano, portable, identification, optical fiber gloves.

2732 Analysis of Meteorological Drought Using Standardized Precipitation Index – A Case Study of Puruliya District, West Bengal, India

Authors: Moumita Palchaudhuri, Sujata Biswas

Abstract:

Drought is universally acknowledged as a phenomenon associated with scarcity of water. The Standardized Precipitation Index (SPI) expresses the actual rainfall as a standardized departure from the rainfall probability distribution function. In this study, the severity and spatial pattern of meteorological drought were analyzed in the Puruliya District, West Bengal, India using the multi-temporal SPI. Daily gridded data for the period 1971-2005 from 4 rainfall stations surrounding the study area were collected from IMD, Pune, and used in the analysis. A Geographic Information System (GIS) was used to generate drought severity maps for the different time scales and months of the year. Temporal SPI graphs show that the maximum SPI value (extreme drought) occurs at station 3 in 1993. Mild and moderate droughts occur in the central portion of the study area. Severe and extreme droughts were mostly found in the northeast, northwest and southwest parts of the region.
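A simplified SPI sketch in Python, assuming SciPy: accumulated rainfall is fitted with a gamma distribution and the cumulative probabilities are mapped onto the standard normal. Real SPI computation treats zero-rainfall periods separately, and the rainfall series below is synthetic, not the IMD gridded data used in the study.

```python
# Standardized Precipitation Index, simplified: gamma fit -> CDF -> standard normal.
import numpy as np
from scipy import stats

def spi(precip):
    precip = np.asarray(precip, dtype=float)
    shape, loc, scale = stats.gamma.fit(precip, floc=0)   # fit gamma, location fixed at 0
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)                            # standardized departures

rain = np.random.default_rng(1).gamma(shape=2.0, scale=40.0, size=35)  # 35 synthetic totals
print(np.round(spi(rain), 2))   # values near or below -2 indicate extreme drought
```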

Keywords: Standardized Precipitation Index, Meteorological Drought, Geographical Information System, Drought severity.

2731 Spatial Pattern and GIS-Based Model for Risk Assessment – A Case Study of Dusit District, Bangkok

Authors: Morakot Worachairungreung

Abstract:

The objectives of the research are to study the patterns of fire location distribution and to develop techniques for applying a Geographic Information System to fire risk assessment for fire planning and management. Fire risk assessment was based on two groups of factors: vulnerability factors, such as building material types, building height and building density, and mitigation capacity factors, such as accessibility by road, distance to the fire station and distance to hydrants. Factor ratings were obtained from four groups of stakeholders, including firemen, city planners, local government officers and local residents. The factors obtained from all stakeholders were converted into GIS raster data and then superimposed in order to prepare a fire risk map of the area showing the level of fire risk, ranging from high to low. The level of fire risk was obtained as the weighted mean of the factors based on the stakeholders, with the weight for each factor obtained by Analytical Hierarchy Analysis.
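A minimal sketch of the weighted-overlay step described above, assuming NumPy: each factor is a normalised raster and the stakeholder-derived weights combine them into a single risk surface. The 3x3 grids, weights and class breaks are invented for illustration only.

```python
# Weighted overlay of normalised factor rasters into a fire-risk surface.
import numpy as np

building_density = np.array([[0.9, 0.7, 0.2],
                             [0.8, 0.5, 0.1],
                             [0.6, 0.3, 0.1]])
distance_to_station = np.array([[0.8, 0.6, 0.4],
                                [0.5, 0.4, 0.3],
                                [0.3, 0.2, 0.1]])   # already rescaled to 0-1 risk
road_access = np.array([[0.7, 0.5, 0.2],
                        [0.6, 0.4, 0.2],
                        [0.4, 0.3, 0.1]])

# illustrative weights standing in for the AHP-derived ones
weights = {"building_density": 0.5, "distance_to_station": 0.3, "road_access": 0.2}
risk = (weights["building_density"] * building_density
        + weights["distance_to_station"] * distance_to_station
        + weights["road_access"] * road_access)

levels = np.digitize(risk, bins=[0.33, 0.66])   # classify for the risk map
print(levels)                                   # 0 = low, 1 = medium, 2 = high fire risk
```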

Keywords: Fire Risk Assessment, Geographic Information System: GIS, Raster Analysis and Analytical Hierarchy Analysis.

2730 Automatic Change Detection for High-Resolution Satellite Images of Urban and Suburban Areas

Authors: Antigoni Panagiotopoulou, Lemonia Ragia

Abstract:

High-resolution satellite images can provide detailed information for change detection on the Earth's surface. In the present work, QuickBird images with a spatial resolution of 60 cm/pixel and WorldView images with a resolution of 30 cm/pixel are utilized to perform automatic change detection in urban and suburban areas of Crete, Greece. There is a relative time difference of 13 years between the satellite images. Multiindex scene representation is applied to the images to classify the scene into buildings, vegetation, water and ground. Automatic change detection is then made possible by pixel-per-pixel comparison of the classified multi-temporal images. The vegetation index and the water index developed in this study prove effective. Furthermore, the proposed change detection approach not only indicates whether changes have taken place but also provides specific information about the types of changes. Experiments with other scenes in the future could help optimize the proposed spectral indices as well as the entire change detection methodology.
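A small sketch of the pixel-per-pixel comparison step described above, assuming NumPy; the tiny class rasters stand in for the classified QuickBird/WorldView scenes, and the class labels are illustrative.

```python
# Pixel-per-pixel comparison of two classified scenes, reporting the type of change.
import numpy as np

CLASSES = {0: "ground", 1: "vegetation", 2: "water", 3: "building"}

scene_t1 = np.array([[1, 1, 0],
                     [0, 2, 0],
                     [0, 0, 0]])
scene_t2 = np.array([[1, 3, 3],
                     [0, 2, 3],
                     [0, 3, 0]])

changed = scene_t1 != scene_t2
for i, j in zip(*np.nonzero(changed)):
    # report not just that a pixel changed, but from which class to which
    print(f"pixel ({i},{j}): {CLASSES[int(scene_t1[i, j])]} -> {CLASSES[int(scene_t2[i, j])]}")
```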

Keywords: Change detection, multiindex scene representation, spectral index, QuickBird, WorldView.

2729 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be the best way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a multimedia data fusion approach for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. Two types of data enter the fusion. The first is features extracted from text using the bag-of-words method, weighted by term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments indicate that, compared to approaches using an individual data source, the proposed data fusion approach increases the prediction accuracy for event detection. The experimental results show that the proposed method achieved a high accuracy of 0.97, compared with 0.93 for text only and 0.86 for images only.
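A minimal sketch of Dempster's rule of combination for two sources (text evidence and image evidence), in plain Python; the mass values are illustrative placeholders, not the ones learned in the paper's experiments.

```python
# Dempster's rule of combination over frozenset focal elements.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions; keys are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    # normalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

EVENT, NO_EVENT = frozenset({"event"}), frozenset({"no_event"})
EITHER = EVENT | NO_EVENT                                  # ignorance
text_mass = {EVENT: 0.7, NO_EVENT: 0.1, EITHER: 0.2}       # from the TF-IDF text classifier
image_mass = {EVENT: 0.6, NO_EVENT: 0.2, EITHER: 0.2}      # from the SIFT-based classifier
print(combine(text_mass, image_mass))                      # fused belief masses
```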

Keywords: Data fusion, Dempster-Shafer theory, data mining, event detection.

2728 A Differential Calculus Based Image Steganography with Crossover

Authors: Srilekha Mukherjee, Subha Ash, Goutam Sanyal

Abstract:

Information security plays a major role in raising the standard of secure communication via global media. In this paper, we suggest a technique of encryption followed by insertion before transmission. We implement two different concepts to carry out these tasks. A two-point crossover technique from the genetic algorithm is used to facilitate the encryption process. For each of the uniquely identified rows of pixels, different mathematical methodologies are applied to check several conditions in order to identify all the parent pixels on which the crossover operation is performed. This is done by selecting two crossover points within the pixels, thereby producing the newly encrypted child pixels and hence the encrypted cover image. In the next phase, the first- and second-order derivative operators are evaluated to increase security and robustness. The last phase reapplies the crossover procedure to form the final stego-image. The complexity of the system as a whole is high, thereby deterring third-party interference. The embedding capacity is also very high, so a larger amount of secret image information can be hidden. The imperceptibility of the obtained stego-image clearly demonstrates the proficiency of this approach.
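A small sketch of the two-point crossover operation applied to pairs of "parent" pixel values, assuming NumPy; the choice of crossover points and parent rows is illustrative and does not reproduce the paper's condition-checking rules for selecting parents.

```python
# Two-point crossover on 8-bit pixel values: swap the bit segment between p1 and p2.
import numpy as np

def two_point_crossover(parent_a, parent_b, p1, p2):
    """Swap bits in positions [p1, p2) between two 8-bit pixel values."""
    mask = 0
    for bit in range(p1, p2):
        mask |= 1 << bit
    child_a = (parent_a & ~mask) | (parent_b & mask)
    child_b = (parent_b & ~mask) | (parent_a & mask)
    return child_a, child_b

row_a = np.array([120, 200, 35], dtype=np.uint8)   # one "parent" pixel row
row_b = np.array([130, 180, 60], dtype=np.uint8)   # the other parent row
children = [two_point_crossover(int(a), int(b), 2, 5) for a, b in zip(row_a, row_b)]
print(children)   # encrypted child pixels produced from each parent pair
```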

Keywords: Steganography, Crossover, Differential Calculus, Peak Signal to Noise Ratio, Cross-correlation Coefficient.

2727 Named Entity Recognition using Support Vector Machine: A Language Independent Approach

Authors: Asif Ekbal, Sivaji Bandyopadhyay

Abstract:

Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports on the development of an NER system for Bengali and Hindi using a Support Vector Machine (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with a variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi tagged with the twelve different NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm to generate lexical context patterns from a part of the unlabeled Bengali news corpus; these lexical patterns have been used as features of the SVM in order to improve system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation results have demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. Results show an improvement in the f-score of 5.13% with the use of context patterns. A statistical analysis (ANOVA) is also performed to compare the performance of the proposed NER system with that of the existing HMM-based system for both languages.

Keywords: Named Entity (NE), Named Entity Recognition (NER), Support Vector Machine (SVM), Bengali, Hindi.

2726 Predicting Protein-Protein Interactions from Protein Sequences Using Phylogenetic Profiles

Authors: Omer Nebil Yaveroglu, Tolga Can

Abstract:

In this study, a high-accuracy protein-protein interaction prediction method is developed. The importance of the proposed method is that it uses only the sequence information of proteins when predicting interactions. The method extracts phylogenetic profiles of proteins from their sequence information. By combining the phylogenetic profiles of two proteins, based on the existence of homologs in different species, and fitting this combined profile to a statistical model, it is possible to make predictions about the interaction status of the two proteins. For this purpose, we apply a collection of pattern recognition techniques to the dataset of combined phylogenetic profiles of protein pairs. Support Vector Machines, feature extraction using ReliefF, Naive Bayes classification, K-Nearest Neighbor classification, Decision Trees, and Random Forest classification are the methods we applied to find the classification method that best predicts the interaction status of protein pairs. Random Forest classification outperformed all other methods with a prediction accuracy of 76.93%.
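A small sketch of the final classification step described above, assuming scikit-learn: a combined phylogenetic profile (presence or absence of homologs across species for both proteins of a pair) is fed to a Random Forest. The profiles and interaction labels below are randomly generated placeholders, so the reported accuracy is meaningless here.

```python
# Random Forest on combined phylogenetic profiles (toy, randomly generated data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_pairs, n_species = 400, 30
profile_a = rng.integers(0, 2, size=(n_pairs, n_species))   # homolog present in species?
profile_b = rng.integers(0, 2, size=(n_pairs, n_species))
X = np.hstack([profile_a, profile_b])                        # combined profile per protein pair
y = rng.integers(0, 2, size=n_pairs)                         # 1 = interacting pair (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))   # ~0.5 here because labels are random
```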

Keywords: Protein Interaction Prediction, Phylogenetic Profile, SVM, ReliefF, Decision Trees, Random Forest Classification.

2725 Ethics in the Technology Driven Enterprise

Authors: Bobbie Green, James A. Nelson

Abstract:

Innovations in technology have created new ethical challenges. Essential use of electronic communication in the workplace has escalated at an astronomical rate over the past decade. As such, legal and ethical dilemmas confronted by both the employer and the employee concerning managerial control and ownership of e-information have increased dramatically in the USA. From the employer's perspective, ownership and control of all information created for the workplace is an undeniable source of economic advantage and must be monitored zealously. From the perspective of the employee, individual rights, such as privacy, freedom of speech, and freedom from unreasonable search and seizure, continue to be stalwart legal guarantees that employers are not legally or ethically entitled to abridge in the workplace. These issues have been the source of great debate and the catalyst for legal reform. The fine line between the ethical and the legal has been complicated by emerging technologies. This manuscript identifies and discusses a number of specific legal and ethical issues raised by the dynamic electronic workplace and concludes with suggestions that employers should follow to respect the delicate balance between employees' legal rights to privacy and the employer's right to protect its knowledge systems and infrastructure.

Keywords: Information, ethics, legal, privacy

2724 Investigating the UAE Residential Valuation System: A Framework for Analysis

Authors: Simon Huston, Ebraheim Lahbash, Ali Parsa

Abstract:

The development of the United Arab Emirates (UAE) into a regional trade, tourism, finance and logistics hub has transformed its real estate markets. However, speculative activity and price volatility remain concerns. UAE residential market values (MV) are exposed to fluctuations in capital flows and migration which, in turn, are affected by geopolitical uncertainty, oil price volatility and global investment market sentiment. Internally, a complex interplay between administrative boundaries, land tenure, building quality and evolving location characteristics fragments UAE residential property markets. In short, the UAE Residential Valuation System (UAE-RVS) confronts multiple challenges in collecting, filtering and analyzing relevant information in complex and dynamic spatial and capital markets. A robust RVS can mitigate the risk of unhelpful volatility, speculative excess or investment mistakes. The research outlines the institutional, ontological, dynamic and epistemological issues at play. We highlight the importance of system capabilities, valuation standard salience and stakeholder trust.

Keywords: Valuation, property rights, information, institutions, trust, salience.

2723 Student Perceptions of Defense Acquisition University Courses: An Explanatory Data Collection Approach

Authors: Melissa C. LaDuke

Abstract:

The overarching purpose of this study was to determine the relationship between the current format of online delivery for Defense Acquisition University (DAU) courses and Air Force Acquisition (AFA) personnel participation. AFA personnel (hereafter "students") were of particular interest, as they have been mandated to take anywhere from 3 to 30 online courses to earn various DAU specialization certifications. Participants in this qualitative case study were AFA personnel who pursued DAU certifications in science and technology management, program/contract management, and other related fields. Air Force personnel were interviewed about their experiences with online courses. The data gathered were analyzed and grouped into 12 major themes. The themes tied into the theoretical framework and addressed either teacher-centered or student-centered educational practices within DAU. Based on the results of the data analysis, various factors contributed to student perceptions of DAU courses, including the online course construct and relevance to their jobs. The analysis also found that students want to learn the information presented but would like to be able to apply it in meaningful ways.

Keywords: Educational theory, computer-based training, interview, student perceptions, online course design, teacher positionality.

2722 The Impact of Semantic Web on E-Commerce

Authors: Karim Heidari

Abstract:

Semantic web technologies enable machines to interpret data published on the web in a machine-interpretable form. At present, only human beings are able to understand product information published online. The emerging semantic web technologies have the potential to deeply influence the further development of the Internet economy. In this paper, we propose a scenario-based research approach to predict the effects of these new technologies on electronic markets and on the business models of traders, intermediaries and customers. Over 300 million searches are conducted every day on the Internet by people trying to find what they need. A majority of these searches are in the domain of consumer e-commerce, where a web user is looking for something to buy. This represents a huge cost in terms of person hours and an enormous drain on resources. Agent-enabled semantic search will have a dramatic impact on the precision of these searches. It will reduce, and possibly eliminate, information asymmetry, where a better-informed buyer gets the best value. By impacting this key determinant of market prices, the semantic web will foster the evolution of different business and economic models. We submit that there is a need to develop these futuristic models based on our current understanding of e-commerce models and nascent semantic web technologies. We believe these business models will encourage mainstream web developers and businesses to join the "semantic web revolution".

Keywords: E-Commerce, E-Business, Semantic Web, XML.

2721 Secure Power Systems Against Malicious Cyber-Physical Data Attacks: Protection and Identification

Authors: Morteza Talebi, Jianan Wang, Zhihua Qu

Abstract:

The security of power systems against malicious cyber-physical data attacks has become an important issue. The adversary attempts to manipulate the information structure of the power system and inject malicious data to make state variables deviate while evading existing detection techniques based on the residual test. The solutions proposed in the literature are capable of immunizing the power system against false data injection, but they may be too costly and physically impractical in an expansive distribution network. To this end, we define an algebraic condition under which a trustworthy power system evades malicious data injection. The proposed protection scheme secures the power system by deterministically reconfiguring the information structure and the corresponding residual test. More importantly, it does not require any physical effort at either the microgrid or the network level. An identification scheme for finding the meters under attack is proposed as well. Finally, the well-known IEEE 30-bus system is adopted to demonstrate the effectiveness of the proposed schemes.

Keywords: Algebraic Criterion, Malicious Cyber-Physical Data Injection, Protection and Identification, Trustworthy Power System.

2720 Array Signal Processing: DOA Estimation for Missing Sensors

Authors: Lalita Gupta, R. P. Singh

Abstract:

Array signal processing involves signal enumeration and source localization. It is centered on the ability to fuse the temporal and spatial information captured by sampling signals, emitted from a number of sources, at the sensors of an array, in order to carry out a specific estimation task: estimation of source characteristics (mainly the localization of the sources) and/or array characteristics (mainly the array geometry). Array signal processing thus uses sensors organized in patterns, or arrays, to detect signals and to determine information about them. Beamforming is a general signal processing technique used to control the directionality of the reception or transmission of a signal; using beamforming, we can direct the majority of the signal energy received by a group of array elements. Multiple signal classification (MUSIC) is a highly popular eigenstructure-based method for estimating the direction of arrival (DOA) with high resolution. This paper examines the effect of missing sensors on DOA estimation. The accuracy of MUSIC-based DOA estimation is degraded significantly both by missing sensors among the receiving array elements and by the unequal channel gains and phase errors of the receiver.
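A compact sketch of standard MUSIC for a uniform linear array (ULA), assuming NumPy; the array size, element spacing, source angles and noise level are made up, and the paper's missing-sensor study is not reproduced (a missing sensor could be emulated by zeroing a row of the snapshot matrix).

```python
# MUSIC pseudospectrum for a ULA with two narrowband sources.
import numpy as np

def steering(angles_deg, n_sensors, d=0.5):        # d in wavelengths
    angles = np.deg2rad(angles_deg)
    n = np.arange(n_sensors)[:, None]
    return np.exp(-2j * np.pi * d * n * np.sin(angles))

rng = np.random.default_rng(0)
M, snapshots, true_doas = 8, 200, np.array([-20.0, 35.0])
A = steering(true_doas, M)
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + noise                                   # array snapshots

R = X @ X.conj().T / snapshots                      # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)                # ascending eigenvalues
En = eigvecs[:, : M - len(true_doas)]               # noise subspace

grid = np.linspace(-90, 90, 721)
proj = En.conj().T @ steering(grid, M)              # projection onto the noise subspace
p_music = 1.0 / np.sum(np.abs(proj) ** 2, axis=0)   # pseudospectrum peaks at the DOAs

# pick the two largest local maxima of the pseudospectrum
is_peak = (p_music[1:-1] > p_music[:-2]) & (p_music[1:-1] > p_music[2:])
peak_idx = np.where(is_peak)[0] + 1
top2 = peak_idx[np.argsort(p_music[peak_idx])[-2:]]
print("estimated DOAs (deg):", np.sort(grid[top2]))
```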

Keywords: Array Signal Processing, Beamforming, ULA, Direction of Arrival, MUSIC

2719 Distributed Multi-Agent Based Approach on an Intelligent Transportation Network

Authors: Xiao Yihong, Yu Kexin, Burra Venkata Durga Kumar

Abstract:

With the accelerating process of urbanization, the problem of urban road congestion is becoming more and more serious. Intelligent transportation systems combining distributed systems and artificial intelligence have become a research hotspot. As the core development direction of intelligent transportation systems, the Cooperative Intelligent Transportation System (C-ITS) integrates advanced information technology and communication methods and realizes the integration of humans, vehicles, roadside infrastructure and other elements through a multi-agent distributed system. By analyzing the system architecture and technical characteristics of C-ITS, the paper proposes a distributed multi-agent C-ITS consisting of a Roadside Subsystem, a Vehicle Subsystem and a Personal Subsystem. We also explore the scalability of the C-ITS and propose incorporating local rewards into the centralized-training, decentralized-execution paradigm as a scalable value decomposition method. In addition, we suggest introducing blockchain to improve the security of the traffic information transmission process. The system is expected to improve vehicle capacity and traffic safety.

Keywords: Distributed system, artificial intelligence, multi-agent, Cooperative Intelligent Transportation System.

2718 Accurate Time Domain Method for Simulation of Microstructured Electromagnetic and Photonic Structures

Authors: Vijay Janyani, Trevor M. Benson, Ana Vukovic

Abstract:

A time-domain numerical model within the framework of transmission line modeling (TLM) is developed to simulate electromagnetic pulse propagation inside multiple microcavities forming photonic crystal (PhC) structures. The model developed is quite general and is capable of simulating complex electromagnetic problems accurately. The field quantities can be mapped onto a passive electrical circuit equivalent, which ensures that TLM is provably stable and conservative at a local level. Furthermore, the circuit representation allows a high level of hybridization of TLM with other techniques and with lumped circuit models of components and devices. A photonic crystal structure formed by rods (or blocks) of high-permittivity dielectric material embedded in a low-dielectric background medium is simulated as an example. The model gives vital spatio-temporal information about the signal and also gives spectral information over a wide frequency range in a single run. The model has wide applications in microwave communication systems, optical waveguides and electromagnetic materials simulations.

Keywords: Computational Electromagnetics, Numerical Simulation, Transmission Line Modeling.

2717 Cloud Computing for E-Learning with More Emphasis on Security Issues

Authors: Sajjad Hashemi, Seyyed Yasser Hashemi

Abstract:

In today's world, the success of most systems depends on the use of new technologies and information technology (IT), which aim to increase efficiency and user satisfaction. One of the most important systems that use information technology to deliver services is the education system. For educational services in the form of E-learning systems, however, high-quality hardware and software equipment is required, which demands substantial investment. Because the vast majority of educational establishments cannot invest in this area, the best way for them to reduce costs while providing E-learning services is to use cloud computing. Given the novelty of cloud technology, however, it can create challenges and concerns, the most noted among them being security issues. Security concerns about cloud-based E-learning products are critical, and security measures are essential to protect users' valuable data from security vulnerabilities in these products. Thus, these products succeed only when customers' security requirements are met and security threats can be overcome. This paper explores cloud computing and its positive impact on E-learning, with the main focus on identifying the security issues related to cloud-based E-learning, improving security, and providing solutions to the associated management challenges.

Keywords: Cloud computing, E-Learning, Security.

2716 Control Chart Pattern Recognition Using Wavelet Based Neural Networks

Authors: Jun Seok Kim, Cheong-Sool Park, Jun-Geol Baek, Sung-Shick Kim

Abstract:

Control chart pattern recognition is one of the most important tools for identifying the process state in statistical process control. An abnormal process state can be classified by recognizing unnatural patterns that arise from assignable causes. In this study, a wavelet-based neural network approach is proposed for the recognition of control chart patterns with various characteristics. The procedure of the proposed control chart pattern recognizer comprises three stages. First, multi-resolution wavelet analysis is used to generate time-shape and time-frequency coefficients that carry detailed information about the patterns. Second, distance-based features are extracted by a bi-directional Kohonen network to produce reduced and robust information. Third, a back-propagation network classifier is trained on these features. The accuracy of the proposed method is shown by a performance evaluation with numerical results.
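A minimal sketch of the first stage only, assuming the PyWavelets package: multi-resolution wavelet coefficients extracted from a control chart window as pattern features. The window, wavelet ('db2') and decomposition depth are illustrative choices; the bi-directional Kohonen and back-propagation stages are not shown.

```python
# Multi-resolution wavelet decomposition of a control-chart window as raw features.
import numpy as np
import pywt

rng = np.random.default_rng(0)
window = rng.normal(size=64) + 0.05 * np.arange(64)   # synthetic "trend" pattern

coeffs = pywt.wavedec(window, "db2", level=3)          # [cA3, cD3, cD2, cD1]
features = np.concatenate(coeffs)                      # time-shape / time-frequency detail
print(len(features), "wavelet features for the downstream classifier")
```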

Keywords: Control chart pattern recognition, Multi-resolution wavelet analysis, Bi-directional Kohonen network, Back-propagation network, Feature extraction.

2715 Leveraging Quality Metrics in Voting Model Based Thread Retrieval

Authors: Atefeh Heydari, Mohammadali Tavakoli, Zuriati Ismail, Naomie Salim

Abstract:

Seeking and sharing knowledge on online forums has made them popular in recent years. Although online forums are valuable sources of information, the variety of message sources makes retrieving reliable threads with high-quality content an issue. The majority of existing information retrieval systems ignore the quality of retrieved documents, particularly in the field of thread retrieval. In this research, we present an approach that employs various quality features in order to assess the quality of retrieved threads. Different aspects of content quality, including completeness, comprehensiveness, and politeness, are captured by these features, which leads to finding not only textually but also conceptually relevant threads for a user query within a forum. To analyse the influence of the features, we used an adapted version of the voting model for thread search as the retrieval system. We equipped it with each feature individually, and with various combinations of features in turn, over multiple runs. The results show that incorporating the quality features significantly enhances the effectiveness of the retrieval system.

Keywords: Content quality, Forum search, Thread retrieval, Voting techniques.
