Search results for: multilevel semantic information
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11015

9845 A Comparative Study between Different Techniques of Off-Page and On-Page Search Engine Optimization

Authors: Ahmed Ishtiaq, Maeeda Khalid, Umair Sajjad

Abstract:

In the fast-moving world, information is the key to success, and work becomes easier when information is easily available. The Internet is now the largest collection and source of information, yet as the data on it grows every day, finding the required data becomes harder. Everyone wants his or her website to appear at the top of search results. This can be achieved by applying Search Engine Optimization (SEO) techniques inside or outside the application; accordingly, SEO is of two types, on-page (onsite) and off-page (offsite). SEO is a set of techniques and methods for increasing the number of users a website attracts on the World Wide Web, or for raising its rank in search engine indexing. In this paper, we compare different techniques of on-page and off-page SEO, suggest changes that should be made inside and outside the web page, and identify the most powerful elements and techniques in both types of SEO that search engines consider when assigning a high ranking.

Keywords: auto-suggestion, search engine optimization, SEO, query, web mining, web crawler

Procedia PDF Downloads 130
9844 Factor Analysis Based on Semantic Differential of the Public Perception of Public Art: A Case Study of the Malaysia National Monument

Authors: Yuhanis Ibrahim, Sung-Pil Lee

Abstract:

This study attempts to identify factors that contribute to an assessment framework for public art, memorial monuments specifically. Memorial monuments hold significant and rich messages, whether the intention of the art is to mark and commemorate an important event or to inform younger generations about the past. A public monument should relate to the public and raise awareness about the significant issue it commemorates. Investigating the impact of an existing public memorial artwork will therefore, it is hoped, shed some light for the stakeholders of upcoming public art projects, ensuring that a lucid memorial message is delivered directly to the public. The public is the main actor, as the public is the fundamental purpose for which the art was created. Perception is framed as one of the reliable evaluation tools for assessing the impact factors of public art. The Malaysia National Monument was selected as the case study for the investigation. The public's perceptions were gathered using a questionnaire involving participants (n = 115) to attain keywords, and the Semantic Differential Methodology (SDM) was then adopted to evaluate perceptions of the memorial monument. These perceptions were measured for reliability and then factorised using the Principal Component Analysis (PCA) method of factor analysis to acquire concise factors for monument assessment. The results revealed four factors that influence the public's perception of the monument: aesthetics, audience, topology, and public reception. The study concludes by proposing these factors for the assessment of public memorial art in future projects, especially in Malaysia.

Keywords: factor analysis, public art, public perception, semantical differential methodology

Procedia PDF Downloads 486
9843 User Selections on Social Network Applications

Authors: C. C. Liang

Abstract:

MSN used to be the most popular application for communicating within social networks, but Facebook chat is now the most popular. Facebook and MSN have similar characteristics, including usefulness, ease of use, and a similar core function, the exchange of information with friends, and Facebook outperforms MSN in both usefulness and ease of use. However, the adoption of Facebook and the abandonment of MSN have occurred for other reasons. Functions can be improved, but users' willingness to use an application does not depend on functionality alone. Flow status has been established to be crucial to users' adoption of cyber applications and affects users' adoption of software applications. If users experience flow while using a software application, they will enjoy using it frequently, and may even switch their preferred application from an old one to a new one. However, no investigation has examined the choice behavior involved in switching from MSN to Facebook based on a consideration of flow experiences and functions. This investigation discusses the flow experiences and functions of social-networking applications. Flow experience is found to affect perceived ease of use and perceived usefulness; perceived ease of use influences information exchange with friends and perceived usefulness; information exchange influences perceived usefulness, but information exchange has no effect on flow experience.

Keywords: consumer behavior, social media, technology acceptance model, flow experience

Procedia PDF Downloads 339
9842 A Method for Multimedia User Interface Design for Mobile Learning

Authors: Shimaa Nagro, Russell Campion

Abstract:

Mobile devices are becoming ever more widely available, with growing functionality, and are increasingly used as an enabling technology to give students access to educational material anytime and anywhere. However, the design of educational material user interfaces for mobile devices is beset by many unresolved research issues, such as those arising from emphasising the information concepts and then mapping this information to appropriate media (modelling information, then mapping media effectively). This report describes a multimedia user interface design method for mobile learning. The method covers specification of user requirements and information architecture, media selection to represent the information content, design for directing attention to important information, and interaction design to enhance user engagement, based on Human-Computer Interaction (HCI) design strategies. The method will be evaluated through three case studies to demonstrate that it is suitable for application to different areas: an application to teach major computer networking concepts, an application to deliver a history-based topic, and, after the method has been revised to remove deficiencies identified in the first two case studies, an application to teach mathematical principles. At that point, the method will again be revised into its final format. A usability evaluation will then be carried out to measure the usefulness and effectiveness of the method. The investigation will combine qualitative and quantitative methods, including interviews and questionnaires for data collection, with the three case studies validating the method. The researcher has successfully produced the method, which is now under validation and testing. From this point forward, the report refers to the method by the abbreviation MDMLM, Multimedia Design Mobile Learning Method.

Keywords: human-computer interaction, interface design, mobile learning, education

Procedia PDF Downloads 227
9841 A Framework for Secure Information Flow Analysis in Web Applications

Authors: Ralph Adaimy, Wassim El-Hajj, Ghassen Ben Brahim, Hazem Hajj, Haidar Safa

Abstract:

Huge amounts of data and personal information are sent to and retrieved from web applications on a daily basis. Every application has its own confidentiality and integrity policies. Violating these policies can have a broad negative impact on the involved company's financial status, while enforcing them is very hard, even for developers with a good security background. In this paper, we propose a framework that enforces security-by-construction in web applications. Minimal developer effort is required, in the sense that the developer only needs to annotate database attributes with a security class. The web application code is then converted into an intermediary representation called the Extended Program Dependence Graph (EPDG). Using the EPDG, the provided annotations are propagated to the application code and run against generic security enforcement rules that were carefully designed to detect insecure information flows as early as they occur. As a result, any violation of the data's confidentiality or integrity policies is reported. As a proof of concept, two PHP web applications, Hotel Reservation and Auction, were used for testing and validation. The proposed system was able to catch all the existing insecure information flows at their source. Moreover, to highlight the simplicity of the suggested approach compared with existing approaches, two professional web developers assessed the annotation tasks needed in the presented case studies and provided very positive feedback on the simplicity of the annotation task.
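The annotate-then-propagate idea described above can be made concrete with a toy example. The sketch below is a minimal, hypothetical label checker, not the authors' EPDG framework: the security classes, attribute names, and flow rule are illustrative assumptions.

```python
# Toy sketch of label-based information-flow checking, loosely inspired by
# the annotation approach in the abstract. The lattice, attribute names, and
# flow rule are illustrative assumptions, not the paper's EPDG system.

# Lattice of security classes: data may flow only to an equal or higher class.
LEVELS = {"public": 0, "confidential": 1}

# Developer annotations on database attributes (cf. the single annotation step).
ANNOTATIONS = {
    "hotel.room_price": "public",
    "hotel.guest_card": "confidential",
}

def flow_allowed(source_attr: str, sink_level: str) -> bool:
    """Return True if data from source_attr may flow to a sink of sink_level."""
    return LEVELS[ANNOTATIONS[source_attr]] <= LEVELS[sink_level]

def check_flows(flows):
    """Return every (attribute, sink) pair that violates the policy."""
    return [(attr, sink) for attr, sink in flows if not flow_allowed(attr, sink)]

# Example: confidential guest-card data leaking to a public page is flagged.
violations = check_flows([
    ("hotel.room_price", "public"),   # allowed: public -> public
    ("hotel.guest_card", "public"),   # violation: confidential -> public
])
print(violations)  # [('hotel.guest_card', 'public')]
```

The real framework propagates such labels through a program dependence graph rather than a flat list of flows, but the lattice comparison at each sink is the same idea.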

Keywords: web applications security, secure information flow, program dependence graph, database annotation

Procedia PDF Downloads 455
9840 Information Pollution: Exploratory Analysis of Sub-Saharan African Media’s Capabilities to Combat Misinformation and Disinformation

Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi

Abstract:

The role of information in societal development and growth cannot be over-emphasized. Harnessing the flow of information has remained an age-long strategy for building an egalitarian society, yet the same flow has become a tool for throwing society into chaos and anarchy. Information has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses, which is why some scholars posit that it can be deployed to wreak "mass destruction" or to promote "mass development". When used as a tool for destruction, its effect on society is like that of an atomic bomb which, when released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem overwhelmed by its negative effects. While information remains one of the bedrocks of democracy, the information ecosystem across the world is currently facing a more difficult battle than ever before due to information pluralism and technological advancement: the more the agents involved try to combat the menace, the more difficult and complex it proves to curb. In a region like Africa, with fragile democracies entangled in the complexities of multiple religions, multiple cultures, inter-tribal relations and unresolved ongoing issues, it is important to pay critical attention to information disorder and to find appropriate ways to curb or mitigate its effects. The media, as the middleman in the distribution of information, need to build the capacities and capabilities to separate the whiff of misinformation and disinformation from the grains of truthful data. From quasi-statistical observation, the efforts aimed at fighting information pollution have not considered the resilience media organisations have built against this disorder.
Apparently, the efforts, resources and technologies adopted for the conception, production and spread of information pollution are much more sophisticated than the approaches used to suppress it or even reduce its effects on society. Thus, this study interrogates the phenomenon of information pollution and the capabilities of selected media organisations in Sub-Saharan Africa. In doing so, the following questions are probed: What actions are the media taking to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why? Adopting quantitative and qualitative approaches and anchored on the Dynamic Capability Theory, the study aims to dig up insights that further the understanding of the complexities of information pollution, media capabilities and the strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys and the use of questionnaires to gather data from journalists on their understanding of misinformation and disinformation and their capabilities to gatekeep. A case analysis of selected media and a content analysis of their strategic resources for managing misinformation and disinformation are adopted in the study, while the qualitative approach involves in-depth interviews to support a more robust analysis. The study is critical to the fight against information pollution for a number of reasons. First, it is a novel attempt to document the level of media capabilities to fight the phenomenon of information disorder. Second, the study will give the region a clear understanding of the capabilities of existing media organisations to combat misinformation and disinformation in the countries that make up the region. Recommendations emanating from the study could be used to initiate, intensify or review existing approaches to combating the menace of information pollution in the region.

Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa

Procedia PDF Downloads 148
9839 SMEs Access to Finance in Croatia – Model Approach

Authors: Vinko Vidučić, Ljiljana Vidučić, Damir Boras

Abstract:

The goals of the research include determining the characteristics of SME finance in Croatia, as well as determining the indirect growth rates of an information model of entrepreneurs' perception of the business environment. The research results show that the cost of finance and access to finance are the most important constraining factors in setting up and running the businesses of small entrepreneurs in Croatia. Furthermore, small entrepreneurs in Croatia are significantly dissatisfied with administrative barriers, although to a relatively lesser extent than was the case in the pre-crisis period. A high collateral requirement is the main characteristic of bank lending to SMEs, followed by a long credit elaboration process. The formulated information model defines the individual impact of the indirect growth rates of the remaining variables on each specific variable of the model.

Keywords: business environment, information model, indirect growth rates, SME finance

Procedia PDF Downloads 348
9838 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language

Authors: Ghazal Faraj, András Micsik

Abstract:

Shapes Constraint Language (SHACL) is a World Wide Web Consortium (W3C) language that provides well-defined shapes expressed as RDF graphs, named "shape graphs". These shape graphs validate other Resource Description Framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string-matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL has two parts: SHACL Core, which includes all shapes covering the most frequent constraint components, and SHACL-SPARQL, an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as enhancing the links between different datasets is a continuous process. The conventional validation methods are a semantic reasoner and SPARQL queries. The former checks formalization errors and data-type inconsistencies, while the latter validates data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert; this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) Conceptual Reference Model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required.
Subsequently, the proper environment in which to run SHACL and its shape graphs was determined. As a case study, we performed SHACL validation over a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data-type validation, which constrains whether the source data is mapped to the correct data type, for instance, checking whether a birthdate is assigned to xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; and b) data-integrity validation, which detects inconsistent data, for instance, inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
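The two validation categories can be sketched in plain Python to make the checks concrete. This is only an illustration using standard-library dates, not SHACL syntax; in the actual work the same conditions would be expressed as SHACL shapes over the CIDOC-CRM graph, and the sample values below are invented.

```python
# Stdlib sketch of the two validation categories named in the abstract:
# (a) data-type validation: is the birthdate a well-formed xsd:dateTime value?
# (b) data-integrity validation: does the birthdate precede linked event dates?
# The person record and dates are illustrative assumptions.
from datetime import datetime

def valid_datetime(value: str) -> bool:
    """Data-type check: value must parse as an ISO 8601 dateTime."""
    try:
        datetime.fromisoformat(value)
        return True
    except ValueError:
        return False

def birth_before_events(birth: str, event_dates: list) -> bool:
    """Data-integrity check: birthdate must precede every linked event date."""
    b = datetime.fromisoformat(birth)
    return all(b < datetime.fromisoformat(e) for e in event_dates)

person = {"birth": "1880-05-02T00:00:00",
          "events": ["1905-06-30T00:00:00", "1915-11-25T00:00:00"]}

print(valid_datetime(person["birth"]))                          # True
print(birth_before_events(person["birth"], person["events"]))   # True
print(birth_before_events("1920-01-01T00:00:00", person["events"]))  # False
```

A SHACL engine performs the same kind of checks declaratively, reporting each violating node in a validation report graph instead of returning booleans.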

Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping

Procedia PDF Downloads 240
9837 Integrated Model for Enhancing Data Security Performance in Cloud Computing

Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali

Abstract:

Cloud computing is an important and promising field of the recent decade. It allows sharing resources, services and information among people across the whole world. Although the advantages of using clouds are great, a cloud also carries many risks, and data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users, and save the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a username and password are used, and the password is protected with SHA-2 as a one-way hash. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
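The two roles assigned to SHA-2 in the model, one-way password protection and data-integrity digests, can be sketched with Python's standard library. This is an illustrative sketch, not the authors' implementation: the fixed salt is a placeholder (a real system would use a random per-user salt), and the Blowfish exchange step is omitted because symmetric encryption requires a third-party library.

```python
# Sketch of the SHA-2 (here SHA-256) uses described in the model.
# Assumptions: fixed demo salt, invented payloads; not the paper's system.
import hashlib

def hash_password(password: str, salt: bytes = b"demo-salt") -> str:
    """One-way SHA-256 hash of a salted password; the server stores this,
    never the plaintext password."""
    return hashlib.sha256(salt + password.encode()).hexdigest()

def integrity_digest(data: bytes) -> str:
    """SHA-256 digest sent alongside a file so the receiver can detect
    tampering by recomputing and comparing."""
    return hashlib.sha256(data).hexdigest()

stored = hash_password("s3cret")
# Login succeeds only if the re-hashed attempt matches the stored digest.
print(hash_password("s3cret") == stored)   # True
print(hash_password("wrong") == stored)    # False

payload = b"report.pdf contents"
digest = integrity_digest(payload)
# Any modification in transit changes the digest.
print(integrity_digest(payload + b"!") == digest)  # False
```

In practice a dedicated password-hashing scheme with per-user random salts and key stretching is preferred over a single salted SHA-256 round; the sketch only mirrors the roles the abstract assigns to SHA-2.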

Keywords: cloud computing, data security, SAAS, PAAS, IAAS, Blowfish

Procedia PDF Downloads 458
9836 Text as Reader Device Improving Subjectivity on the Role of Attestation between Interpretative Semiotics and Discursive Linguistics

Authors: Marco Castagna

Abstract:

The proposed paper inquires into the relation between text and reader, focusing on the concept of ‘attestation’. Indeed, despite being widely accepted in semiotic research, even today the concept of text remains uncertainly defined. It seems undeniable that what is called ‘text’ offers an image of internal cohesion and coherence that makes it possible to analyze it as an object. Nevertheless, this same object becomes problematic when it is pragmatically activated by the act of reading. In fact, like the TARDIS, the unique space-time vehicle used by the well-known BBC character Doctor Who in his adventures, every text appears to its readers not only “bigger inside than outside” but also offering spaces that change according to the different traveller standing in it. In a few words, this singular condition raises questions about the gnosiological relation between text and reader. How can a text be considered the ‘same’ even if it can be read in different ways by different subjects? How can readers be provided in advance with the knowledge required for ‘understanding’ a text and at the same time learn something more from it? To explain this singular condition, it seems useful to start thinking about the text as a device rather than an object. In other words, this unique status becomes more clearly understandable when ‘text’ ceases to be considered as a box designed to move meaning from a sender to a recipient (marking the semiotic priority of the “code”) and starts to be recognized as a performative meaning hypothesis, discursively configured by one or more forms and empirically perceivable by means of one or more substances. Thus, a text appears as a “semantic hanger”, potentially offered to the “unending deferral of the interpretant” and from time to time fixed as an “instance of Discourse”.
In this perspective, every reading can be considered as an answer to the continuous request to confirm or deny the meaning configuration (the meaning hypothesis) expressed by the text. Finally, ‘attestation’ is exactly what regulates this dynamic of request and answer, through which the reader is able to confirm his previous hypotheses about reality or acquire new ones.

Keywords: attestation, meaning, reader, text

Procedia PDF Downloads 227
9835 Perceiving Casual Speech: A Gating Experiment with French Listeners of L2 English

Authors: Naouel Zoghlami

Abstract:

Spoken-word recognition involves the simultaneous activation of potential word candidates, which compete with each other for final correct recognition. In continuous speech, the activation-competition process gets more complicated due to speech reductions at word boundaries. Lexical processing is more difficult in L2 than in L1 because L2 listeners often lack phonetic, lexico-semantic, syntactic, and prosodic knowledge in the target language. In this study, we investigate the online lexical segmentation hypotheses that French listeners of L2 English form and then revise as subsequent perceptual evidence is revealed. Our purpose is to shed further light on the processes of L2 spoken-word recognition in context and to better understand L2 listening difficulties through a comparison of skilled and unskilled listeners' reactions at the point where their working hypothesis is rejected. We use a variant of the gating experiment in which subjects transcribe an English sentence presented in increments of progressively greater duration. The spoken sentence was “And this amazing athlete has just broken another world record”, chosen mainly because it includes common reductions and phonetic features of English, such as elision and assimilation. Our preliminary results show an important difference in the manner in which proficient and less-proficient L2 listeners handle connected speech: less-proficient listeners delay recognition of words as they wait for lexical and syntactic evidence to appear in the gates. Further statistical analyses are currently being undertaken.
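The gating procedure can be sketched as a simple generator over the stimulus sentence. This is a textual analogue for illustration only: a real gating experiment presents increments of the audio signal by duration, not word-by-word transcript fragments.

```python
# Textual sketch of the gating paradigm: the stimulus is revealed in
# successively larger increments (here one word at a time, as an analogue
# of progressively longer audio gates). The sentence is the one used in
# the study; the word-level gate size is an illustrative simplification.
def gates(sentence: str):
    """Yield successively longer fragments of the sentence, one word at a time."""
    words = sentence.split()
    for i in range(1, len(words) + 1):
        yield " ".join(words[:i])

stimulus = "And this amazing athlete has just broken another world record"
for g in gates(stimulus):
    print(g)
# The first gate is "And"; the final gate is the full sentence. At each
# gate, a participant transcribes what they believe they heard so far.
```

Comparing transcriptions across gates shows at which increment each listener commits to, or abandons, a lexical segmentation hypothesis.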

Keywords: gating paradigm, spoken word recognition, online lexical segmentation, L2 listening

Procedia PDF Downloads 451
9834 Label Survey in Romania: A Study on How Consumers Use Food Labeling

Authors: Gabriela Iordachescu, Mariana Cretu Stuparu, Mirela Praisler, Camelia Busila, Doina Voinescu, Camelia Vizireanu

Abstract:

The aim of the study was to evaluate consumers’ degree of confidence in food labeling and how they use and understand the label and its elements. The label is a bridge between producers, suppliers, and consumers. It has to offer enough information in terms of public health and food safety: a statement of ingredients, nutritional information, warnings and advisory statements, production date and shelf life, and instructions for storage and preparation (if required). The survey was conducted on a group of 500 consumers in Romania, aged 15+, male and female, from urban and rural areas and with different education levels. The questionnaire was distributed face to face and online. It had single- and multiple-choice questions, with label images for efficiency and the best understanding of the questions. Regulation 1169/2011, applied to food products from 13 December 2016, improved and adapted the requirements for labeling in a clear manner. The questions were divided into the following topics: interest and general trust in labeling; use and understanding of label elements; understanding of the ingredient list and safety information; nutrition information; advisory statements; serving sizes; the meanings of best-before and use-by dates; intelligent labeling; and demographic data. Three choice-selection exercises were also included, in which consumers had to choose between two similar products and evaluate which label element is most important in product choice. The data were analysed using MINITAB 17 and PCA analysis. Most of the respondents trust the food label, taking particular elements into account especially when they buy a product for the first time. They usually check the sugar content and type of sugar and the saturated fat, and use the mandatory label elements and the nutrition information panel. Consumers also pay attention to advisory statements, especially if one of the items is relevant to them or their family. Intelligent labeling is a challenging option. In addition, the paper underlines that consumers are more careful and selective with their food consumption, and the label is their main helper in this.

Keywords: consumers, food safety information, labeling, labeling nutritional information

Procedia PDF Downloads 196
9833 Effective Governance through Mobile Phones: Cases Supporting the Introduction and Implementation

Authors: Mohd Mudasir Shafi, Zafrul Hasan, Talat Saleem

Abstract:

Information and Communication Technology (ICT) services have been described as a route to good governance. The introduction of ICT into governance has given rise to the idea of e-governance, which helps enhance transparency, generate accountability and improve responsiveness in the system in order to provide faster, higher-quality service to citizens. Advances in ICT have allowed governments all over the world to speed up the delivery of information and services to citizens and businesses and to increase their participation in governance. There have been varying degrees of success over the past decade in providing services to citizens using the internet and different web services, and these e-government initiatives have been extensively researched. Our research is aimed at the transition from electronic government (e-government) to mobile government (m-government) initiatives implementing mobile services, and is concerned with understanding the major factors that aid the adoption and distribution of these services. Sufficient research must be done on the integration process between e-government and m-government, and sufficient investigation must also cover all the factors that could affect the transition process; such factors differ between places and with the state of information technology available there. In this paper, we discuss why mobile communication systems can be used for effective e-governance and the areas where m-governance can be implemented. The paper examines some of the reasons for, as well as the main opportunities for, improving effective governance through mobile phones.

Keywords: e-governance, mobile phones, information technology, m-government

Procedia PDF Downloads 430
9832 Applications Using Geographic Information System for Planning and Development of Energy Efficient and Sustainable Living for Smart-Cities

Authors: Javed Mohammed

Abstract:

As the urbanization process has been, and will be, happening on an unprecedented scale worldwide, there are pressing demands from academic research and practical fields for smart management and intelligent planning of cities, to handle the increasing demands on infrastructure and the potential risks of inhabitant agglomeration in disaster management. Geo-spatial data and Geographic Information Systems (GIS) are essential components for building smart cities, in the basic sense that they map the physical world into a virtual environment as a referencing framework. At a higher level, GIS is becoming very important to smart cities across different sectors. In the digital city era, digital maps and geospatial databases have long been integrated into government workflows in land management, urban planning and transportation. People have anticipated GIS becoming more powerful, not only as an archival and data-management tool but also as a source of spatial models for supporting decision-making in intelligent cities. The purpose of this project is to offer observations and analysis based on a detailed discussion of a GIS-driven framework for the development of smart and sustainable cities through high penetration of renewable energy technologies.

Keywords: digital maps, geo-spatial, geographic information system, smart cities, renewable energy, urban planning

Procedia PDF Downloads 514
9831 Bridging the Gaping Levels of Information Entree for Visually Impaired Students in the Sri Lankan University Libraries

Authors: Wilfred Jeyatheese Jeyaraj

Abstract:

Education is a key determinant of future success, and every person deserves non-discriminatory access to information for educational needs in any case. Analysing and understanding complex information is a crucial learning skill, especially for students. In order to compete equally with sighted students, visually impaired students require unhindered access to all the available information resources. When the education of visually impaired students comes into focus, it can be stated that they encounter several obstacles and barriers before they enter the university and during their time there as students. These obstacles and barriers are spread across technical, organizational and social arenas. This study reveals possible approaches by which visually impaired students can absorb and benefit from the information provided by Sri Lankan university libraries. A purposive sampling technique was used to select the sample of visually impaired students attached to Sri Lankan national universities. Seven national universities accommodate visually impaired students; on the basis of the identified data, they were selected for this study, and 80 visually impaired students were selected as the sample group. A descriptive survey method was used to collect data, with structured questionnaires, interviews and direct observation as research instruments. As far as the Sri Lankan context is concerned, visually impaired students are able to finish their courses through their own determination to overcome the barriers they encounter on their way to graduation, through moral and practical support from their friends, and very often through a high level of creativity. According to the findings, there are no specially trained university librarians to serve visually impaired users, and little assistive-technology equipment is currently available. 
This paper enables all university libraries in Sri Lanka to be informed about the social isolation of visually impaired students at Sri Lankan universities and focuses on rectifying these issues by considering their distinct case for interaction.

Keywords: information access, Sri Lanka, university libraries, visual impairment

Procedia PDF Downloads 220
9830 Potential of Visualization and Information Modeling on Productivity Improvement and Cost Saving: A Case Study of a Multi-Residential Construction Project

Authors: Sara Rankohi, Lloyd Waugh

Abstract:

Construction sites are saturated with information, and digitalization is reaching them to meet the enormous demand for knowledge sharing and information documentation. From flying drones, 3D laser scanners and pocket mobile applications to augmented reality glasses and smart helmets, visualization technologies impose real-time information directly onto construction professionals' field of vision. Although these technologies are highly applicable and can have a direct impact on project cost and productivity, experience shows that only a minority of construction professionals adapt quickly enough to benefit from them in practice; the majority of construction managers still tend to apply traditional construction management methods. This paper investigates a) current applications of visualization technologies in construction project management, and b) the direct effect of these technologies on productivity improvement and cost saving in a multi-residential building project, via a case study of the Mac Taggart Senior Care project located in Edmonton, Alberta. The research shows that image-based technologies have a direct impact on improving project productivity and achieving cost savings.

Keywords: image-based technologies, project management, cost, productivity improvement

Procedia PDF Downloads 340
9829 Characteristic Sentence Stems in Academic English Texts: Definition, Identification, and Extraction

Authors: Jingjie Li, Wenjie Hu

Abstract:

Phraseological units in academic English texts have been a central focus in recent corpus linguistic research. A wide variety of phraseological units have been explored, including collocations, chunks, lexical bundles, patterns, semantic sequences, etc. This paper describes a special category of clause-level phraseological units, namely Characteristic Sentence Stems (CSSs), with a view to describing their defining criteria and extraction method. CSSs are contiguous lexico-grammatical sequences which contain a subject-predicate structure and which are frame expressions characteristic of academic writing. The extraction of CSSs consists of six steps: part-of-speech tagging, n-gram segmentation, structure identification, significance-of-occurrence calculation, text range calculation, and overlapping sequence reduction. The significance-of-occurrence calculation is the crux of this study: it involves computing both the internal association and the boundary independence of a CSS, testing the significance of the CSS from both internal and external perspectives. A new normalization algorithm is also introduced into the calculation of LocalMaxs for reducing overlapping sequences. It is argued that many sentence stems are so recurrent in academic texts that the most typical of them have become habitual ways of making meaning in academic writing. Therefore, studies of CSSs could have potential implications and reference value for academic discourse analysis, English for Academic Purposes (EAP) teaching, and writing.
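The internal-association step of the pipeline can be sketched as follows. This is a minimal illustration that scores a candidate stem by the mean pointwise mutual information of its adjacent word pairs; the toy corpus, tokenization, and the choice of PMI as the association measure are assumptions for illustration, not the authors' exact statistic:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def internal_association(gram, unigram_counts, bigram_counts, total):
    """Mean pointwise mutual information over adjacent word pairs,
    a rough internal-association score for a candidate stem.
    (Bigram probabilities are approximated with the token total.)"""
    scores = []
    for a, b in zip(gram, gram[1:]):
        p_a = unigram_counts[a] / total
        p_b = unigram_counts[b] / total
        p_ab = bigram_counts[(a, b)] / total
        scores.append(math.log(p_ab / (p_a * p_b), 2))
    return sum(scores) / len(scores)

# Toy corpus of tokenized sentences (invented for illustration).
corpus = [
    "the results suggest that the model works".split(),
    "the results suggest that further work is needed".split(),
    "we argue that the model works".split(),
]
uni = Counter(t for s in corpus for t in s)
bi = Counter(g for s in corpus for g in ngrams(s, 2))
total = sum(uni.values())

candidate = ("the", "results", "suggest", "that")
score = internal_association(candidate, uni, bi, total)
```

A full implementation would apply this score inside the LocalMaxs search over all n-gram lengths, alongside the boundary-independence test.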

Keywords: characteristic sentence stem, extraction method, phraseological unit, the statistical measure

Procedia PDF Downloads 150
9828 The Secrecy Capacity of the Semi-Deterministic Wiretap Channel with Three State Information

Authors: Mustafa El-Halabi

Abstract:

A general model of the wiretap channel with states is considered, where the legitimate receiver's and the wiretapper's observations depend on three states S1, S2 and S3. State S1 is non-causally known to the encoder, S2 is known to the receiver, and S3 remains unknown. A secure coding scheme based on structured binning is proposed, and it is shown to achieve the secrecy capacity when the signal at the legitimate receiver is a deterministic function of the input.

Keywords: physical layer security, interference, side information, secrecy capacity

Procedia PDF Downloads 369
9827 Interaction Design in Home Appliances: An Integrated Approach in Kansei and Hedonomics “Cases: Rice Cooker, Juicer, Mixer”

Authors: Sara Mostowfi, Hassan Sadeghinaeini, Sana Behnamasl, Leila Ensaniat, Maryam Mostafaee

Abstract:

Nowadays, most product producers, e.g. of home appliances, electronic machines and vehicles, focus on quality and comfort and promise consumers ease of use and pleasurable experiences when using the product. Consumers make their purchase decisions according to two kinds of needs: functional and emotional. Functional needs are fulfilled by product functionality, while emotional needs relate to the psychological aspects of the product. Emotions are distinctive elements that should be added to products and services to elevate them. In this study, the authors surveyed pleasurable and hedonomic aspects of products made by a home appliance company in Iran. Three samples of home appliances were selected: a mixer, a rice cooker, and an iron. Fifteen women (aged 20-60) participated in the study, and each user evaluated every product with a questionnaire based on a 7-point semantic differential scale. After statistical analysis of the results, 90% of users were found to be unsatisfied with the hedonic and pleasurable criteria in their interaction with these products; they indicated that meeting hedonomic and pleasurable criteria would give them better ease of use and functionality. Our findings show a significant association between product features and user satisfaction. It appears that industrial design has a significant impact on the company's products, and that by attending to the pleasurable criteria the company's sales would be more successful.

Keywords: home appliance, interaction, pleasure, hedonomy, ergonomy

Procedia PDF Downloads 364
9826 Problems in Lifelong Education Course in Information and Communication Technology

Authors: Hisham Md.Suhadi, Faaizah Shahbodin, Jamaluddin Hashim, Nurul Huda Mahsudi, Mahathir Mohd Sarjan

Abstract:

This study identifies the problems that occur in organizing short lifelong-learning courses in information and communication technology (ICT) education, as faced by the lecturers and staff at the Mara Skill Institute and the Industrial Training Institute in Pahang, Malaysia. The main aspects of these issues are classified into five categories, one of which is course administration. Fifty lecturers and staff were selected as respondents. The sample was selected using a non-random purposive sampling method. A questionnaire, divided into five main parts, was used as the research instrument. All data gained from the questionnaire were analyzed using SPSS in terms of mean, standard deviation and percentage. The findings showed that problems do occur in organizing short courses for lifelong learning in ICT education.

Keywords: lifelong Education, information and communication technology, short course, ICT education, courses administrative

Procedia PDF Downloads 433
9825 Adoption of International Financial Reporting Standards and Earnings Quality in Listed Deposit Money Banks in Nigeria

Authors: Shehu Usman Hassan

Abstract:

Published accounting information in financial statements is required to provide various users - shareholders, employees, suppliers, creditors, financial analysts, stockbrokers and government agencies - with timely and reliable information useful for making prudent, effective and efficient decisions. Widespread failures in financial information quality have created the need to improve that quality and to strengthen the control of managers by setting up good firm structures. This paper investigates firm attributes from the perspective of the structure, monitoring, and performance elements of listed deposit money banks in Nigeria. The study adopted a correlational research design with balanced panel data of 14 banks as the sample, using multiple regression as the tool of analysis. The results reveal that firm attributes (leverage, profitability, liquidity, bank size and bank growth) have a significant influence on the earnings quality of listed deposit money banks in Nigeria after the adoption of IFRS, whereas in the pre-adoption period the selected firm attributes had no significant impact on earnings quality. It is therefore concluded that the adoption of IFRS was right and timely.

Keywords: earnings quality, firm attributes, listed deposit money bank, Nigeria

Procedia PDF Downloads 494
9824 Cognitive and Behavioral Disorders in Patients with Precuneal Infarcts

Authors: F. Ece Cetin, H. Nezih Ozdemir, Emre Kumral

Abstract:

Ischemic stroke of the precuneal cortex (PC) alone is extremely rare. This study aims to evaluate the clinical, neurocognitive, and behavioural characteristics of isolated PC infarcts. We assessed neuropsychological and behavioural findings in 12 patients with an isolated PC infarct among 3800 patients with ischemic stroke. To determine the most frequently affected brain locus, we first overlapped the ischemic areas of patients with and without specific cognitive disorders; secondly, we compared both overlap maps using the 'subtraction plot' function of MRIcroGL. Patients showed various types of cognitive disorders. All patients experienced more than one category of cognitive disorder, except for two patients with only one. Lesion topographical analysis showed that damage within the anterior precuneal region might lead to consciousness disorders (25%), self-processing impairment (42%), and visuospatial disorders (58%), while lesions in the posterior precuneal region caused episodic and semantic memory impairment (33%). The whole precuneus was involved in at least one body awareness disorder. The cause of the stroke was cardioembolism in 5 patients (42%), large artery disease in 3 (25%), and unknown in 4 (33%). This study showed a wide variety of neuropsychological and behavioural disorders in patients with precuneal infarct. Future studies are needed to achieve a proper definition of the function of the precuneus in relation to the extended cortical areas. Precuneal cortex infarcts were found to predict a source of embolism from the large arteries or heart.

Keywords: cognition, pericallosal artery, precuneal cortex, ischemic stroke

Procedia PDF Downloads 116
9823 Tool for Metadata Extraction and Content Packaging as Endorsed in OAIS Framework

Authors: Payal Abichandani, Rishi Prakash, Paras Nath Barwal, B. K. Murthy

Abstract:

Information generated from various computerization processes is a potentially rich source of knowledge for its designated community. Passing this information from generation to generation without modifying its meaning is a challenging activity. To preserve and archive data for future generations, it is essential to prove the authenticity of the data. This can be achieved by extracting metadata from the data, which can prove authenticity and create trust in the archived data. A subsequent challenge is technology obsolescence; metadata extraction and standardization can be used effectively to tackle this problem. Metadata can broadly be categorized at two levels, technical and domain. Technical metadata provides information that can be used to understand and interpret the data record, but this level of metadata alone is not sufficient to create trustworthiness. We have developed a tool which extracts and standardizes both technical and domain-level metadata. This paper describes the features of the tool and how it was developed.
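The technical-metadata level described above can be sketched in a few lines: extract format-level properties that support fixity and authenticity checks (as in OAIS Preservation Description Information) and serialize them as XML. The field names and element layout here are illustrative assumptions, not the tool's actual schema:

```python
import hashlib
import os
import xml.etree.ElementTree as ET

def technical_metadata(path):
    """Extract simple technical metadata (filename, size, checksum)
    from a file and serialize it as an XML fragment. The sha256
    checksum supports later fixity/authenticity verification."""
    with open(path, "rb") as f:
        data = f.read()
    fields = {
        "filename": os.path.basename(path),
        "size_bytes": str(len(data)),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    root = ET.Element("technicalMetadata")
    for name, value in fields.items():
        ET.SubElement(root, name).text = value
    return ET.tostring(root, encoding="unicode")
```

Domain-level metadata (subject, provenance, usage rights) would be added alongside this, typically from record content or curator input rather than from the bytes themselves.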

Keywords: digital preservation, metadata, OAIS, PDI, XML

Procedia PDF Downloads 379
9822 The Development of Electronic Health Record Adoption in Indonesian Hospitals: 2008-2015

Authors: Adistya Maulidya, Mujuna Abbas, Nur Assyifa, Putri Dewi Gutiyani

Abstract:

Countries are moving forward to develop databases from electronic health records for monitoring and research. Since the issuance of the Information and Electronic Transaction Law No. 11 of 2008 as well as Minister Regulation No. 269 of 2008, there has been gradual progress among Indonesian hospitals in adopting Electronic Health Records (EHR) in their systems. This paper is the result of a literature study on the progress that has been made in Indonesia in developing a national health information infrastructure through EHR within hospitals. The purpose of this study was to describe trends in the adoption of EHR systems among hospitals in Indonesia from 2008 to 2015, as well as to assess the preparedness of the Indonesian national health information infrastructure for the ASEAN Economic Community.

Keywords: adoption, Indonesian hospitals, electronic health record, ASEAN economic community

Procedia PDF Downloads 273
9821 Information Technology and the Challenges Facing the Legal Profession in Nigeria

Authors: Odoh Ben Uruchi

Abstract:

Information Technology is an outcome of the nexus between computer technology and communication technology, and it has spread like silver fibre across Nigeria. Information Technology represents the fourth generation of human communication, after sight, oral and written communication. The internet, as with all path-breaking technological developments, gives us ample opportunity to act as a global community, advertising and operating across all frontiers, over borders and beyond the control of any government. The security concerns, computer abuse and side effects of this technology have moved to the forefront of the consciousness of law enforcement agencies. Unfortunately, Nigeria is one of the very few countries in the world not to have legislated cyber laws, although several unsuccessful attempts have been made in recent times at providing a legal framework for regulating activities in the Nigerian cyberspace. Traditional legal systems have had great difficulty in keeping pace with the rapid growth of the internet and its impact throughout Nigeria, and the existing legal frameworks are constantly being challenged by technological advancement. This has created a need to constantly update and adapt the way in which we organize ourselves as legal practitioners in order to maintain overall control of domestic and national interests. This paper appraises the challenges facing the legal profession in Nigeria owing to the want of cyber laws. In doing so, the paper highlights the loopholes in the existing laws and recommends the way forward.

Keywords: information technology, challenges, legal profession, Nigeria

Procedia PDF Downloads 505
9820 Symmetric Key Encryption Algorithm Using Indian Traditional Musical Scale for Information Security

Authors: Aishwarya Talapuru, Sri Silpa Padmanabhuni, B. Jyoshna

Abstract:

Cryptography helps prevent threats to information security by providing various algorithms. This study introduces a new symmetric key encryption algorithm for information security that is linked to 'raagas', the scales and note patterns of traditional Indian music. The algorithm takes the plain text as input and begins its encryption process by randomly selecting a raaga from a list of raagas assumed to be held by both sender and receiver. The plain text is associated with the selected raaga, and an intermediate cipher text is formed as the algorithm converts the plain text characters into other characters according to its rules. This intermediate cipher text is arranged in various patterns over three rounds of encryption, and the total number of rounds in the algorithm is a multiple of 3: the output of each sequence of three rounds is passed recursively as the input to the next sequence until the total number of rounds has been performed. The raaga selected by the algorithm and the number of rounds performed are specified at an arbitrary location in the key, along with other important information about the rounds of encryption embedded in the key, known to the sender and interpreted only by the receiver, thereby making the algorithm hack-proof. The key can be constructed from any number of bits without any restriction on size. A software application has also been developed to demonstrate this encryption process, dynamically taking the plain text as input and readily generating the cipher text as output. This algorithm therefore stands as one of the strongest tools for information security.
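The general idea of a raaga-keyed round cipher can be sketched as follows. The paper does not publish its exact substitution rules, so this illustration simply maps each raaga to a sequence of note offsets and uses them as per-position character shifts over a number of rounds that is a multiple of 3; the raaga names, offset encodings, and shift scheme are all assumptions:

```python
# Hypothetical raaga-to-offset table (semitone offsets of scale notes).
RAAGAS = {
    "bilawal": [0, 2, 4, 5, 7, 9, 11],
    "bhairav": [0, 1, 4, 5, 7, 8, 11],
}

def _transform(text, raaga, rounds, sign):
    """Shift each character by the raaga offset for its position,
    rotating the offset sequence each round. sign=+1 encrypts,
    sign=-1 reverses the same shifts."""
    offsets = RAAGAS[raaga]
    chars = list(text)
    for r in range(rounds):
        for i, c in enumerate(chars):
            shift = sign * offsets[(i + r) % len(offsets)]
            chars[i] = chr((ord(c) + shift) % 0x110000)
    return "".join(chars)

def encrypt(plain, raaga, rounds=3):
    assert rounds % 3 == 0, "rounds must be a multiple of 3"
    return _transform(plain, raaga, rounds, +1)

def decrypt(cipher, raaga, rounds=3):
    return _transform(cipher, raaga, rounds, -1)
```

Note this sketch is a simple polyalphabetic shift and is not secure in itself; it only illustrates how a shared raaga and round count can parameterize a reversible transformation.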

Keywords: cipher text, cryptography, plaintext, raaga

Procedia PDF Downloads 272
9819 Continuous FAQ Updating for Service Incident Ticket Resolution

Authors: Kohtaroh Miyamoto

Abstract:

As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use an FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such an FAQ system can help reduce the resolution time for each service incident ticket. However, there is a major problem where over time the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates in response to new insights or new trends in the problems addressed in order to maintain its usefulness for problem resolution. These updates require a systematic approach to define the exact portion of the FAQ and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use structured information and the unstructured text information with the timelines of the information in the service incident tickets. We cluster the tickets by structured category information, by keywords, and by keyword modifiers for the unstructured text information. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year time period. After the first 6 months, we started to create FAQs and confirmed they improved the resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for the automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time. Also, the average time reduction of ticket resolution was between 31.6% and 43.9%. Subjective analysis showed more than 75% reported that the FAQ system was useful in reducing ticket resolution times.
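The clustering and urgency-scoring steps can be sketched as follows. The ticket schema, the category/keyword cluster key, and the weighting of trend, resolution time, and priority are all assumptions for illustration; the paper does not publish its exact formula:

```python
from collections import defaultdict

# Hypothetical ticket records (field names are assumptions).
tickets = [
    {"category": "db", "keywords": {"timeout"}, "resolution_h": 12, "priority": 3, "recent": True},
    {"category": "db", "keywords": {"timeout"}, "resolution_h": 20, "priority": 2, "recent": True},
    {"category": "ui", "keywords": {"render"},  "resolution_h": 2,  "priority": 1, "recent": False},
]

def cluster(tickets):
    """Group tickets by structured category plus extracted keywords."""
    groups = defaultdict(list)
    for t in tickets:
        groups[(t["category"], frozenset(t["keywords"]))].append(t)
    return groups

def urgency(group, w_trend=0.5, w_time=0.3, w_prio=0.2):
    """Weighted score from recency trend, mean resolution time,
    and mean priority; weights and normalization caps are assumed."""
    trend = sum(t["recent"] for t in group) / len(group)
    avg_time = sum(t["resolution_h"] for t in group) / len(group)
    avg_prio = sum(t["priority"] for t in group) / len(group)
    return w_trend * trend + w_time * min(avg_time / 24, 1) + w_prio * avg_prio / 5

# Clusters with the highest urgency are candidates for new FAQ entries.
ranked = sorted(cluster(tickets).items(), key=lambda kv: urgency(kv[1]), reverse=True)
```

In a live system the same scoring would be recomputed periodically so that stale FAQ entries lose rank as their underlying ticket clusters cool off.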

Keywords: FAQ system, resolution time, service incident tickets, IT system maintenance

Procedia PDF Downloads 320
9818 DLtrace: Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

With the widespread popularity of mobile devices and the development of artificial intelligence (AI), deep learning (DL) has been extensively applied in Android apps. Compared with traditional Android apps, deep learning-based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be used directly on DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Moreover, using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in Android static analyzers. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace performs more robustly than FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.
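The core idea behind taint-style tools like DLtrace and FlowDroid can be sketched as source-to-sink reachability over a call graph. The graph, the API names, and the source/sink sets below are hypothetical stand-ins, not DLtrace's actual model:

```python
# Toy call graph: edges from caller to callee (names hypothetical).
call_graph = {
    "getDeviceId": ["buildPayload"],   # source: sensitive API
    "buildPayload": ["httpPost"],
    "httpPost": [],                    # sink: network egress
    "renderUi": [],
}
SOURCES = {"getDeviceId"}
SINKS = {"httpPost"}

def leaks(graph, sources, sinks):
    """Depth-first search from each source; any path reaching a
    sink is reported as a potential sensitive-information leak."""
    found = []
    for src in sources:
        stack, seen = [(src, [src])], set()
        while stack:
            node, path = stack.pop()
            if node in sinks:
                found.append(path)
                continue
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, path + [nxt]))
    return found

paths = leaks(call_graph, SOURCES, SINKS)
```

Real analyzers additionally track data flow through fields and callbacks and, as the abstract notes, must resolve polymorphic and anonymous inner-class call targets before this reachability question can even be posed.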

Keywords: mobile computing, deep learning apps, sensitive information, static analysis

Procedia PDF Downloads 146
9817 Integrating Building Information Modeling into Facilities Management Operations

Authors: Mojtaba Valinejadshoubi, Azin Shakibabarough, Ashutosh Bagchi

Abstract:

Facilities such as residential buildings, office buildings, and hospitals house a high density of occupants, so a low-cost facility management program (FMP) should be used to provide a satisfactory built environment for them. Facility management (FM) has recently been treated as a critical task in building projects and has been effective in reducing the operation and maintenance costs of these facilities. Information integration and visualization capabilities are critical for reducing the complexity and cost of FM. Building information modeling (BIM) can be used as a strong visual modeling tool and database in FM. The main objective of this study is to examine the applicability of BIM to the FM process during a building's operational phase. For this purpose, a seven-storey office building was modeled in Autodesk Revit. The authors integrated a cloud-based environment using the visual programming tool Dynamo, in order to provide real-time cloud-based communication between facility managers and the participants involved in the project. An appropriate and effective integrated data source and visual model such as BIM can reduce a building's operational and maintenance costs by managing the building life cycle properly.

Keywords: building information modeling, facility management, operational phase, building life cycle

Procedia PDF Downloads 139
9819 Scattered Places in Stories: Singularity and Pattern in Geographic Information

Authors: I. Pina, M. Painho

Abstract:

Increased knowledge about the nature of place, and the conditions under which space becomes place, is a key factor for better urban planning and place-making. Although there is broad consensus on the relevance of this knowledge, difficulties remain in relating the theoretical framework about place to urban management, and issues related to the representation of places are among the greatest obstacles to overcoming this gap. With this critical discussion, based on a literature review, we intend to explore, within a common framework for geographical analysis, the potential of stories to spell out place meanings, bringing together qualitative text analysis and text mining in order to capture and represent both the singularity contained in each person's life history and the patterns of social processes that shape places. The development of this reasoning is grounded in extensive geographical thought about place and in theoretical advances in the field of Geographic Information Science (GISc).

Keywords: discourse analysis, geographic information science, place, place-making, stories

Procedia PDF Downloads 177