Search results for: semantic computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1453

1153 Relative Clause Attachment Ambiguity Resolution in L2: the Role of Semantics

Authors: Hamideh Marefat, Eskandar Samadi

Abstract:

This study examined the effect of semantics on the processing of ambiguous sentences containing Relative Clauses (RCs) preceded by a complex Determiner Phrase (DP) by Persian-speaking learners of L2 English with different proficiency levels and Working Memory Capacities (WMCs). The semantic relationship studied was the one between the subject of the main clause and one of the DPs in the complex DP, to see whether, as predicted by the Spreading Activation Model, priming one of the DPs through this semantic manipulation affects the L2ers' preference. The results of a task using Rapid Serial Visual Presentation (a time-controlled paradigm) showed that manipulating the relationship between the subject of the main clause and one of the DPs in the complex DP preceding the RC has no effect on the choice of antecedent; rather, the L2ers' processing is guided by phrase-structure information. Moreover, while proficiency had no effect on the participants' preferences, WMC did: those with a low WMC showed a DP1 preference. This finding supports the chunking hypothesis and the predicate proximity principle, the strategy also used by monolingual Persian speakers.

Keywords: semantics, relative clause processing, ambiguity resolution, proficiency, working memory capacity

Procedia PDF Downloads 597
1152 Keypoint Detection Method Based on Multi-Scale Feature Fusion of Attention Mechanism

Authors: Xiaoxiao Li, Shuangcheng Jia, Qian Li

Abstract:

Keypoint detection has always been a challenge in the field of image recognition. This paper proposes a novel keypoint detection method called Multi-Scale Feature Fusion Convolutional Network with Attention (MFFCNA). We verify that multi-scale features combined with an attention mechanism module have better feature expression capability. Feature fusion between different scales enriches the information the network can express and makes the network easier to converge. On our self-made street-sign corner dataset, MFFCNA achieves an accuracy of 97.8% and a recall of 81%, which are 5 and 8 percentage points higher than the HRNet network, respectively. On the COCO dataset, the AP is 71.9% and the AR is 75.3%, which are 3 and 2 points higher than HRNet, respectively. Extensive experiments show that our method brings a remarkable improvement in keypoint recognition tasks and outperforms existing methods. Moreover, our method can be applied not only to keypoint detection but also to image classification and semantic segmentation, showing good generality.

Keywords: keypoint detection, feature fusion, attention, semantic segmentation

Procedia PDF Downloads 95
1151 Recurrent Neural Networks with Deep Hierarchical Mixed Structures for Chinese Document Classification

Authors: Zhaoxin Luo, Michael Zhu

Abstract:

In natural languages, there are always complex semantic hierarchies, and obtaining feature representations based on these hierarchies is key to the success of a model. Several RNN models have recently been proposed that use latent indicators to capture the hierarchical structure of documents. However, a model that uses only a single layer of latent indicators cannot capture the true hierarchical structure of the language, especially a complex language like Chinese. In this paper, we propose a deep layered model that stacks arbitrarily many RNN layers equipped with latent indicators. By using EM and training the model hierarchically, we solve the computational problem of stacking RNN layers and make it possible to stack arbitrarily many of them. Our deep hierarchical model not only achieves results comparable to large pre-trained models on Chinese short-text classification but also achieves state-of-the-art results on Chinese long-text classification.

Keywords: natural language processing, recurrent neural network, hierarchical structure, document classification, Chinese

Procedia PDF Downloads 38
1150 Semantic Based Analysis in Complaint Management System with Analytics

Authors: Francis Alterado, Jennifer Enriquez

Abstract:

Semantic Based Analysis in Complaint Management System with Analytics is an enhanced tool that lets clients submit complaints and gives Palawan Polytechnic College a mechanism to gather, process, and monitor the status of these complaints. The study includes a mobile application that serves as a remote channel of communication between the students and the school management regarding the issues encountered by students and the resolution of every complaint received. Text mining and clustering algorithms were utilized to process the complaints. Every module of the system was tested and, based on the results, was 100% free from error before integration. System testing was also carried out by checking the expected functionality of the system, which was found to be 100% functional. The system was further tested by 10 students forwarding complaints to 10 departments. Based on the results, the students were able to submit complaints, the system processed them accordingly by identifying the department for which each complaint was intended, and the concerned department was able to give feedback on the complaint to the student. With this, the system gained a rating of 4.7, which corresponds to Excellent.
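
The abstract mentions text mining and clustering for routing complaints to the concerned department. As a rough illustration of that step only (the authors' actual algorithms, feature set, and department mapping are not described), the sketch below groups complaint texts with TF-IDF features and k-means using scikit-learn; the sample complaints and the cluster count are invented for the example.

```python
# Illustrative sketch (not the authors' implementation): grouping complaint
# texts with TF-IDF features and k-means so each cluster can be mapped to a
# concerned department (e.g., Facilities, Finance, Library).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

complaints = [
    "The projector in Room 204 has been broken for two weeks",
    "My tuition payment was recorded twice by the cashier",
    "The library closes too early during exam week",
]

# Convert free-text complaints into TF-IDF vectors.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(complaints)

# Group semantically similar complaints.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

for text, label in zip(complaints, labels):
    print(label, text[:50])
```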

Keywords: technology adoption, emerging technology, issues and challenges, algorithm, text mining, mobile technology

Procedia PDF Downloads 173
1149 Indium-Gallium-Zinc Oxide Photosynaptic Device with Alkylated Graphene Oxide for Optoelectronic Spike Processing

Authors: Seyong Oh, Jin-Hong Park

Abstract:

Recently, neuromorphic computing based on brain-inspired artificial neural networks (ANNs) has attracted a huge amount of research interest due to its ability to facilitate massively parallel, low-energy-consuming, and event-driven computing. In particular, research on artificial synapses that imitate the biological synapses responsible for human information processing and memory is in the spotlight. Here, we demonstrate a photosynaptic device wherein the synaptic weight is governed by a mixed spike consisting of a voltage spike and a light spike. Compared to the device operated only by the voltage spike, ∆G in the proposed photosynaptic device significantly increased from -2.32 nS to 5.95 nS with no degradation of nonlinearity (NL) (potentiation/depression values changed from 4.24/8 to 5/8). Furthermore, the Modified National Institute of Standards and Technology (MNIST) digit pattern recognition rates improved from 36% and 49% to 50% and 62% in ANNs consisting of synaptic devices with 20 and 100 weight states, respectively. We expect that photosynaptic device technology driven by optoelectronic spikes will play an important role in implementing future neuromorphic computing systems.

Keywords: optoelectronic synapse, IGZO (Indium-Gallium-Zinc Oxide) photosynaptic device, optoelectronic spiking process, neuromorphic computing

Procedia PDF Downloads 152
1148 Selecting Skyline Mash-Ups under Uncertainty

Authors: Aymen Gammoudi, Hamza Labbaci, Nizar Messai, Yacine Sam

Abstract:

Web Service Composition (Mash-up) has been considered a new approach to offering the user a set of Web Services responding to their request. These approaches can return a set of similar Mash-ups in a given context, which makes it hard for users to select the best one. Recent approaches focus on computing the skyline over a set of Quality of Service (QoS) attributes. However, these approaches are not sufficient in a dynamic web service environment, where the QoS delivered by a Web Service is inherently uncertain. In this paper, we treat the problem of computing the skyline over a set of similar Mash-ups based on their dimension values. We generate dimensions for each Mash-up using aggregation operations applied to the QoS attributes, and we then tackle the problem of computing the skyline under uncertain dimensions. We represent each dimension value of a Mash-up using a frame of discernment and introduce d-dominance based on Evidence Theory. Finally, we present experimental results that show both the effectiveness of the introduced skyline extensions and the efficiency of the proposed approaches.
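
The skyline idea the paper builds on can be sketched with classical Pareto dominance over aggregated QoS dimensions; the paper's actual contribution, d-dominance over uncertain dimensions represented as frames of discernment in Evidence Theory, goes beyond this sketch. The mash-up names and dimension values below are illustrative assumptions.

```python
# Minimal sketch of classical skyline (Pareto) filtering over aggregated
# QoS dimensions; the evidence-theoretic d-dominance is not reproduced here.

def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly better
    in at least one. All dimensions are assumed to be 'higher is better'."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(mashups):
    """Return the mash-ups not dominated by any other mash-up."""
    result = []
    for name, dims in mashups.items():
        if not any(dominates(other, dims)
                   for o_name, other in mashups.items() if o_name != name):
            result.append(name)
    return result

# Each mash-up described by aggregated dimensions, e.g. (availability, throughput).
mashups = {"M1": (0.99, 120), "M2": (0.95, 150), "M3": (0.90, 100)}
print(skyline(mashups))   # M3 is dominated by M1; M1 and M2 survive
```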

Keywords: web services, uncertain QoS, mash-ups, uncertain dimensions, skyline, evidence theory, d-dominance

Procedia PDF Downloads 201
1147 Story of Per-: The Radial Network of One Lithuanian Prefix

Authors: Samanta Kietytė

Abstract:

The object of this study is verbal derivatives formed with the Lithuanian prefix per-. The prefix under examination can be classified as prepositional, having descended from the preposition per and thereby sharing the same prototypical meaning, denoting movement OVER; the two frequently co-occur within sentences (1). The aim of this paper is to conduct a semantic analysis of the prefix per- and to propose a possible radial network of its meanings; in essence, the aim is to identify the interrelationships between its meanings. 1) Jis peršoko per tvorą / 3SG.NOM.M jump.PST.3 over fence.ACC.SG / ʻHe jumped over the fenceʼ. The foundation of this work lies in the methodological and theoretical framework of cognitive linguistics. The prototypical meaning of prefixes consistently embodies spatial dimensions that can be described through image schemas. This entails identifying the trajector, the landmark, and the relation between them in the situation described by the prefixed verb. The meanings of linguistic units are not perceived as arbitrary; rather, they are interconnected through semantic motivation. According to this perspective, a single meaning within a linguistic unit is considered prototypical, while additional meanings descend (not necessarily directly) from it. For example, one of the per- meanings, TRANSFER (2), is derived from the prototypical meaning OVER. 2) Prašau persiųsti vadovo laišką man / Ask.PRS.1 forward.INF manager.GEN.SG email.ACC.SG 1.SG.DAT / ʻPlease forward the manager's email to meʼ. Certain semantic relations are explained by conceptual metaphor and metonymy theory. For instance, when a prefixed verb has the meaning WIN (3), it is related to the prototypical meaning: the prefixed verb describes situations of winning in various ways, and since in the prototypical meaning the trajector moves higher than the landmark, winning is metaphorically perceived as being higher. 3) Sūnus peraugo tėvą / Son.NOM.SG outgrow.PST.3 father.ACC.SG / ʻThe son has outgrown the fatherʼ. The data for this study were collected from the 2014 grammatically annotated corpus "Lithuanian Web (LithuanianWaC v2)", consisting of 63,645,700 words. Since the corpus is grammatically lemmatized, a list of 793 items was obtained using the wordlist function, searching for verbs beginning with per. The list included not only prefixed verbs but also other verbs whose roots begin with the same letter sequence as the prefix. Words with misspellings or missing diacritics, as well as entries resulting from lemmatization errors, were rejected, leaving a total of 475 derivatives for further analysis. The semantic analysis revealed 12 distinct meanings of the prefix per-. The spatial meanings were extracted by determining what the trajector is, what the landmark is, and what the relation between them is. The connection between non-spatial and spatial meanings is established through semantic motivation, by identifying elements that correspond to the trajector and landmark. The analysis reveals that there are no strict boundaries among these meanings; instead, they form a continuum with a central core and a periphery, i.e., some derivatives are more prototypical of a particular meaning than others.

Keywords: word-formation, cognitive semantics, metaphor, radial networks, prototype theory, prefix

Procedia PDF Downloads 32
1146 DMBR-Net: Deep Multiple-Resolution Bilateral Networks for Real-Time and Accurate Semantic Segmentation

Authors: Pengfei Meng, Shuangcheng Jia, Qian Li

Abstract:

We propose a real-time, high-precision semantic segmentation network based on a multi-resolution feature fusion module, an auxiliary feature extraction module, an upsampling module, and an atrous spatial pyramid pooling (ASPP) module. We designed a feature fusion structure that integrates sufficient features from different resolutions. We also studied the effect of the side-branch structure on the network and, based on these findings, added a side-branch auxiliary feature extraction layer to improve the network's effectiveness. We further designed an upsampling module that yields better results than the original one. In addition, we reconsidered the locations and number of ASPP modules and modified the network structure according to the experimental results to further improve effectiveness. The network presented in this paper takes the BiSeNetV2 backbone as its basic network and builds improvements on top of it; we name it Deep Multiple-Resolution Bilateral Network for real-time segmentation, referred to as DMBR-Net. In experimental testing, our proposed DMBR-Net achieves 81.2% mIoU at 119 FPS on the Cityscapes validation set, 80.7% mIoU at 109 FPS on the CamVid test set, and 29.9% mIoU at 78 FPS on the COCO-Stuff test set. Compared with all lightweight real-time semantic segmentation networks, our network achieves the highest accuracy at an appropriate speed.

Keywords: multi-resolution feature fusion, atrous convolution, bilateral networks, pyramid pooling

Procedia PDF Downloads 111
1145 Effects of Bilateral Electroconvulsive Therapy on Autobiographical Memories in Asian Patients

Authors: Lai Gwen Chan, Yining Ong, Audrey Yoke Poh Wong

Abstract:

Background: The efficacy of electroconvulsive therapy (ECT) as a form of treatment for a range of mental disorders is well established. However, ECT is often associated with either temporary or persistent cognitive side effects, which limits its wider prescription. Of these, retrograde amnesia is the most commonly reported cognitive side effect. Most studies have found the deficit in recalling autobiographical memories to be short-term, although a few have reported more persistent amnesic effects. Little is known about ECT-related amnesic effects in Asian populations. Hence, this study aims to resolve conflicting findings and to better elucidate the effects of ECT on cognitive functioning in a local sample. Method: 12 patients underwent bilateral ECT under the care of the Psychological Medicine Department, Tan Tock Seng Hospital, Singapore. Participants' cognition and level of functioning were assessed at four time points: before ECT, between the third and fourth induced seizure, at the end of the whole course of ECT, and two months after the index course of ECT. Results: Global Assessment of Functioning scores increased significantly at the completion of ECT. Case-by-case analyses also revealed an overall improvement in personal semantic and autobiographical memory two months after the index course of ECT. A transient dip in both personal semantic and autobiographical memory scores was observed in one participant between the third and fourth induced seizure, but it subsequently resolved, with better performance than at baseline. Conclusions: The findings of this study suggest that ECT is an effective form of treatment for alleviating symptom severity. ECT does not adversely affect attention, language, executive functioning, or personal semantic and autobiographical memory. The findings also suggest that Asian patients may respond to bilateral ECT differently from Western samples.

Keywords: electroconvulsive therapy (ECT), autobiographical memory, cognitive impairment, psychiatric disorder

Procedia PDF Downloads 173
1144 A Review Paper on Data Security in Precision Agriculture Using Internet of Things

Authors: Tonderai Muchenje, Xolani Mkhwanazi

Abstract:

Precision agriculture uses a number of technologies, devices, protocols, and computing paradigms to optimize agricultural processes. Big data, artificial intelligence, cloud computing, and edge computing are all used to handle the huge amounts of data generated by precision agriculture. However, precision agriculture is still emerging and has a low level of security features. Furthermore, future solutions will demand data availability and accuracy as key points to help farmers, and security is important for building robust and efficient systems. Since precision agriculture comprises a wide variety and quantity of resources, security must address issues such as compatibility, constrained resources, and massive data. Moreover, conventional protection schemes used in the traditional internet may not be useful for agricultural systems, creating extra demands and opportunities. Therefore, this paper aims at reviewing the state of the art of precision agriculture security, particularly in open-field agriculture, discussing its architecture, describing security issues, and presenting the major challenges and future directions.

Keywords: precision agriculture, security, IoT, EIDE

Procedia PDF Downloads 64
1143 Cloud Monitoring and Performance Optimization Ensuring High Availability and Security

Authors: Inayat Ur Rehman, Georgia Sakellari

Abstract:

Cloud computing has evolved into a vital technology for businesses, offering scalability, flexibility, and cost-effectiveness. However, maintaining high availability and optimal performance in the cloud is crucial for reliable services. This paper explores the significance of cloud monitoring and performance optimization in sustaining the high availability of cloud-based systems. It discusses diverse monitoring tools, techniques, and best practices for continually assessing the health and performance of cloud resources. The paper also delves into performance optimization strategies, including resource allocation, load balancing, and auto-scaling, to ensure efficient resource utilization and responsiveness. Addressing potential challenges in cloud monitoring and optimization, the paper offers insights into data security and privacy considerations. Through this thorough analysis, the paper aims to underscore the importance of cloud monitoring and performance optimization for ensuring a seamless and highly available cloud computing environment.

Keywords: cloud computing, cloud monitoring, performance optimization, high availability

Procedia PDF Downloads 28
1142 Collaborative and Context-Aware Learning Approach Using Mobile Technology

Authors: Sameh Baccari, Mahmoud Neji

Abstract:

In recent years, rapid developments in mobile devices and wireless technologies have enabled a new dimension of capabilities for the learning domain. This dimension facilitates people's daily activities and shortens the distances between individuals. When these technologies are used in learning, a new paradigm emerges, giving birth to mobile learning. Because of the mobility feature, m-learning courses have to be adapted dynamically to the learner's context. The main challenge in context-aware mobile learning is to develop an approach that builds the best learning resources according to dynamic learning situations. In this paper, we propose a context-aware mobile learning system called the Collaborative and Context-aware Mobile Learning System (CCMLS). It takes into account the requirements of mobility, collaboration, and context-awareness. The system is based on semantic modeling of the learning context and the learning content. The adaptation part of this approach is made up of adaptation rules that propose and select relevant resources, learning partners, and learning activities based not only on the user's needs but also on the user's current context.

Keywords: mobile learning, mobile technologies, context-awareness, collaboration, semantic web, adaptation engine, adaptation strategy, learning object, learning context

Procedia PDF Downloads 280
1141 Development of Web-Based Remote Desktop to Provide Adaptive User Interfaces in Cloud Platform

Authors: Shuen-Tai Wang, Hsi-Ya Chang

Abstract:

Cloud virtualization technologies are becoming more and more prevalent, and cloud users usually encounter the problem of how to access virtualized remote desktops easily over the web without installing special clients. To resolve this issue, we took advantage of HTML5 technology and developed a web-based remote desktop. It permits users to access, from anywhere, a terminal running in our cloud platform. We implemented a sketch of a web interface following the cloud computing concept, which seeks to enable collaboration and communication among users for high-performance computing. Remote desktop virtualization allows the user's desktop to be shifted from the traditional PC environment to the cloud platform, where it is stored on a remote virtual machine rather than locally. This proposed effort has the potential to provide an efficient, resilient, and elastic environment for online cloud services. This is also made possible by low administrative costs as well as relatively inexpensive end-user terminals and reduced energy expenses.

Keywords: virtualization, remote desktop, HTML5, cloud computing

Procedia PDF Downloads 314
1140 Cloud-Based Dynamic Routing with Feedback in Formal Methods

Authors: Jawid Ahmad Baktash, Mursal Dawodi, Tomokazu Nagata

Abstract:

With the rapid growth of cloud computing, formal methods have become a good choice for the refinement of message specification and verification for dynamic routing in the cloud. Cloud-based dynamic routing is becoming increasingly popular. We propose feedback in formal methods for dynamic routing in cloud computing; the model and topologies show how to send messages from index zero to all others formally. The responsibility of proper verification becomes crucial with dynamic routing in the cloud. Formal methods can play an essential role in the routing and development of networks and in the testing of distributed systems. Event-B is a formal technique that consists of describing the problem rigorously and introducing solutions or details in refinement steps. Event-B is a variant of B designed for developing distributed systems and the message passing of dynamic routing. In Event-B and formal methods, events consist of guarded actions that occur spontaneously rather than being invoked.

Keywords: cloud, dynamic routing, formal method, Pro-B, event-B

Procedia PDF Downloads 388
1139 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment

Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments in order to achieve simultaneous localization and environment map construction, which has a wide range of applications in autonomous driving, virtual reality, and other related fields. Existing VSLAM systems can maintain high accuracy in static environments. In dynamic environments, however, moving objects in the scene reduce the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even failure. In this paper, a robust VSLAM method is proposed to deal effectively with dynamic environments. We propose a dynamic region removal scheme based on a semantic segmentation neural network and geometric constraints. First, a semantic extraction neural network is used to extract the prior active-motion region, prior static region, and prior passive-motion region in the environment. Then, a lightweight frame-tracking module initializes the transform pose between the previous frame and the current frame on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow is used to divide the environment into static and dynamic regions, so that the dynamic object regions are successfully eliminated. Finally, only the static region is used by the tracking thread. Our research is built on ORB-SLAM3, one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORB-SLAM3 by 70%–98.5% in highly dynamic environments.
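
One ingredient of the described pipeline, discarding features that fall inside prior dynamic regions before tracking, can be sketched as follows. This is a simplified stand-in, not the authors' ORB-SLAM3 integration, and the motion-consistency check based on multi-view geometry and scene flow is omitted; the mask format and ORB parameters are assumptions.

```python
# Simplified sketch: keep only ORB keypoints that fall inside the prior
# static region before handing them to the tracking thread.
import cv2
import numpy as np

def static_keypoints(frame_gray, dynamic_mask):
    """dynamic_mask: uint8 array, non-zero where the semantic network labels
    prior active/passive motion classes (e.g. person, car)."""
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
    if descriptors is None:                      # no features detected
        return [], np.empty((0, 32), np.uint8)
    kept_kp, kept_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if dynamic_mask[y, x] == 0:              # outside the prior dynamic region
            kept_kp.append(kp)
            kept_desc.append(desc)
    return kept_kp, np.array(kept_desc)
```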

Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM

Procedia PDF Downloads 78
1138 Performance Evaluation of Fingerprint, Auto-Pin and Password-Based Security Systems in Cloud Computing Environment

Authors: Emmanuel Ogala

Abstract:

Cloud computing has been envisioned as the next-generation architecture of the Information Technology (IT) enterprise. In contrast to traditional solutions, where IT services are under physical, logical, and personnel controls, cloud computing moves the application software and databases to large data centres, where the management of the data and services may not be fully trustworthy. This is because such systems are open to the whole world, and many people try day in and day out to gain unauthorized access to them. This research contributes to the improvement of cloud computing security for better operation. The work is motivated by two problems: first, the observed ease of access to cloud computing resources and the complexity of attacks on the vital cloud computing data system NIC require that dynamic security mechanisms evolve to stay capable of preventing illegitimate access; second, there is a lack of a good methodology for the performance testing and evaluation of biometric security algorithms for securing records in a cloud computing environment. The aim of this research was to evaluate the performance of an integrated security system (ISS) for securing exam records in a cloud computing environment. We designed and implemented an ISS consisting of three security mechanisms, biometric (fingerprint), auto-PIN, and password, combined into one stream of access control and used for securing examination records at Kogi State University, Anyigba. The system we built overcomes the guessing attacks of hackers who guess people's passwords or PINs, because the added security layer (fingerprint) requires the presence of the user before login access can be granted: the user must place a finger on the fingerprint biometric scanner for capture and verification to confirm authenticity. The study adopted a quantitative design and an object-oriented design methodology. In the analysis and design, PHP, HTML5, CSS, JavaScript, and Web 2.0 technologies were used to implement the ISS model for the cloud computing environment: PHP, HTML5, and CSS were used in conjunction with Visual Studio as the front-end design tool, MySQL and Access 7.0 were used for the back-end engine, and JavaScript was used for object arrangement and validation of user input for security checks. Finally, the performance of the developed framework was evaluated by comparison with two other existing security systems (auto-PIN and password) within the school, and the results showed that the developed approach (fingerprint) overcomes the two main weaknesses of the existing systems and will work well if fully implemented.
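
As a toy sketch of the access-control stream described above (not the authors' PHP/MySQL implementation), the snippet below chains the three factors, password, auto-PIN, and a fingerprint verification result, so that guessing a password or PIN alone can never grant access; the credentials and hashing scheme are invented for illustration.

```python
# Toy sketch of chaining three factors into one access-control stream.
import hashlib

def sha256(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Stored credentials would normally live in the database back end.
record = {"pwd_hash": sha256("s3cret"), "pin_hash": sha256("4821")}

def grant_access(password, pin, fingerprint_verified):
    """fingerprint_verified: result of the scanner-side template match;
    a guessed password/PIN alone can never satisfy this last factor."""
    return (sha256(password) == record["pwd_hash"]
            and sha256(pin) == record["pin_hash"]
            and fingerprint_verified)

print(grant_access("s3cret", "4821", fingerprint_verified=True))   # True
print(grant_access("s3cret", "4821", fingerprint_verified=False))  # False
```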

Keywords: performance evaluation, fingerprint, auto-pin, password-based, security systems, cloud computing environment

Procedia PDF Downloads 114
1137 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia

Authors: Tim Nedyalkov

Abstract:

A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its Information Technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases the reliance on cloud-based services, and collecting, managing, and retaining large amounts of data in cloud environments makes information security and data privacy protection essential. It becomes even more important to understand what key factors drive successful cloud adoption following the commencement of the Privacy Amendment (Notifiable Data Breaches) Act 2017 in Australia, as the regulatory changes impact many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success and influencing the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrated that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers seeking to meet rapidly changing regulatory and compliance requirements.

Keywords: cloud compliance, cloud security, data governance, privacy protection

Procedia PDF Downloads 94
1136 Intrusion Detection in Cloud Computing Using Machine Learning

Authors: Faiza Babur Khan, Sohail Asghar

Abstract:

With the emergence of distributed environments, cloud computing is proving to be the most stimulating paradigm shift in computer technology, resulting in spectacular expansion in the IT industry. Many companies have augmented their technical infrastructure by adopting cloud resource-sharing architectures. Cloud computing has opened doors to unlimited opportunities, from application and platform availability to expandable storage and the provision of computing environments. From a security viewpoint, however, clouds introduce an added level of risk, weakening protection mechanisms and making it harder to guarantee privacy, data security, and on-demand service. Issues of trust, confidentiality, and integrity are elevated due to the multi-tenant resource-sharing architecture of the cloud. Trust, or reliability, of the cloud refers to its capability of providing the needed services precisely and unfailingly. Confidentiality is the ability of the architecture to ensure that only authorized parties access private data, and integrity guarantees that data cannot be fabricated by an unauthorized user. In order to assure the provision of a secure cloud, a roadmap or model is therefore needed to analyze security problems, design mitigation strategies, and evaluate solutions. The aim of the paper is twofold: first, to highlight the factors that make cloud security critical, along with alleviation strategies; and second, to propose an intrusion detection model that identifies attackers in a preventive way using a machine learning Random Forest classifier with an accuracy of 99.8%. The model uses a small number of features, and a comparison with other classifiers is also presented.
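
The classification stage of the proposed model can be sketched with scikit-learn's RandomForestClassifier. The dataset, feature count, and class balance below are placeholders (synthetic data stands in for real intrusion traffic), so the reported 99.8% accuracy is not reproduced here.

```python
# Hedged sketch of the classification stage only: a Random Forest trained
# on labeled network-traffic features; make_classification stands in for
# real intrusion data (e.g. flow statistics).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           weights=[0.9, 0.1], random_state=42)  # ~10% "attack"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```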

Keywords: cloud security, threats, machine learning, random forest, classification

Procedia PDF Downloads 294
1135 Arabic Light Word Analyser: Roles with Deep Learning Approach

Authors: Mohammed Abu Shquier

Abstract:

This paper introduces a word segmentation method using a novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also takes into account the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which together justify updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word using lexical rules, which are mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications. It is therefore necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models; these models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using Recurrent Neural Networks (RNNs). Given that morphological analysis coverage has been very low for Dialectal Arabic, it is important to investigate deeply how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that dialectal variability can help improve analysis.

Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN

Procedia PDF Downloads 9
1134 Problem of Services Selection in Ubiquitous Systems

Authors: Malika Yaici, Assia Arab, Betitra Yakouben, Samia Zermani

Abstract:

Ubiquitous computing is nowadays a reality through the networking of a growing number of computing devices. It makes it possible to provide users with context-aware information and services in a heterogeneous environment, anywhere and anytime. Selecting the best context-aware service among many available services and providers is a tedious problem. In this paper, a service selection method based on the Constraint Satisfaction Problem (CSP) formalism is proposed. The services are considered as variables and domains, and the user context, preferences, and provider characteristics are considered as constraints. The backtracking algorithm is used to solve the problem, i.e., to find the service and provider that best match the user requirements. Even though this algorithm has exponential complexity, its use guarantees that the service best matching the user requirements will be found. A comparison of the proposed method with existing solutions concludes the paper.
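
The formulation described above, services as variables, candidate providers as domains, and user context and preferences as constraints solved by backtracking, can be sketched as follows. The concrete services, providers, and constraints are illustrative assumptions, not taken from the paper.

```python
# Minimal CSP sketch: services as variables, candidate providers as domains,
# user context/preferences as constraints, solved by chronological backtracking.
def backtrack(assignment, variables, domains, constraints):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = dict(assignment, **{var: value})
        if all(c(candidate) for c in constraints):
            result = backtrack(candidate, variables, domains, constraints)
            if result is not None:
                return result
    return None  # exponential in the worst case, but complete

variables = ["printing", "navigation"]
domains = {"printing":   [("ProviderA", 2.0), ("ProviderB", 0.5)],
           "navigation": [("ProviderA", 1.0), ("ProviderC", 3.0)]}
constraints = [
    # user preference: total cost of assigned services within budget
    lambda a: sum(v[1] for v in a.values()) <= 3.0,
]
print(backtrack({}, variables, domains, constraints))
```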

Keywords: ubiquitous computing, services selection, constraint satisfaction problem, backtrack algorithm

Procedia PDF Downloads 211
1133 Elemental Graph Data Model: A Semantic and Topological Representation of Building Elements

Authors: Yasmeen A. S. Essawy, Khaled Nassar

Abstract:

With the rapid increase of complexity in the building industry, professionals in the A/E/C industry have been forced to adopt Building Information Modeling (BIM) in order to enhance communication between the different project stakeholders throughout the project life cycle and to create a semantic, object-oriented building model that can support geometric-topological analysis of building elements during design and construction. This paper presents a model that extracts topological relationships and geometrical properties of building elements from an existing fully designed BIM and maps this information into a directed acyclic Elemental Graph Data Model (EGDM). The model incorporates BIM-based search algorithms for automatic deduction of geometrical data and topological relationships for each building element type. Using graph search algorithms, such as Depth-First Search (DFS) and topological sorting, all possible construction sequences can be generated and compared against production and construction rules to generate an optimized construction sequence and its associated schedule. The model is implemented on a C# platform.
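
The sequencing step, deriving a feasible construction order from the directed acyclic element graph, reduces to a topological sort. The paper's implementation is in C#; the sketch below shows the same idea in Python with invented element names and dependencies.

```python
# Topologically sorting the directed acyclic element graph to obtain a
# feasible construction sequence (element names are illustrative).
from graphlib import TopologicalSorter  # Python 3.9+

# mapping: element -> set of elements that must be built before it
egdm = {
    "slab_L1":    {"column_L1"},
    "column_L1":  {"foundation"},
    "wall_L1":    {"slab_L1"},
    "foundation": set(),
}

sequence = list(TopologicalSorter(egdm).static_order())
print(sequence)  # e.g. ['foundation', 'column_L1', 'slab_L1', 'wall_L1']
```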

Keywords: building information modeling (BIM), elemental graph data model (EGDM), geometric and topological data models, graph theory

Procedia PDF Downloads 348
1132 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis of the historical development of negative emotive intensifiers in the Hungarian language was carried out using NLP methods. Intensifiers are linguistic elements that modify or reinforce a variable character in the lexical unit they apply to. Intensifiers therefore appear with other lexical items, such as adverbs, adjectives, and verbs, and infrequently with nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), many lexical items can operate as intensifiers, and the group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, a special group of intensifiers is particularly interesting: the so-called negative emotive intensifiers, which, on their own and without context, have semantic content that can be associated with negative emotion, but in particular cases may function as intensifiers (e.g., borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers have scarcely been examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, the authors exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process it. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research revealed in detail how these words have proceeded through grammaticalization over time, i.e., how they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). It was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
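
The final frequency/collocation step can be sketched as a simple count of which adjectives each negative emotive intensifier modifies in a lemmatized, POS-tagged corpus. The token format below is an assumption for illustration, not the actual output of the 'magyarlanc' toolkit, and the two sample sentences are invented.

```python
# Toy sketch: counting intensifier + adjective collocations in a
# lemmatized, POS-tagged corpus.
from collections import Counter

intensifiers = {"borzasztóan", "rettenetesen"}   # from the compiled lexicon
corpus = [  # (lemma, POS) pairs per sentence
    [("ez", "PRON"), ("borzasztóan", "ADV"), ("jó", "ADJ")],
    [("a", "DET"), ("film", "NOUN"), ("rettenetesen", "ADV"), ("unalmas", "ADJ")],
]

collocations = Counter()
for sentence in corpus:
    for (lemma, pos), nxt in zip(sentence, sentence[1:]):
        if lemma in intensifiers and nxt[1] == "ADJ":
            collocations[(lemma, nxt[0])] += 1

print(collocations.most_common())
```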

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 200
1131 Cost-Based Analysis of Cloud and Traditional ERP Systems in Small and Medium Enterprises

Authors: Indu Saini, Ashu Khanna, S. K. Peddoju

Abstract:

Cloud computing is the new buzzword today, attracting high interest across various business domains, particularly among Small and Medium Enterprises (SMEs). As it is a pay-per-use model, SMEs have high expectations that adopting this model will make them not only flexible and hassle-free but also economical. In view of such expectations, this paper analyses the possibility of adopting cloud computing technologies in SMEs in light of economic concerns. Two hypotheses are developed to compare the average annual per-user costs of using Enterprise Resource Planning systems in two ways: the traditional approach and the cloud approach. A web-based survey is conducted, alongside interviews with peers, to collect data across the selected SMEs, and a t-test is performed to compare the two technologies against the proposed hypotheses. The results achieved are presented and discussed.
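
The statistical comparison described above amounts to an independent-samples t-test on per-user cost figures; a minimal sketch with SciPy follows, using made-up cost numbers purely for illustration.

```python
# Sketch of the hypothesis test: comparing average annual per-user ERP costs
# under the traditional and cloud approaches. The cost figures are invented.
from scipy import stats

traditional = [520, 610, 580, 640, 555, 600]   # per-user cost for each SME
cloud       = [310, 280, 350, 295, 330, 320]

t_stat, p_value = stats.ttest_ind(traditional, cloud, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would support the hypothesis that the two approaches
# differ in average annual per-user cost.
```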

Keywords: cloud computing, small and medium enterprises, enterprise resource solutions, interviews

Procedia PDF Downloads 307
1130 Investigating the Associative Network of Color Terms among Turkish University Students: A Cognitive-Based Study

Authors: R. Güçlü, E. Küçüksakarya

Abstract:

Word association (WA) gives the broadest information on how knowledge is structured in the human mind. Cognitive linguistics, psycholinguistics, and applied linguistics are the disciplines that consider WA tests substantial for gaining insight into the very nature of the human cognitive system and semantic knowledge. In this study, Berlin and Kay's 11 basic color terms (1969) are presented as stimulus words to a total of 300 Turkish university students. The responses are analyzed according to Fitzpatrick's model (2007), which comprises four categories: meaning-based responses, position-based responses, form-based responses, and erratic responses. In line with the findings, the responses to free association tests are expected to give substantial information about Turkish university students' psychological structuring of vocabulary, especially the morpho-syntactic and semantic relationships among words. To conclude, theoretical and practical implications are discussed to provide an in-depth evaluation of how associations of basic color terms are represented in the mental lexicon of Turkish university students.

Keywords: color term, gender, mental lexicon, word association task

Procedia PDF Downloads 96
1129 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

Default accounts hold the view that there exists a kind of scalar implicature that can be processed without context and that enjoys a psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need for relevance in discourse. However, in Katsos' experiments, although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and with ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation in utterances with lexical scales as much more severe than with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, there are two questionable points: (1) Is it possible that the strange discrepancy is due to factors other than the generation of the scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, or do they just compare semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be carried out under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test materials will be transformed into written words in DMDX, and the target sentence will be shown word by word to participants in the soundproof room in our lab. The reading time of the target parts, i.e., the words containing scalar implicatures, will be recorded. We presume that in the group with lexical scales, a standardized, pragmatically driven mental context will help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative; thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. In the group with ad hoc scales, however, the scalar implicature may hardly be generated without the support of a fixed mental context of the scale; thus, whether the new input is informative or not will not matter, and the reading time of the target parts will be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? Based on our experiments, we may be able to assume that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may interact dynamically in the mind. As for the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as scale members in mental context, and thus the lexical-semantic associations of the objects may prevent the pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 292
1128 Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind’s Eye Based on Individual’s Semantic Knowledge Base

Authors: Joanna Wielochowska, Aneta Wielochowska

Abstract:

Whilst the relationship between scientific areas such as cognitive psychology, neurobiology, and philosophy of mind has been emphasized in recent decades of scientific research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The object of the following case study is to describe, analyze, and illustrate the nature and characteristics of a certain cognitive experience which appears to display features of synaesthesia, or rather high-level synaesthesia (ideasthesia). The research has been conducted on the two authors, monozygotic twins (both polysynaesthetes) experiencing involuntary associations of identical nature. The authors attempted to identify which cognitive and conceptual dependencies may guide this experience. Operating on self-introduced nomenclature, the described phenomenon, multi-dimensional processing of textual and visual information, refers to a relationship that involuntarily and immediately couples content introduced by means of text or image with a sensation of appearing in a certain place in the mind's eye. More precisely: (I) defining a concept introduced by means of textual content during the activity of reading or writing, or (II) defining a concept introduced by means of visual content during the activity of looking at image(s), is accompanied by a simultaneous sensation of being allocated to a given place in the mind's eye. A place can then be defined as a cognitive representation of a certain concept. While processing information, a person has an immediate and involuntary feeling of appearing in a certain place themselves, just like a character in a story, 'observing' a venue or scenery from one or more perspectives and angles. That forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at. We came to the conclusion that semantic allocations to a given place can be divided and classified into categories and subcategories and are naturally linked to an individual's semantic knowledge base. A place can be defined as a representation of one's unique idea of a given concept that has been established in their semantic knowledge base. The multi-level structure of the selectivity of places in the mind's eye, as a reaction to given information (one stimulus), draws comparisons to structures and patterns found in botany. Double-flowered varieties of flowers and the whorl arrangement characteristic of the components of some flower species were given as illustrative examples. A composition of petals that fan out from one single point and wrap around a stem inspired the idea that, just as in nature, in philosophy of mind there are patterns driven by a logic specific to a given phenomenon. The study intertwines terms perceived through the philosophical lens, such as the definition of meaning, the subjectivity of meaning, the mental atmosphere of places, and others. The analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence the way the human semantic knowledge base, and the processing of given content in terms of distinguishing between information and meaning, are researched.

Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia

Procedia PDF Downloads 103
1127 Cellular Automata Using Fractional Integral Model

Authors: Yasser F. Hassan

Abstract:

In this paper, a proposed model of cellular automata is studied by means of a fractional integral function. A cellular automaton is a decentralized computing model that provides an excellent platform for performing complex computation with the help of only local information. The paper discusses how a fractional integral function can be used to represent cellular automaton memory or state. The architecture of the computing and learning model is given, along with the results of calibrating the approach.
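
As a toy illustration of the general idea (the paper's exact formulation is not reproduced), the sketch below runs a 1D binary cellular automaton in which each cell's effective state is a fractionally weighted memory of its history, using Grünwald-Letnikov-style coefficients, before a simple local rule is applied; the rule, threshold, and fractional order are assumptions.

```python
# Toy sketch: a 1D binary cellular automaton with fractional-order memory.
import numpy as np

def gl_weights(alpha, n):
    """|w_k| for (1 - z)^alpha, giving slowly decaying memory weights."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1 - (alpha + 1) / k))
    return np.abs(w)

def step(history, alpha=0.5):
    """history: list of past state arrays, oldest first, most recent last."""
    w = gl_weights(alpha, len(history))[::-1]          # largest weight on newest
    memory = np.tensordot(w, np.array(history), axes=1) / w.sum()
    m = (memory > 0.5).astype(int)                     # thresholded memory state
    # simple majority-style local rule applied to the memory state
    return ((np.roll(m, 1) + m + np.roll(m, -1)) >= 2).astype(int)

history = [np.random.randint(0, 2, 32)]
for _ in range(20):
    history.append(step(history))
print(history[-1])
```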

Keywords: fractional integral, cellular automata, memory, learning

Procedia PDF Downloads 383
1126 The Neurofunctional Dissociation between Animal and Tool Concepts: A Network-Based Model

Authors: Skiker Kaoutar, Mounir Maouene

Abstract:

Neuroimaging studies have shown that animal and tool concepts rely on distinct networks of brain areas: animal concepts depend predominantly on temporal areas, while tool concepts rely on fronto-temporo-parietal areas. However, the origin of this neurofunctional distinction for processing animal and tool concepts remains unclear. Here, we address this question from a network perspective, suggesting that the neural distinction between animals and tools might reflect differences in their structural semantic networks. We build semantic networks for animal and tool concepts derived from the behavioral study by McRae and colleagues, conducted on a large number of participants. These two networks are then analyzed through a number of graph-theoretical measures of small-worldness: centrality, clustering coefficient, and average shortest path length, as well as resistance to random and targeted attacks. The results indicate that both the animal and the tool network have small-world properties. More importantly, the animal network is more vulnerable to targeted attacks than the tool network, a result that correlates with brain lesion studies.
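
The kind of analysis described, small-world measures plus a targeted attack that removes the highest-degree nodes first, can be sketched with NetworkX; a generated small-world graph stands in here for the McRae-norm-derived animal and tool networks.

```python
# Sketch: small-world measures and a targeted (highest-degree-first) attack.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)  # stand-in graph

print("clustering:", nx.average_clustering(G))
print("avg shortest path:", nx.average_shortest_path_length(G))

def targeted_attack(graph, fraction=0.2):
    """Remove the top-degree nodes and report the surviving giant component."""
    g = graph.copy()
    by_degree = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
    for node, _ in by_degree[: int(fraction * g.number_of_nodes())]:
        g.remove_node(node)
    return len(max(nx.connected_components(g), key=len)) / graph.number_of_nodes()

print("giant component after attack:", targeted_attack(G))
```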

Keywords: animals, tools, network, semantics, small-world, resilience to damage

Procedia PDF Downloads 517
1125 Network Connectivity Knowledge Graph Using Dwave Quantum Hybrid Solvers

Authors: Nivedha Rajaram

Abstract:

Hybrid quantum solvers have recently received prime focus in industrial problem-solving applications. D-Wave quantum computers are one such example of systems built using the quantum annealing mechanism. The Discrete Quadratic Model (DQM) is a hybrid quantum computing model class supplied by the D-Wave Ocean SDK, a real-time software platform for hybrid quantum solvers. These hybrid quantum computing modellers can be employed to solve classic problems. One such problem that we consider in this paper is finding a network connectivity knowledge hub in a huge network of systems. Using this quantum solver, we try to find the prime system hub, which acts as the supreme connection point for the set of connected computers in a large network. This paper establishes an innovative approach to generating a connectivity system hub plot for a set of systems using D-Wave Ocean SDK hybrid quantum solvers.
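
A hedged sketch of how such a hub-selection problem might be encoded as a DQM with the Ocean SDK is given below; the formulation (reward a node's degree, penalize choosing more than one hub), the toy network, and the penalty value are assumptions rather than the paper's exact model, and running it requires a D-Wave Leap account and API token.

```python
# Illustrative DQM formulation (an assumption, not the paper's exact model):
# pick one "hub" node that maximises direct connectivity. Each node is a
# discrete variable with two cases (0 = not hub, 1 = hub).
import itertools
import dimod
from dwave.system import LeapHybridDQMSampler

edges = [("srv1", "srv2"), ("srv1", "srv3"), ("srv1", "srv4"), ("srv3", "srv4")]
nodes = sorted({n for e in edges for n in e})
degree = {n: sum(n in e for e in edges) for n in nodes}

dqm = dimod.DiscreteQuadraticModel()
for n in nodes:
    dqm.add_variable(2, label=n)                 # cases: 0 (not hub), 1 (hub)
    dqm.set_linear_case(n, 1, -degree[n])        # reward connectivity

penalty = len(edges) + 1
for u, v in itertools.combinations(nodes, 2):    # discourage more than one hub
    dqm.set_quadratic_case(u, 1, v, 1, penalty)

sampleset = LeapHybridDQMSampler().sample_dqm(dqm, label="hub-selection")
best = sampleset.first.sample
print([n for n in nodes if best[n] == 1])        # expected: ['srv1']
```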

Keywords: quantum computing, hybrid quantum solver, DWave annealing, network knowledge graph

Procedia PDF Downloads 91
1124 From E-Government to Cloud-Government Challenges of Jordanian Citizens' Acceptance for Public Services

Authors: Abeer Alkhwaldi, Mumtaz Kamala

Abstract:

Since the inception of the third millennium, there has been much evidence that cloud technologies have become the strategic trend for many governments, not only in developed countries (e.g., the UK, Japan, and the USA) but also in developing countries (e.g., Malaysia and the Middle East region), which have launched cloud computing movements for enhanced standardization of IT resources, cost reduction, and more efficient public services. Cloud-based e-government services are therefore considered one of the high priorities for government agencies in Jordan. Despite their phenomenal evolution, government cloud services still suffer from the adoption challenges of e-government initiatives (e.g., technological, human, social, and financial), which need to be considered carefully by governments contemplating their implementation. This paper presents a pilot study investigating citizens' perception of the extent to which these challenges affect the acceptance and use of cloud computing in the Jordanian public sector. Based on the analysis of data collected through an online survey, some important challenges were identified. The results can help guide the successful acceptance of cloud-based e-government services in Jordan.

Keywords: challenges, cloud computing, e-government, acceptance, Jordan

Procedia PDF Downloads 403