Search results for: fuzzy semantic annotation

342 Applications of Artificial Neural Networks in Civil Engineering

Authors: Naci Büyükkaracığan

Abstract:

Artificial neural networks (ANN) are computational models inspired by the structure and working principles of the human brain's nervous system. They have been the subject of an active field of research that has matured greatly over the past 55 years and are now used in many fields, where they have been observed to give particularly good results in optimization and control systems. Many of the areas that form the subject of civil engineering applications require optimization and control. In this study, the artificial intelligence systems most widely used in the solution of civil engineering problems are first examined in terms of their basic principles and technical aspects. Finally, a literature review of applications in the field of civil engineering is conducted, and the artificial intelligence techniques used, the studies, and their results are reported.

Keywords: artificial neural networks, civil engineering, Fuzzy logic, statistics

Procedia PDF Downloads 404
341 A Proposed Framework for Software Redocumentation Using Distributed Data Processing Techniques and Ontology

Authors: Laila Khaled Almawaldi, Hiew Khai Hang, Sugumaran A. l. Nallusamy

Abstract:

Legacy systems are crucial for organizations, but their intricacy and lack of documentation pose challenges for maintenance and enhancement. Redocumentation of legacy systems is vital for automatically or semi-automatically creating documentation for software lacking sufficient records. It aims to enhance system understandability, maintainability, and knowledge transfer. However, existing redocumentation methods need improvement in data processing performance and document generation efficiency. This stems from the necessity to efficiently handle the extensive and complex code of legacy systems. This paper proposes a method for semi-automatic legacy system re-documentation using semantic parallel processing and ontology. Leveraging parallel processing and ontology addresses current challenges by distributing the workload and creating documentation with logically interconnected data. The paper outlines challenges in legacy system redocumentation and suggests a method of redocumentation using parallel processing and ontology for improved efficiency and effectiveness.

Keywords: legacy systems, redocumentation, big data analysis, parallel processing

Procedia PDF Downloads 40
340 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expression in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck over long input sequences. In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large numbers of training examples, which may be difficult to compile in many real-world gene circuit designs.
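
As an illustration of the setup described above, the following is a minimal, self-contained sketch of a BERT-style sequence classifier over k-mer tokenized DNA, written with the Hugging Face Transformers library. It is not the authors' model: the toy sequences, labels, vocabulary, and the freshly initialized configuration stand in for a pretrained DNABERT-like checkpoint.

```python
# Minimal sketch (not the authors' code): a small BERT-style encoder with a
# classification head over k-mer tokenized DNA sequences. The vocabulary,
# sequences, and labels are toy placeholders; in practice one would fine-tune
# a pretrained DNABERT-like checkpoint instead of a fresh configuration.
import torch
from transformers import BertConfig, BertForSequenceClassification

def kmers(seq, k=6):
    """Split a DNA string into overlapping k-mers (the tokenization DNABERT uses)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

sequences = ["ATGCGTACCGTA", "TTAGGCATTGCA"]   # toy gene-circuit sequences
labels = torch.tensor([1, 0])                  # e.g. expressed / not expressed

# toy k-mer vocabulary (id 0 reserved for padding)
vocab = {km: i + 1 for i, km in enumerate(sorted({km for s in sequences for km in kmers(s)}))}
ids = [[vocab[km] for km in kmers(s)] for s in sequences]
max_len = max(len(x) for x in ids)
input_ids = torch.tensor([x + [0] * (max_len - len(x)) for x in ids])
attention_mask = (input_ids != 0).long()

config = BertConfig(vocab_size=len(vocab) + 1, hidden_size=64, num_hidden_layers=2,
                    num_attention_heads=4, intermediate_size=128, num_labels=2)
model = BertForSequenceClassification(config)

out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
out.loss.backward()                            # one illustrative training step
print(out.loss.item(), out.logits.shape)
```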

Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers

Procedia PDF Downloads 58
339 Gestural Pragmatic Inference among Primates: An Experimental Approach

Authors: Siddharth Satishchandran, Brian Khumalo

Abstract:

Humans are able to derive semantic content from syntactic and pragmatic sources. Multimodal evidence from signaling theory, which examines communication between individuals within and across species, suggests that non-human primates possess similar syntactic and pragmatic capabilities. However, the extent remains unknown because primate pragmatics are relatively under-examined. Our paper reviews research within communication theory amongst non-human primates to understand current theoretical trends. We examine evidence for primate pragmatic capacities through observational, experimental, and theoretical work on gestures. Given fragmented theoretical perspectives, we provide a unified framework of communication for future research that contextualizes the available research under code biology. To achieve this, we rely on biological semiotics (biosemiotics), the philosophy of biology investigating prelinguistic meaning-making as a function of signs and codes. We close by discussing areas of potential research for studying gestural pragmatics amongst non-human primates, particularly chimpanzees (Pan troglodytes), Diana monkeys (Cercopithecus diana), and other potential candidates.

Keywords: pragmatics, non-human primates, gestural communication, biological semiotics

Procedia PDF Downloads 34
338 The Influence of Emotional Intelligence Skills on Innovative Start-Ups Coaching: A Neuro-Management Approach

Authors: Alina Parincu, Giuseppe Empoli, Alexandru Capatina

Abstract:

The purpose of this paper is to identify which emotional intelligence skills of 20 business innovation coaches are the most influential predictors of the co-creation of knowledge through coaching services delivered to innovative start-ups from Europe, funded through Horizon 2020 – SME Instrument. We considered the emotional intelligence skills (self-awareness, self-regulation, motivation, empathy, and social skills) as antecedent conditions of the outcome, the quality of coaching services as perceived by the entrepreneurs who received funding within the SME Instrument, using a fuzzy-set qualitative comparative analysis (fsQCA) approach. The findings reveal that emotional intelligence skills, trained with neuro-management techniques, were associated with increased goal-focused business coaching skills.
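
For readers unfamiliar with fsQCA, the sketch below shows the basic set-theoretic measures on which such an analysis rests: the consistency and coverage of a fuzzy-set condition (e.g., high empathy) as sufficient for an outcome (e.g., high perceived coaching quality). The membership scores are invented for illustration and are not the study's data.

```python
# Illustrative fsQCA-style measures (not the paper's analysis): consistency and
# coverage of a fuzzy condition X as "sufficient" for the outcome Y.
def consistency(x, y):
    # sufficiency consistency: sum(min(x_i, y_i)) / sum(x_i)
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    # coverage: sum(min(x_i, y_i)) / sum(y_i)
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# hypothetical calibrated memberships for "high empathy" (X) and
# "high perceived coaching quality" (Y) across five coaches
empathy = [0.9, 0.7, 0.4, 0.8, 0.2]
quality = [0.8, 0.9, 0.5, 0.7, 0.3]

print(consistency(empathy, quality), coverage(empathy, quality))
```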

Keywords: neuro-management, innovative start-ups, business coaching, fsQCA

Procedia PDF Downloads 167
337 Application of Fuzzy Multiple Criteria Decision Making for Flooded Risk Region Selection in Thailand

Authors: Waraporn Wimuktalop

Abstract:

This research selects regions that are vulnerable to flooding at different levels. Mathematical principles are systematically and rationally utilized as a tool to solve the region-selection problem. The method chosen is Multiple Criteria Decision Making (MCDM), with two analysis techniques: TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and AHP (Analytic Hierarchy Process). Three criteria are considered in this research. The first criterion is climate, represented by rainfall. The second criterion is geography, represented by the height above mean sea level. The last criterion is land utilization, covering both forest and agricultural use. The study found that the South has the highest risk of flooding, followed by the East, the Centre, the North-East, the West, and the North, respectively.
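
The sketch below illustrates the TOPSIS step of such an analysis with a toy decision matrix; the regions, criterion values, and AHP-style weights are invented for illustration and are not the study's data.

```python
# Minimal TOPSIS sketch (illustrative data): rank regions by flood risk from a
# decision matrix of criterion scores and externally derived (e.g. AHP) weights.
import numpy as np

# rows = candidate regions, columns = criteria (rainfall, elevation, land-use score)
X = np.array([
    [1800.0, 15.0, 0.7],
    [1200.0, 90.0, 0.4],
    [1500.0, 40.0, 0.6],
])
weights = np.array([0.5, 0.3, 0.2])       # e.g. from AHP pairwise comparisons
benefit = np.array([True, False, True])   # rainfall and land use raise risk, elevation lowers it

R = X / np.linalg.norm(X, axis=0)         # vector normalization
V = R * weights                           # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)       # higher = closer to the highest-risk profile

print(np.argsort(-closeness))             # region indices ranked by flood risk
```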

Keywords: multiple criteria decision making, TOPSIS, analytic hierarchy process, flooding

Procedia PDF Downloads 226
336 Personal Knowledge Management: Systematic Review and Future Direction

Authors: Kuribachew Gizaw Tohiye, Monica Garfield

Abstract:

Personal knowledge management is the aspect of knowledge management that relates to the way in which individuals organize and manage their own set of knowledge. Although there has been research in this area for the past 25 years, it is now necessary to reflect on what research has been done and what we have discovered about this arena of knowledge management. In contrast to organizational knowledge management, which focuses on a firm's profitability and competitiveness, personal knowledge management (PKM) is concerned with a person's self-effectiveness, competence, and success. People are concerned with managing their knowledge in order to become more efficient in a variety of personal and organizational interests. This study presents a systematic review of PKM studies. Articles with PKM concepts are reviewed with the objective of clearly defining PKM, identifying the benefits of PKM, classifying the tools that enable PKM, and finding the research gaps in order to indicate future research directions in the area. Consequently, we have developed a definition of PKM and identified the benefits of PKM, including an understanding of who seeks PKM and for what purpose. Tools enabling PKM are identified and classified under three categories, Web 1.0, 2.0, and 3.0, and finally the research gaps and future directions are suggested. Research that facilitates collaboration by using semantic technologies is suggested for further study to improve PKM effectiveness.

Keywords: personal knowledge management, knowledge management, organizational knowledge management, systematic review

Procedia PDF Downloads 324
335 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by Machine Learning, Precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-condition diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 63
334 A Preliminary Study for Design of Automatic Block Reallocation Algorithm with Genetic Algorithm Method in the Land Consolidation Projects

Authors: Tayfun Çay, Yasar İnceyol, Abdurrahman Özbeyaz

Abstract:

Land reallocation is one of the most important steps in land consolidation projects. Many different models have been proposed for land reallocation in the literature, such as Fuzzy Logic, block-priority-based land reallocation, and Spatial Decision Support Systems. A model consisting of four parts is considered for automatic block reallocation with the genetic algorithm method in land consolidation projects. These stages are, respectively: preparing data tables for a project area, determining the conditions and constraints of land reallocation, designing the command steps and logical flow chart of the reallocation algorithm, and finally writing the program code of the Genetic Algorithm. In this study, we designed the first three steps of this four-step model.
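
Although the study stops before the coding stage, a toy version of the final genetic-algorithm step might look like the sketch below; the block capacities, landholding areas, and fitness function are hypothetical and only illustrate how a reallocation can be encoded as a chromosome.

```python
# Toy genetic-algorithm sketch (illustrative only): assign landholdings to blocks
# while minimizing the mismatch between demanded and available block areas.
import random

block_capacity = [12.0, 8.0, 10.0]                      # hectares available per block
holding_area = [3.0, 4.0, 2.5, 6.0, 5.0, 7.0, 2.0]      # area demanded per landholding

def fitness(assign):
    used = [0.0] * len(block_capacity)
    for holding, block in enumerate(assign):
        used[block] += holding_area[holding]
    return -sum(abs(u - c) for u, c in zip(used, block_capacity))  # penalize mismatch

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(assign, rate=0.1):
    return [random.randrange(len(block_capacity)) if random.random() < rate else g
            for g in assign]

population = [[random.randrange(len(block_capacity)) for _ in holding_area] for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(40)]

best = max(population, key=fitness)
print(best, fitness(best))
```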

Keywords: land consolidation, landholding, land reallocation, optimization, genetic algorithm

Procedia PDF Downloads 428
333 Argument Representation in Non-Spatial Motion Bahasa Melayu Based Conceptual Structure Theory

Authors: Nurul Jamilah Binti Rosly

Abstract:

The typology of motion is normally understood as a change from one location to another. From a conceptual point of view, however, motion can also occur in non-spatial contexts associated with human and social factors. Conceptually, non-spatial motion involves the movement of time, ownership, identity, state, and existence. Accordingly, this study focuses on the lexical items shared, accept, be, store, and exist as the study material. The data in this study were extracted from the Database of Languages and Literature Corpus Database, Malaysia, and were analyzed using semantic and syntactic concepts from Conceptual Structure Theory (Ray Jackendoff, 2002). Semantic representations are expressed in the form of conceptual structures with argument functions that include the functions [events], [situations], [objects], [paths], and [places]. The findings show that the mapping of these arguments comprises three main stages, namely mapping the argument structure, mapping the tree structure, and mapping the roles of thematic items. Accordingly, this study presents the representation of non-spatial motion in the Malay language.

Keywords: arguments, concepts, constituencies, events, situations, thematics

Procedia PDF Downloads 126
332 Unzipping the Stress Response Genes in Moringa oleifera Lam. through Transcriptomics

Authors: Vivian A. Panes, Raymond John S. Rebong, Miel Q. Diaz

Abstract:

Moringa oleifera Lam. is known mainly for its high nutritional value and medicinal properties, contributing to its popular reputation as a 'miracle plant' in the tropical climates where it usually grows. The main objective of this study is to discover the genes and gene products involved in abiotic stress-induced activity in mature M. oleifera Lam. seeds, as well as their corresponding functions. In this study, RNA sequencing and de novo transcriptome assembly were performed using two assemblers, Trinity and Oases, which produced 177,417 and 120,818 contigs, respectively. These transcripts were then subjected to various bioinformatics tools such as Blast2GO, UniProt, KEGG, and COG for gene annotation and the analysis of relevant metabolic pathways. Furthermore, FPKM analysis was performed to identify gene expression levels. The sequences were filtered according to the 'response to stress' GO term since this study dealt with stress response. Clusters of Orthologous Groups (COG) analysis showed that the highest frequencies of stress response gene functions were those of the cytoskeleton, which make up approximately 14% and 23% of stress-related sequences under Trinity and Oases respectively, recombination, repair and replication at 11% and 14%, carbohydrate transport and metabolism at 23% and 9%, and defense mechanisms at 16% and 12%. KEGG pathway analysis determined that the most abundant stress-response genes occur in phenylpropanoid biosynthesis, at counts of 187 and 166 pathways for Oases and Trinity respectively, purine metabolism at 123 and 230, and biosynthesis of antibiotics at 105 and 102. Unique and cumulative GO term counts revealed that the majority of the stress response genes belonged to the category of cellular response to stress, at cumulative counts of 1,487 and 2,187 for Oases and Trinity respectively, defense response at 754 and 1,255, response to heat at 213 and 208, response to water deprivation at 229 and 228, and oxidative stress at 508 and 488. Lastly, FPKM was used to determine the expression level of each stress response gene. The most upregulated gene encodes a thiamine thiazole synthase chloroplastic-like enzyme, which plays a significant role in DNA damage tolerance. Data analysis implies that M. oleifera stress response genes are directed towards the effects of climate change more than other stresses, indicating the potential of M. oleifera for cultivation in harsh environments because of its resistance to climate change, pathogens, and foreign invaders.
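
As a reminder of what the expression values above represent, the following is a minimal sketch of the standard FPKM normalization; the read counts, gene length, and library size are hypothetical.

```python
# Illustrative FPKM calculation (hypothetical numbers, not the study's data).
def fpkm(read_count, gene_length_bp, total_mapped_reads):
    """Fragments Per Kilobase of transcript per Million mapped reads."""
    return read_count * 1e9 / (gene_length_bp * total_mapped_reads)

# e.g. a contig with 5,400 mapped reads and 1,800 bp length
# in a library of 30 million mapped reads
print(fpkm(5400, 1800, 30_000_000))   # -> 100.0
```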

Keywords: stress response, genes, Moringa oleifera, transcriptomics

Procedia PDF Downloads 143
331 The Language of Fliptop among Filipino Youth: A Discourse Analysis

Authors: Bong Borero Lumabao

Abstract:

This qualitative research is a study of the lines of Fliptop talks performed by Fliptop rappers, employing Finnegan's (2008) discourse analysis. This paper aimed to analyze the phonological, morphological, and semantic features of Fliptop talk, to explore the structures in the lines of Fliptop among Filipino youth, and to uncover the various insights that can be gained from it. The corpora of the study included all 20 Fliptop videos downloaded from the YouTube channel of Fliptop. Results revealed that Fliptop contains phonological features such as assonance, consonance, deletion, lengthening, and rhyming. Morphological features include acronyms, affixation, blending, borrowing, code-mixing and switching, compounding, conversion or functional shifts, and dysphemism. Semantics covered the lexical categories, meanings, and words used in the Fliptop talks. The structure of Fliptop revolves around personal attacks (physical attributes), attacks on the bars (rapping skills), extension to family members and friends, antithesis, profane words, figurative language, sexual undertones, anime characters, homosexuality, and the involvement of famous celebrities.

Keywords: discourse analysis, fliptop talks, filipino youth, fliptop videos, Philippines

Procedia PDF Downloads 231
330 Empirical and Indian Automotive Equity Portfolio Decision Support

Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu

Abstract:

A brief review of the empirical studies on stock market decision support methodology indicates that they are at the threshold of validating the accuracy of traditional models against fuzzy, artificial neural network, and decision tree models. Many researchers have been attempting to compare these models using various data sets worldwide. However, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectorial support for stock market decisions. The study identifies the significant variables, and their lags, which affect the price of the stocks, using OLS analysis and decision tree classifiers.
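
A minimal sketch of the lag-based decision-tree setup is shown below; the synthetic price series and the five-lag feature window are placeholders, not the NSE data or the variables identified in the study.

```python
# Hedged sketch (synthetic data): build lagged price features for one stock and
# fit a decision-tree classifier to predict whether the price rises the next day.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

prices = pd.Series(np.random.default_rng(0).normal(0, 1, 500).cumsum() + 100)

features = pd.DataFrame({f"lag_{k}": prices.shift(k) for k in range(1, 6)})
features["up_next_day"] = (prices.shift(-1) > prices).astype(int)
features = features.dropna()

X = features.drop(columns="up_next_day")
y = features["up_next_day"]

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[:-100], y[:-100])
print("held-out accuracy:", clf.score(X[-100:], y[-100:]))
```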

Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis

Procedia PDF Downloads 480
329 ANFIS Approach for Locating Faults in Underground Cables

Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat

Abstract:

This paper presents a fault identification, classification, and fault location estimation method based on the Discrete Wavelet Transform and the Adaptive Network-Based Fuzzy Inference System (ANFIS) for medium-voltage cables in the distribution system. Different faults and locations are simulated by ATP/EMTP, and then certain selected features of the wavelet-transformed signals are used as input for a training process on the ANFIS. An accurate fault classifier and locator algorithm was then designed, trained, and tested using current samples only. The results obtained from the ANFIS output were compared with the real output. From the results, it was found that the percentage error between the ANFIS output and the real output is less than three percent. Hence, it can be concluded that the proposed technique is able to offer high accuracy in both fault classification and fault location.
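
The sketch below illustrates only the wavelet feature-extraction stage assumed by such a pipeline (here with PyWavelets and a synthetic current waveform); the ANFIS classifier and locator themselves are not shown.

```python
# Sketch of DWT feature extraction from a fault-current signal (synthetic data);
# the resulting band energies would feed the ANFIS stage, which is not shown here.
import numpy as np
import pywt

def wavelet_features(current_signal, wavelet="db4", level=5):
    """Energy of each detail band of a multi-level DWT of the current signal."""
    coeffs = pywt.wavedec(current_signal, wavelet, level=level)
    return [float(np.sum(c ** 2)) for c in coeffs[1:]]   # skip the approximation band

t = np.linspace(0, 0.02, 1000)                            # one 50 Hz cycle, toy waveform
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
print(wavelet_features(signal))
```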

Keywords: ANFIS, fault location, underground cable, wavelet transform

Procedia PDF Downloads 504
328 Evaluating Service Trustworthiness for Service Selection in Cloud Environment

Authors: Maryam Amiri, Leyli Mohammad-Khanli

Abstract:

Cloud computing is becoming increasingly popular, and more business applications are moving to the cloud. In this regard, the number of services that provide similar functional properties is increasing. So, the ability to select a service with the best non-functional properties, corresponding to the user's preference, is necessary for the user. This paper presents an Evaluation Framework of Service Trustworthiness (EFST) that evaluates the trustworthiness of equivalent services without the need for additional invocations of them. EFST extracts user preferences automatically. Then, it assesses the trustworthiness of services along two dimensions of qualitative and quantitative metrics, based on past usage experiences of the services. Finally, EFST determines the overall trustworthiness of services using a Fuzzy Inference System (FIS). The results of experiments and simulations show that EFST is able to predict the missing values of Quality of Service (QoS) better than other competing approaches. It also helps users to select the most appropriate services.
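
To make the fuzzy aggregation step concrete, the toy sketch below fuses a qualitative and a quantitative trust score with a tiny Mamdani-style rule base; the membership functions and rules are invented and do not reproduce the paper's FIS.

```python
# Hand-rolled toy fuzzy inference (not the paper's FIS): fuse qualitative and
# quantitative trust scores in [0, 1] with three rules and weighted defuzzification.
def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trustworthiness(qualitative, quantitative):
    q_hi, q_lo = tri(qualitative, 0.5, 1.0, 1.5), tri(qualitative, -0.5, 0.0, 0.5)
    n_hi, n_lo = tri(quantitative, 0.5, 1.0, 1.5), tri(quantitative, -0.5, 0.0, 0.5)
    rules = [
        (min(q_hi, n_hi), 0.9),                           # both high -> high trust
        (max(min(q_hi, n_lo), min(q_lo, n_hi)), 0.5),     # mixed     -> medium trust
        (min(q_lo, n_lo), 0.1),                           # both low  -> low trust
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(trustworthiness(0.8, 0.6))
```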

Keywords: user preference, cloud service, trustworthiness, QoS metrics, prediction

Procedia PDF Downloads 284
327 In Silico Screening, Identification and Validation of Cryptosporidium hominis Hypothetical Protein and Virtual Screening of Inhibitors as Therapeutics

Authors: Arpit Kumar Shrivastava, Subrat Kumar, Rajani Kanta Mohapatra, Priyadarshi Soumyaranjan Sahu

Abstract:

Computational approaches to predict the structure, function, and other biological characteristics of proteins are becoming more common in comparison to traditional methods in drug discovery. Cryptosporidiosis is a major zoonotic diarrheal disease, particularly in children, which is caused primarily by Cryptosporidium hominis and Cryptosporidium parvum. Currently, there are no vaccines for cryptosporidiosis, and the recommended drugs are not effective. With the availability of the complete genome sequence of C. hominis, new targets have been recognized for the development of effective and better drugs and/or vaccines. We identified a unique hypothetical epitopic protein in the C. hominis genome through BLASTP analysis. A 3D model of the hypothetical protein was generated using the I-TASSER server through a threading methodology. The quality of the model was validated through a Ramachandran plot by the PROCHECK server. The functional annotation of the hypothetical protein through the DALI server revealed structural similarity with human Transportin 3. Phylogenetic analysis also showed that the C. hominis hypothetical protein (CUV04613) is closely related to the human transportin 3 protein. The 3D protein model was further subjected to a virtual screening study with inhibitors from the ZINC database using the DOCK Blaster software. The docking study reported N-(3-chlorobenzyl)ethane-1,2-diamine as the best inhibitor in terms of docking score. Docking analysis elucidated that Leu 525, Ile 526, Glu 528, and Glu 529 are critical residues for ligand-receptor interactions. A molecular dynamics simulation was performed to assess the reliability of the binding pose of the inhibitor-protein complex using the GROMACS software over a 10 ns time span. Trajectories were analyzed at each 2.5 ns time interval, among which H-bonds with LEU-525 and GLY-530 are significantly present in the MD trajectories. Furthermore, antigenic determinants of the protein were determined with the help of the DNASTAR software. Our findings show great potential to provide insights into the development of new drugs or vaccines for the control as well as prevention of cryptosporidiosis among humans and animals.

Keywords: cryptosporidium hominis, hypothetical protein, molecular docking, molecular dynamics simulation

Procedia PDF Downloads 360
326 Improved Performance in Content-Based Image Retrieval Using Machine Learning Approach

Authors: B. Ramesh Naik, T. Venugopal

Abstract:

This paper presents a novel approach that improves the high-level semantics of images based on a machine learning approach. Contemporary approaches for image retrieval and object recognition include Fourier transforms, wavelets, SIFT, and HoG. Though these descriptors are helpful in a wide range of applications, they exploit zero-order statistics, which lack high descriptiveness of image features. These descriptors usually take advantage of primitive visual features such as shape, color, texture, and spatial location to describe images. Such features are not adequate to describe the high-level semantics of the images. This leads to a semantic gap that causes unacceptable performance in image retrieval systems. A novel method referred to as discriminative learning, derived from a machine learning approach, has been proposed that efficiently discriminates image features. The analysis and results of the proposed approach were validated thoroughly on the WANG and Caltech-101 databases. The results prove that this approach is very competitive in content-based image retrieval.

Keywords: CBIR, discriminative learning, region weight learning, scale invariant feature transforms

Procedia PDF Downloads 178
325 From the “Movement Language” to Communication Language

Authors: Mahmudjon Kuchkarov, Marufjon Kuchkarov

Abstract:

The origin of ‘human language’ is still a mystery and one of the most interesting subjects of historical linguistics. The core element is the nature of labeling or coding things or processes with symbols and sounds. In this paper, we investigate humans' involuntary Paired Sounds and Shape Production (PSSP) and its contribution to the development of early human communication. Working with twenty-six volunteers who performed many physical movements of varying difficulty, the research team investigated the natural, repeatable, and paired sound and shape productions during human activities. The paper claims the involvement of Paired Sounds and Shape Production (PSSP) in the phonetic origin of some modern words and the existence of similarities between elements of PSSP and characters of the classical Latin alphabet. The results may be used not only as a supporting idea for existing theories but also to take a closer look at the fundamental nature of the origin of languages.

Keywords: body shape, body language, coding, Latin alphabet, merging method, movement language, movement sound, natural sound, origin of language, pairing, phonetics, sound and shape production, word origin, word semantic

Procedia PDF Downloads 242
324 Words of Peace in the Speeches of the Egyptian President, Abdulfattah El-Sisi: A Corpus-Based Study

Authors: Mohamed S. Negm, Waleed S. Mandour

Abstract:

The present study aims primarily at investigating words of peace (lexemes of peace) in the formal speeches of the Egyptian president Abdulfattah El-Sisi over a two-year span, from 2018 to 2019. This paper attempts to shed light not only on the contextual use of the antonyms war and peace but also to underpin that analysis with quantitative methods from current corpus linguistics. As such, the researchers have deployed a corpus-based approach in collecting, encoding, and processing 30 presidential speeches over the stated period (23,411 words and 25,541 tokens in total). Further, semantic fields and collocation networks are identified and compared statistically. Results have shown a significant propensity towards adopting peace, including its relevant collocation network, textually and therefore ideationally, at the expense of the war concept, which in most cases surfaces euphemistically through the noun conflict. The president has not justified the action of war with an honorable cause or a valid reason. Such results indicate a positive sociopolitical mindset on the part of the Egyptian president and, moreover, reveal national and international fair dealing on arising issues.
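
The collocation-extraction step of such a study can be sketched with NLTK's bigram finder as below; the token list is a toy stand-in for the 25,541-token speech corpus.

```python
# Hedged sketch (toy tokens, not the actual corpus): score bigrams and keep
# collocates of the node word "peace" by log-likelihood ratio.
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

tokens = ["we", "seek", "lasting", "peace", "and", "regional", "peace",
          "through", "dialogue", "not", "conflict"]

finder = BigramCollocationFinder.from_words(tokens)
scored = finder.score_ngrams(BigramAssocMeasures().likelihood_ratio)

print([(pair, round(score, 2)) for pair, score in scored if "peace" in pair])
```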

Keywords: CADS, collocation network, corpus linguistics, critical discourse analysis

Procedia PDF Downloads 150
323 Use of Artificial Intelligence Based Models to Estimate the Use of a Spectral Band in Cognitive Radio

Authors: Danilo López, Edwin Rivas, Fernando Pedraza

Abstract:

Currently, one of the major challenges in wireless networks is the optimal use of the radio spectrum, which is managed inefficiently. One of the solutions to this problem converges on the use of Cognitive Radio (CR), which makes it possible for secondary users to use the available licensed spectrum well above the usage levels that are currently detected, thus allowing opportunistic use of the channel in the absence of primary users (PU). This article presents the results found when estimating, or predicting, the future use of a spectral transmission band (from the perspective of the PU) for a channel with chaotic arrival behavior. The time-series prediction method used to model the PU behavior is ANFIS (Adaptive Neuro-Fuzzy Inference System). The results obtained were compared to those delivered by the RNA (Artificial Neural Network) algorithm. The results show better performance in the characterization (modeling and prediction) with the ANFIS methodology.

Keywords: ANFIS, cognitive radio, prediction primary user, RNA

Procedia PDF Downloads 418
322 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World

Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber

Abstract:

Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to be flatter. This is attributed to rapid globalization and the interdependence of humanity, which have engendered a tremendous inflow of human migration towards urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step of understanding urban space lies in a useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing-world setting. Categorization to plan sustainable urban spaces should encompass the buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability, and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert, and consequently a map was drawn. The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google's DeepLabV3+ model was used. The model uses atrous convolution operations to analyze different layers of texture and shape, which allows us to enlarge the field of view of the filters to incorporate larger context. Imagery encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% mean Intersection over Union (mIoU). The proposed pragmatic categorization method is thus readily applicable for automatic use in both developing- and developed-world contexts. The method can be augmented for real-time socio-economic comparative analysis among cities and can be an essential tool for policy makers planning future sustainable urban spaces.
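
For reference, the mean Intersection-over-Union figure quoted above is computed as sketched below; the predicted and ground-truth label maps are random toy arrays, not the Dhaka segmentation output.

```python
# Sketch of the mIoU metric over per-class intersection and union (toy label maps).
import numpy as np

def mean_iou(pred, gt, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                     # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.default_rng(0).integers(0, 4, size=(64, 64))
gt = np.random.default_rng(1).integers(0, 4, size=(64, 64))
print(mean_iou(pred, gt, num_classes=4))
```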

Keywords: semantic segmentation, urban environment, deep learning, urban building, classification

Procedia PDF Downloads 184
321 Multi-Criteria Evaluation for the Selection Process of a Wind Power Plant's Location Using Choquet Integral

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

The objective of the present study is to select the most suitable location for a wind power plant through the Choquet integral method. The problem of selecting the location for a wind power station was considered as a multi-criteria decision-making problem. The essential criteria and sub-criteria were specified, and the location selection was expressed in a hierarchic structure. Among the main criteria taken into account in this paper are wind potential, technical factors, social factors, transportation, and costs. The problem was solved by using different approaches to the Choquet integral, and the best location for a wind power station was determined. Then, the priority weights obtained from the different Choquet integral approaches are compared and commented on.
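
The discrete Choquet integral at the core of such an analysis can be sketched as follows; the criterion scores and the non-additive capacities (fuzzy measure) are hypothetical and only illustrate the aggregation step for one candidate site.

```python
# Illustrative discrete Choquet integral (hypothetical data): aggregate three
# criterion scores for one candidate site under a non-additive fuzzy measure mu.
def choquet(scores, mu):
    """scores: dict criterion -> value in [0, 1]; mu: dict frozenset -> capacity."""
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending by score
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for name, value in items:
        total += (value - prev) * mu[frozenset(remaining)]
        prev = value
        remaining.remove(name)
    return total

scores = {"wind": 0.8, "cost": 0.5, "social": 0.6}
mu = {
    frozenset({"wind", "cost", "social"}): 1.0,
    frozenset({"wind", "cost"}): 0.8, frozenset({"wind", "social"}): 0.7,
    frozenset({"cost", "social"}): 0.5,
    frozenset({"wind"}): 0.6, frozenset({"cost"}): 0.3, frozenset({"social"}): 0.2,
    frozenset(): 0.0,
}
print(choquet(scores, mu))
```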

Keywords: multi-criteria decision making, choquet integral, fuzzy sets, location of a wind power plant

Procedia PDF Downloads 408
320 A Guide to User-Friendly Bash Prompt: Adding Natural Language Processing Plus Bash Explanation to the Command Interface

Authors: Teh Kean Kheng, Low Soon Yee, Burra Venkata Durga Kumar

Abstract:

In 2022, as the world becomes increasingly computer-related, more individuals are attempting to study coding by themselves or in school, because they have discovered the value of learning to code and the benefits it will bring them. But learning to code is difficult for most people: even senior programmers with a decade of experience still need help from online sources while coding. The reason is that coding is not like talking to other people; it has a specific syntax that makes the computer understand what we want it to do, so coding will be hard for people who have had no prior contact with the field. If a user wants to learn Bash with the Bash prompt, it will be even harder, because the Bash prompt is just an empty box waiting for the user to tell the computer what to do; without referring to the internet, a new user will not know what can be done with the prompt. From this, we can conclude that the Bash prompt is not user-friendly for new users who are learning Bash. Our goal in writing this paper is to give an idea for implementing a user-friendly Bash prompt in Ubuntu OS using Artificial Intelligence (AI), to lower the threshold of learning Bash and to let users employ their own words and concepts to write and learn Bash code.
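
A toy illustration of the lexical-similarity idea mentioned in the keywords is given below: a natural-language request is matched against a few known Bash commands by plain string similarity. The command table is invented, and a real assistant would combine this with semantic similarity from language models.

```python
# Toy sketch (hypothetical command table): suggest a Bash command for a
# natural-language request using lexical similarity only.
import difflib

commands = {
    "list files in a directory": "ls -l",
    "show current directory": "pwd",
    "search text inside files": "grep -r PATTERN .",
}

def suggest(query):
    best = max(commands,
               key=lambda desc: difflib.SequenceMatcher(None, query.lower(), desc).ratio())
    return commands[best]

print(suggest("how do I list the files here"))
```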

Keywords: user-friendly, bash code, artificial intelligence, threshold, semantic similarity, lexical similarity

Procedia PDF Downloads 138
319 A Multi-Agent Intelligent System for Monitoring Health Conditions of Elderly People

Authors: Ayman M. Mansour

Abstract:

In this paper, we propose a multi-agent intelligent system that is used for monitoring the health conditions of elderly people. Monitoring the health condition of elderly people is a complex problem that involves different medical units and requires continuous monitoring. Such an expert system is highly needed in rural areas because of the inadequate number of available specialized physicians or nurses. Such monitoring must involve autonomous interactions between these medical units in order to be effective. A multi-agent system is formed by a community of agents that exchange information and proactively help one another to achieve the goal of elderly monitoring. The agents in the developed system are equipped with an intelligent decision maker that arms them with rule-based reasoning capability, which can assist physicians in making decisions regarding the medical condition of elderly people.

Keywords: fuzzy logic, inference system, monitoring system, multi-agent system

Procedia PDF Downloads 596
318 Cognition Technique for Developing a World Music

Authors: Haider Javed Uppal, Javed Yunas Uppal

Abstract:

In today's globalized world, it is necessary to develop a form of music that is able to evoke equal emotional responses among people from diverse cultural backgrounds. Indigenous cultures throughout history have developed their own music cognition, specifically in terms of the connections between music and mood. With the advancements in artificial intelligence technologies, it has become possible to analyze and categorize music features such as timbre, harmony, melody, and rhythm and relate them to the resulting mood effects experienced by listeners. This paper presents a model that utilizes a screenshot translator to convert music from different origins into waveforms, which are then analyzed using machine learning and information retrieval techniques. By connecting these waveforms with Thayer's matrix of moods, a mood classifier has been developed using fuzzy logic algorithms to determine the emotional impact of different types of music on listeners from various cultures.

Keywords: cognition, world music, artificial intelligence, Thayer’s matrix

Procedia PDF Downloads 75
317 Application of the DTC Control in the Photovoltaic Pumping System

Authors: M. N. Amrani, H. Abanou, A. Dib

Abstract:

In this paper, we propose a strategy for optimizing the performance of a pumping structure constituted by an induction motor coupled to a centrifugal pump, improving on existing results in this context. The considered system is supplied by a photovoltaic generator (GPV) through two static converters that are piloted independently. We opted for a maximum power point tracking (MPPT) control method based on neuro-fuzzy techniques, which are well known for their stability and robustness. To improve the induction motor performance, we use the concept of Direct Torque Control (DTC) together with a PID controller for motor speed to pilot the operation of the induction motor. Simulations of the proposed approach give interesting results compared to the existing control strategies in this field. The model of the proposed system is simulated in MATLAB/Simulink.

Keywords: solar energy, pumping photovoltaic system, maximum power point tracking, direct torque Control (DTC), PID regulator

Procedia PDF Downloads 543
316 Batman Forever: The Economics of Overlapping Rights

Authors: Franziska Kaiser, Alexander Cuntz

Abstract:

When copyrighted comic characters are also protected under trademark laws, intellectual property (IP) rights can overlap. Arguably, registering a trademark can increase transaction costs for cross-media uses of characters, or it can favor advertising across a number of sales channels. In an application to the book, movie, and video game publishing industries, we thus ask how creative reuse is affected in situations of overlapping rights and whether ‘fuzzy boundaries’ of rights frameworks are, in fact, enhancing or decreasing content sales. We use a major U.S. Supreme Court decision as a quasi-natural experiment to apply an IV estimation in our analysis. We find that overlapping rights frameworks negatively affect creative reuses. At large, when copyright-protected comic characters are additionally registered as U.S. trademarks, they are less often reprinted and enter fewer video game productions, while generating less revenue from game sales.

Keywords: copyright, fictional characters, trademark, reuse

Procedia PDF Downloads 205
315 Geographic Information System for District Level Energy Performance Simulations

Authors: Avichal Malhotra, Jerome Frisch, Christoph van Treeck

Abstract:

The utilization of semantic, cadastral, and topological data from geographic information systems (GIS) has increased exponentially for building- and urban-scale energy performance simulations. Urban planners, simulation scientists, and researchers use virtual 3D city models for energy analysis, algorithms, and simulation tools. For dynamic energy simulations at the city and district level, this paper provides an overview of the available GIS data models and their levels of detail. Adhering to different norms and standards, these models also intend to describe building and construction industry data. Though geographical information modelling has many different implementations, extensions of virtual city data can also be made for domain-specific applications. Highlighting the use of extended CityGML models for energy research, a brief introduction to the Energy Application Domain Extension (ADE) along with its significance is given. Consequently, addressing specific simulation input data, a workflow using Modelica is presented that underlines the usage of GIS information and quantifies its significance for annual heating energy demand.

Keywords: CityGML, EnergyADE, energy performance simulation, GIS

Procedia PDF Downloads 164
314 A Design for Supply Chain Model by Integrated Evaluation of Design Value and Supply Chain Cost

Authors: Yuan-Jye Tseng, Jia-Shu Li

Abstract:

To design a product with the given product requirement and design objective, there can be alternative ways to propose the detailed design specifications of the product. In the design modeling stage, alternative design cases with detailed specifications can be modeled to fulfill the product requirement and design objective. Therefore, in the design evaluation stage, it is required to perform an evaluation of the alternative design cases for deciding the final design. The purpose of this research is to develop a product evaluation model for evaluating the alternative design cases through an integrated evaluation of the criteria of functional design, Kansei design, and design for supply chain. The criteria in the functional design group include primary function, expansion function, improved function, and new function. The criteria in the Kansei group include geometric shape, dimension, surface finish, and layout. The criteria in the design for supply chain group include material, manufacturing process, assembly, and supply chain operation. From the point of view of value and cost, the criteria in the functional design group and Kansei design group represent the design value of the product. The criteria in the design for supply chain group represent the supply chain and manufacturing cost of the product. It is required to evaluate the design value and the supply chain cost to determine the final design. For the purpose of evaluating the criteria in the three criteria groups, a fuzzy analytic network process (FANP) method is presented to evaluate a weighted index by calculating the total relational values among the three groups. A method using the technique for order preference by similarity to ideal solution (TOPSIS) is used to compare and rank the alternative design cases according to the weighted index using the total relational values of the criteria. The final decision on a design case can be determined by using the ordered ranking. For example, the design case with the top ranking can be selected as the final design case. Based on the criteria in the evaluation, the design objective can be achieved with a combined and weighted effect of the design value and manufacturing cost. An example product is demonstrated and illustrated in the presentation. It shows that the design evaluation model is useful for the integrated evaluation of functional design, Kansei design, and design for supply chain to determine the best design case and achieve the design objective.

Keywords: design for supply chain, design evaluation, functional design, Kansei design, fuzzy analytic network process, technique for order preference by similarity to ideal solution

Procedia PDF Downloads 317
313 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach

Authors: John D. Sanderson

Abstract:

The popularity of American cinema worldwide has contributed to the development of sociolects related to specific film genres in other cultural contexts by means of screen translation, in many cases eluding norms of usage in the target language, a process whose result has come to be known as 'dubbese'. A consequence for the reception in countries where local audiovisual fiction consumption is far lower than American imported productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science-fiction, western or sword-and-sandal films, for instance, generates linguistic expectations in international audiences who will accept more easily the sociolects assimilated by the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may belong in origin to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether it is translation for dubbing or subtitling, has diachronically evolved in many cases into a status of canonized sociolect, not only accepted but also required, by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include in national productions lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in order to target audiences of the former. However, this commercial strategy had already taken place decades earlier when Spain became a favored location for the shooting of foreign films in the early 1960s. The international popularity of the then newly developed sub-genre known as Spaghetti-Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound in film instead of using normative language. As a result, direct Anglicisms, as well as lexical and syntactical borrowings made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglified to that purpose) dubbed into English. The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, which chronologically range from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of Spaghetti Westerns, from una tumba para el sheriff (Mario Caiano; in English lone and angry man, William Hawkins) to tu fosa será la exacta, amigo (Juan Bosch, 1972; in English my horse, my gun, your widow, John Wood). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.

Keywords: dubbing, film genre, screen translation, sociolect

Procedia PDF Downloads 165