Search results for: skyline queries
Paper Count: 113

23 Integrated On-Board Diagnostic-II and Direct Controller Area Network Access for Vehicle Monitoring System

Authors: Kavian Khosravinia, Mohd Khair Hassan, Ribhan Zafira Abdul Rahman, Syed Abdul Rahman Al-Haddad

Abstract:

The CAN (controller area network) bus is a multi-master, message-broadcast system. Messages sent on the CAN bus communicate state information, referred to as signals, between different ECUs, which provides data consistency in every node of the system. OBD-II dongles based on the request-response method are the widespread solution among researchers for extracting sensor data from cars. Unfortunately, most past research does not consider the resolution and quantity of the input data extracted through OBD-II technology. With the well-known ELM327 OBD-II dongle, the maximum feasible scan rate is only 9 queries per second, which provides about 8 data points per second. This study aims to design and develop a programmable, latency-sensitive vehicle data acquisition system that improves modularity and flexibility in order to extract exact, trustworthy, and fresh car sensor data at higher frequency rates. Doing so requires the researcher to break apart, thoroughly inspect, and observe the internal network of the vehicle, which may cause severe damage to the vehicle's expensive ECUs due to intrinsic vulnerabilities of the CAN bus during initial research. The desired sensor data were collected from various vehicles using a Raspberry Pi 3 as the computing and processing unit, applying the OBD (request-response) and direct CAN methods at the same time. Two types of data were collected for this study: first, CAN bus frame data, collected for each line of hex data sent from an ECU; and second, OBD data, a limited set of values requested from the ECU under standard conditions. The proposed system is a reconfigurable, human-readable, multi-task telematics device that can be fitted into any vehicle with minimum effort and minimum time lag in the data extraction process. A standard operational procedure and an experimental vehicle network test bench were developed and can be used for future vehicle network testing experiments.
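
As a rough illustration of the two acquisition paths compared above, the sketch below uses the python-can library on a SocketCAN interface (can0, as typically configured on a Raspberry Pi) first to log raw CAN frames and then to issue a single OBD-II request for engine RPM. The channel name and frame counts are assumptions for the example; the 0x7DF request ID, mode 0x01 / PID 0x0C, and the (256*A+B)/4 scaling follow the OBD-II standard, but this is a sketch, not the authors' published code.

```python
import can  # pip install python-can

# SocketCAN interface, e.g. brought up beforehand with:
#   ip link set can0 up type can bitrate 500000
bus = can.Bus(interface="socketcan", channel="can0")

# --- Direct CAN access: passively log whatever the ECUs broadcast ---
for _ in range(10):
    frame = bus.recv(timeout=1.0)            # returns can.Message or None
    if frame is not None:
        print(f"{frame.timestamp:.3f} id=0x{frame.arbitration_id:03X} "
              f"data={frame.data.hex()}")

# --- OBD-II request-response: engine RPM (mode 0x01, PID 0x0C) ---
req = can.Message(arbitration_id=0x7DF,      # functional broadcast ID
                  data=[0x02, 0x01, 0x0C, 0, 0, 0, 0, 0],
                  is_extended_id=False)
bus.send(req)
rsp = bus.recv(timeout=1.0)
if rsp is not None and rsp.data[1] == 0x41 and rsp.data[2] == 0x0C:
    rpm = (256 * rsp.data[3] + rsp.data[4]) / 4.0   # standard PID scaling
    print(f"engine speed: {rpm:.0f} rpm")

bus.shutdown()
```

The passive loop receives frames at whatever rate the bus carries them, while the request-response path is limited by the query round trip, which is the resolution gap the study addresses.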

Keywords: CAN bus, OBD-II, vehicle data acquisition, connected cars, telemetry, Raspberry Pi 3

Procedia PDF Downloads 202
22 Flood Simulation and Forecasting for Sustainable Planning of Response in Municipalities

Authors: Mariana Damova, Stanko Stankov, Emil Stoyanov, Hristo Hristov, Hermand Pessek, Plamen Chernev

Abstract:

We will present one of the first use cases on the DestinE platform, a joint initiative of the European Commission, the European Space Agency, and EUMETSAT that provides access to global Earth observation, meteorological, and statistical data, and emphasize the good practice of intergovernmental agencies acting in concert. Further, we will discuss the importance of space-bound disruptive solutions for improving the balance between the ever-increasing water-related disasters coming from climate change and minimizing their economic and societal impact. The use case focuses on forecasting floods and estimating the impact of flood events on the urban environment and the ecosystems in the affected areas, with the purpose of helping municipal decision-makers analyze and plan resource needs, and of forging human-environment relationships by providing farmers with insightful information for improving their agricultural productivity. For the forecast, we adopt an EO4AI method of our platform ISME-HYDRO, in which we employ a pipeline of neural networks applied to in-situ measurements and satellite data of the meteorological factors influencing the hydrological and hydrodynamic status of rivers and dams, such as precipitation, soil moisture, vegetation index, and snow cover, to model flood events and their span. The ISME-HYDRO platform is an e-infrastructure for water resources management based on linked data, extended with further intelligence, that generates forecasts with the method described above, throws alerts, formulates queries, provides superior interactivity, and drives communication with the users. It provides synchronized visualization of table views, graph views, and interactive maps. It will be federated with the DestinE platform.
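
The EO4AI pipeline itself is not detailed in the abstract; as a minimal sketch of the core idea (a neural network mapping the named meteorological and satellite-derived factors to a flood indicator), consider the following, with synthetic data standing in for the in-situ and satellite inputs:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the real inputs: one row per (station, day) with
# the factors named above -- precipitation, soil moisture, vegetation
# index (NDVI), snow cover -- and a toy flood/no-flood label.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
model.fit(X, y)

# Flood probability for tomorrow's forecast inputs at one station.
tomorrow = np.array([[0.9, 0.7, 0.3, 0.1]])
print(f"flood probability: {model.predict_proba(tomorrow)[0, 1]:.2f}")
```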

Keywords: flood simulation, AI, Earth observation, e-Infrastructure, flood forecasting, flood areas localization, response planning, resource estimation

Procedia PDF Downloads 21
21 Assessment of Students' Skills in Error Detection in SQL Classes Using a Rubric Framework - An Empirical Study

Authors: Dirson Santos De Campos, Deller James Ferreira, Anderson Cavalcante Gonçalves, Uyara Ferreira Silva

Abstract:

Rubrics provide evaluation criteria and expected performance standards linked to defined student activities, learning goals, and pedagogical objectives. Although rubrics are used in education at all levels, academic literature on rubrics as a tool to support research in SQL education is quite rare. There is a large class of SQL queries that are syntactically correct but certainly not all semantically correct. Detecting and correcting errors is a recurring problem in SQL education. In this paper, we use the Rubric Abstract Framework (RAF), which consists of steps that allow us to map the information needed to measure student performance, guided by the didactic objectives defined by the teacher, with the domain modeling contextualized by the rubric. An empirical study was conducted that demonstrates how rubrics can mitigate students' difficulties in finding logical errors and ease the teacher's workload in SQL education. Detecting and correcting logical errors is an important skill for students, and researchers have proposed several ways to improve SQL education because these skills are crucial in software engineering and computer science. The RAF instantiation was used in an empirical study developed during the COVID-19 pandemic in a database course. The pandemic transformed face-to-face education into remote education, without in-person classes. The lab activities were conducted remotely, which hinders the teaching-learning process, in particular, for this research, in verifying the evidence or statements of students' knowledge, skills, and abilities (KSAs). Much research in academia and industry involves databases. The innovation proposed in this paper is the approach used, in which the results obtained when using rubrics to map logical errors in query formulation were analyzed together with the gains obtained by students, verified empirically. The research approach can be used in the post-pandemic period in both classroom and distance learning.
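
To make the syntactic/semantic distinction concrete, the following runnable sketch (not taken from the study; the tables and task are invented for illustration) contrasts two syntactically valid SQL queries for the task "names of customers who placed at least one order", only the first of which is semantically correct: exactly the class of logical error the rubric targets.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders   (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customer VALUES (1, 'Ana'), (2, 'Bruno');
    INSERT INTO orders   VALUES (10, 1);  -- only Ana placed an order
""")

# Task: names of customers who placed at least one order.
correct = ("SELECT DISTINCT c.name FROM customer c "
           "JOIN orders o ON o.customer_id = c.id")
# Syntactically valid, semantically wrong: the LEFT JOIN keeps customers
# with no orders, so Bruno appears although he never ordered anything.
buggy = ("SELECT DISTINCT c.name FROM customer c "
         "LEFT JOIN orders o ON o.customer_id = c.id")

print(con.execute(correct).fetchall())  # [('Ana',)]
print(con.execute(buggy).fetchall())    # [('Ana',), ('Bruno',)]
```

Both statements parse and run without error; only the divergent result sets reveal the logical mistake, which is why such errors are hard for students to detect without targeted evaluation criteria.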

Keywords: rubric, logical error, structured query language (SQL), empirical study, SQL education

Procedia PDF Downloads 190
20 Towards Learning Query Expansion

Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier

Abstract:

The steady growth in the size of textual document collections is a key progress driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of the retained documents. This mainly relies on an accurate choice of the terms added to the initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists of applying data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic-algorithm-based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. It was found that the results were better in terms of both MAP and NDCG. The main observation is that the hybridization of text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
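
As a hedged sketch of the classification step described above, the snippet below trains an SVM on invented features of candidate expansion terms (rule support, rule confidence, co-occurrence with the query), labelled by whether adding the term improved MAP in past runs, and then filters new candidates. Feature choices and numbers are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training set: one row per candidate expansion term, with
# features derived from the generic basis of association rules.
# Columns: rule support, rule confidence, co-occurrence with query terms.
X_train = np.array([
    [0.12, 0.80, 0.30],
    [0.05, 0.40, 0.10],
    [0.20, 0.90, 0.55],
    [0.02, 0.20, 0.05],
])
y_train = np.array([1, 0, 1, 0])  # 1 = adding the term improved MAP

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# Score new candidate terms for the current query; keep the likely winners.
candidates = {"retrieval": [0.15, 0.85, 0.40], "banana": [0.01, 0.10, 0.02]}
for term, feats in candidates.items():
    p = clf.predict_proba([feats])[0, 1]
    if p > 0.5:
        print(f"expand query with '{term}' (p={p:.2f})")
```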

Keywords: supervised learning, classification, query expansion, association rules

Procedia PDF Downloads 324
19 What Happens When We Try to Bridge the Science-Practice Gap? An Example from the Brazilian Native Vegetation Protection Law

Authors: Alice Brites, Gerd Sparovek, Jean Paul Metzger, Ricardo Rodrigues

Abstract:

The segregation between science and policy in the decision-making process hinders nature conservation efforts worldwide. Scientists have been criticized for not producing information that leads to effective solutions for environmental problems. In an attempt to bridge this gap between science and practice, we conducted a project aimed at supporting the implementation of the Brazilian Native Vegetation Protection Law (NVPL) in São Paulo State (SP), Brazil. To do so, we conducted multiple open meetings with the stakeholders involved in this discussion. Throughout this process, we gathered stakeholders' demands for scientific information and brought back feedback about our findings. However, our main scientific advice was not taken into account during the NVPL implementation in SP. The NVPL has a mechanism that exempts landholders who converted native vegetation without offending the legislation in place at the time of the conversion from restoration requirements. We found out that there were no accurate spatialized data for native vegetation cover before the 1960s. Thus, the initial benchmark for the mechanism's application should be the 1965 Brazilian Forest Act. Even so, SP kept the 1934 Brazilian Forest Act as the initial legal benchmark for the law's application. This decision implies the use of a probabilistic native vegetation map that has uncertainty and subjectivity as intrinsic characteristics, and its use can therefore lead to legal queries, corruption, and unfair application of the benefit. But why was this decision made even after the scientific advice had been widely disseminated? We raise some possible reasons to explain it. First, the decision was made during a government transition, showing that circumstantial political events can overshadow scientific arguments. Second, the debate about the NVPL in SP was not settled, and powerful stakeholders could benefit from the confusion created by this decision. Finally, the native vegetation protection mechanism is a complex issue, with many technical aspects that can be hard to understand for a non-specialized courtroom, such as the one that made the final decision in SP. This example shows that scientists and decision-makers still have a long way to go in improving how they interact, and that science needs to find a way to be heard above the political buzz.

Keywords: Brazil, forest act, science-based dialogue, science-policy interface

Procedia PDF Downloads 122
18 Advancing Customer Service Management Platform: Case Study of Social Media Applications

Authors: Iseoluwa Bukunmi Kolawole, Omowunmi Precious Isreal

Abstract:

Social media has completely revolutionized the ways communication took place even a decade ago. It makes use of computer-mediated technologies that help in the creation and sharing of information. Social media may be defined as the production, consumption, and exchange of information across platforms for social interaction. Social media has become a forum in which customers look for information about companies to do business with and request answers to questions about their products and services. Customer service may be defined as the process of ensuring customers' satisfaction by meeting and exceeding their wants. In delivering excellent customer service, knowing customers' expectations and where they are reaching out is important for meeting and exceeding their wants. Facebook is one of the most used social media platforms, among others that include Twitter, Instagram, WhatsApp, and LinkedIn. This indicates customers are spending more time on social media platforms and therefore calls for improvement in customer service delivery over social media pages. Millions of people channel their issues, complaints, compliments, and inquiries through social media. This study identifies what social media customers want, their expectations, and how they want to be responded to by brands and companies. The applied research methodology used in this paper was a mixed-methods approach. The authors used qualitative methods, gathering critical views of experts on social media and customer relationship management through interviews, to analyse the impact of social media on customer satisfaction. The authors also used quantitative methods, such as online surveys, to address issues at different stages and to gain insight into different aspects of the platforms, i.e., customers' and companies' perceptions of the effects of social media, thereby exploring and gaining a better understanding of how brands make use of social media as a customer relationship management tool. An exploratory research strategy was applied to analyse how companies can create good customer support using social media in order to improve customer service delivery, customer retention, and referrals. Many companies have come to prefer social media platforms as a medium for handling customers' queries and ensuring their satisfaction, because social media tools are considered more transparent and effective in their operations when dealing with customer relationship management.

Keywords: brands, customer service, information, social media

Procedia PDF Downloads 268
17 Musictherapy and Gardentherapy: A Systemic Approach for the Life Quality of the PsychoPhysical Disability

Authors: Adriana De Serio, Donato Forenza

Abstract:

Aims. In this experimental research, the authors present the methodological plan "Musictherapy and Gardentherapy" (MusGarPPhyD), which they created interconnected with garden landscape ecosystems and aimed at psychophysical disability. In the context of environmental education aimed at spreading landscape culture and its values, it is necessary to develop a solid perception of environmental sustainability and to implement a multidimensional approach that pays attention to the conservation and enhancement of gardens and natural environments. The result is an improvement in life quality, also in compliance with the objectives of the European Agenda 2030. The MusGarPPhyD can help professionals such as music therapists and environmental and landscape researchers strengthen subjects' motivation to learn to deal with the psychophysical discomfort associated with disability, to cope with distress, psychological fragility, loneliness, and social seclusion, and to promote productive social relationships. Materials and Methods. The MusGarPPhyD was implemented in multiple spaces. The musictherapy treatments took place first inside residential therapeutic centres and then in the garden landscape ecosystem. Patients: twenty, divided into two groups. Weekly sessions (50') for three months. Methodological phases: - Phase P1. Musictherapy treatments for each group in the indoor spaces. - Phase P2. Musictherapy sessions inside the gardens. After each phase, P1 and P2: - a questionnaire for each patient (ten items / liking indices) was administered at time t0, during the treatment, and at time tn at the end of the treatment; - monitoring of patients' behavioral responses through assessment scales, matrices, tables, and graph systems. Musictherapy methodology: patient Sonorous-Musical Anamnesis, Musictherapy Assessment Document, Observation Protocols, Bodily-Environmental-Rhythmical-Sonorous-Vocal-Energy production first indoors and then outside, sonorous-musical instruments and edible instruments made by the author/musictherapist with some foods; administration of the Patient-Environment-Music Index at times t0 and tn, to estimate the evolution of the patient's behavior, and the Musictherapeutic Advancement Index. Results. The MusGarPPhyD can strengthen the individual sense of identity and improve psychophysical skills and the resilience needed to face and overcome the difficulties caused by congenital or acquired disability. The multi-sensory perceptions deriving from contact with the plants in the gardens improve psychological well-being and regulate physiological parameters such as blood pressure and cardiac and respiratory rhythm, reducing cholesterol levels. The secretions of the peptide hormones endorphins and the endogenous opioids enkephalins increase and bring a state of tranquillity and a better mood. The subjects showed a preference for musictherapy treatments within a setting made up of gardens and peculiar landscape systems. This resulted in greater health benefits. Conclusions. The MusGarPPhyD contributes to reducing psychophysical tensions, anxiety, depression, and stress, facilitating the connections between the cerebral hemispheres and thus also improving intellectual performance, self-confidence, motor skills, and social interactions. It is therefore necessary to design hospitals, rehabilitation centers, and nursing homes surrounded by gardens. Ecosystems of natural and urban parks and gardens create fascinating skylines and mosaics of landscapes rich in beauty and biodiversity.
The MusGarPPhyD is useful for health management, promoting patients' psychophysical activation, a better mood/affective tone, and better relationships, and contributing significantly to improving life quality.

Keywords: musictherapy, gardentherapy, disability, life quality

Procedia PDF Downloads 72
16 Web Map Service for Fragmentary Rockfall Inventory

Authors: M. Amparo Nunez-Andres, Nieves Lantada

Abstract:

Rockfalls are among the most harmful geological risks. They cause both economic losses (damage to buildings and infrastructure) and personal ones. Therefore, in order to estimate the risk to the exposed elements, it is necessary to know the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Spanish mainland and on the island of Mallorca. These inventories contain general information about the events; the important fact, however, is that they also contain detailed information about fragmentation. Specifically, the IBSD (In-situ Block Size Distribution) is obtained by photogrammetry from a drone or TLS (Terrestrial Laser Scanner), and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit, measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is necessary. Open-source software was used throughout the process: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information. We used the INSPIRE data specifications for natural risks, adding specific and detailed data about the fragmentation distribution. The next step was to develop a WMS with GeoServer. A preliminary phase was the creation of several views in PostGIS to show the information at different scales of visualization and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided. At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols. The queries at this level offer more detail about the movement. Eventually, the third level shows all elements: deposit, source, and blocks, at their real size where possible, and in their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu) with data classified by levels, using JavaScript libraries such as OpenLayers.
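
For readers unfamiliar with how such a service is consumed, the sketch below issues a standard WMS 1.1.1 GetMap request with Python's requests library. The endpoint path, layer name, and bounding box are placeholders, not the project's actual configuration; only the query parameters themselves come from the WMS standard.

```python
import requests

# Hypothetical GeoServer endpoint; replace with the real service URL.
WMS_URL = "https://www.rockdb.upc.edu/geoserver/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "rockdb:rockfall_events",  # placeholder layer name
    "SRS": "EPSG:4326",
    "BBOX": "0.0,40.0,3.5,43.0",         # lon/lat box over NE Spain
    "WIDTH": 800,
    "HEIGHT": 600,
    "FORMAT": "image/png",
}

resp = requests.get(WMS_URL, params=params, timeout=30)
resp.raise_for_status()
with open("rockfalls.png", "wb") as f:
    f.write(resp.content)  # rendered map image with the inventory layer
```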

Keywords: geological risk, web mapping, WMS, rockfalls

Procedia PDF Downloads 160
15 Measuring the Resilience of e-Governments Using an Ontology

Authors: Onyekachi Onwudike, Russell Lock, Iain Phillips

Abstract:

The variability that exists across governments, their departments, and the provisioning of services has been an area of concern in the e-government domain. There is a need for reuse and integration across government departments, which is accompanied by varying degrees of risks and threats. There is also a need for assessment, prevention, preparation, response, and recovery when dealing with these risks or threats. The ability of a government to cope with the emerging changes that occur within it is known as resilience. In order to forge ahead with concerted efforts to manage the risks or threats induced by reuse and integration, the ambiguities contained within resilience must be addressed. Enhancing resilience in the e-government domain is synonymous with reducing the risks governments face in the provisioning of services as well as the reuse of components across departments. Therefore, it can be said that resilience is responsible for the reduction in a government's vulnerability to changes. In this paper, we present the use of an ontology to measure the resilience of governments. This ontology is made up of a well-defined construct for the taxonomy of resilience. A specific class known as 'Resilience Requirements' is added to the ontology. This class embeds the concept of resilience into the e-government domain ontology. Considering that the e-government domain is a highly complex one, made up of different departments offering different services, the reliability and resilience of the e-government domain have become more complex and critical to understand. We present questions that can help a government assess how prepared it is in the face of risks and what steps can be taken to recover from them. These questions can be asked with the use of queries. The ontology includes a case study section that is used to explore ways in which government departments can become resilient to the different kinds of risks and threats they may face. A collection of resilience tools and resources has been developed in our ontology to encourage governments to take steps to prepare for the emergencies and risks that a government may face with the integration of departments and the reuse of components across government departments. To achieve this, the ontology has been extended with rules. We present two tools for understanding resilience in the e-government domain as a risk-analysis target, and the output of these tools when applied to resilience in the e-government domain. We introduce the classification of resilience using the defined taxonomy and the modelling of existing relationships based on it. The ontology is constructed on formal theory and provides a semantic reference framework for the concept of resilience. Key terms that fall under the purview of resilience with respect to e-governments are defined. Terms are made explicit, as are the relationships that exist between risks and resilience. The overall aim of the ontology is to use it within standards that would be followed by all governments for government-based resilience measures.

Keywords: e-government, ontology, relationships, resilience, risks, threats

Procedia PDF Downloads 337
14 Development of Requirements Analysis Tool for Medical Autonomy in Long-Duration Space Exploration Missions

Authors: Lara Dutil-Fafard, Caroline Rhéaume, Patrick Archambault, Daniel Lafond, Neal W. Pollock

Abstract:

Improving resources for the medical autonomy of astronauts in prolonged space missions, such as a Mars mission, requires not only technology development but also decision-making support systems. The Advanced Crew Medical System - Medical Condition Requirements study, funded by the Canadian Space Agency, aimed to create knowledge content and a scenario-based query capability to support the medical autonomy of astronauts. The key objective of this study was to create a prototype tool for identifying medical infrastructure requirements in terms of medical knowledge, skills, and materials. A multicriteria decision-making method was used to prioritize the highest-risk medical events anticipated in a long-term space mission. Starting with those medical conditions, event sequence diagrams (ESDs) were created in the form of decision trees, where the entry point is the diagnosis and the end points are the predicted outcomes (full recovery, partial recovery, or death/severe incapacitation). The ESD formalism was adapted to characterize and compare the possible outcomes of medical conditions as a function of the medical knowledge, skills, and supplies available in a given mission scenario. An extensive literature review was performed and summarized in a medical condition database. A PostgreSQL relational database was created to allow query-based evaluation of health outcome metrics under different medical infrastructure scenarios. Critical decision points, skill and medical supply requirements, and probable health outcomes were compared across the chosen scenarios. The three medical conditions with the highest risk rank were acute coronary syndrome, sepsis, and stroke. Our efforts demonstrate the utility of this approach and provide insight into the effort required to develop appropriate content for the range of medical conditions that may arise.
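
The study's PostgreSQL schema is not published in the abstract; as a loose sketch of what a scenario-based outcome query could look like, the snippet below uses an invented two-table schema and toy numbers, with SQLite standing in for PostgreSQL.

```python
import sqlite3  # stand-in for the study's PostgreSQL database

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE condition (id INTEGER PRIMARY KEY, name TEXT, risk_rank INTEGER);
    CREATE TABLE outcome (
        condition_id INTEGER REFERENCES condition(id),
        scenario     TEXT,   -- medical infrastructure scenario
        result       TEXT,   -- full recovery / partial recovery / death
        probability  REAL    -- toy values, for illustration only
    );
    INSERT INTO condition VALUES (1, 'acute coronary syndrome', 1);
    INSERT INTO outcome VALUES (1, 'full medical kit', 'full recovery', 0.70);
    INSERT INTO outcome VALUES (1, 'no ultrasound',    'full recovery', 0.45);
""")

# Compare predicted health outcomes across infrastructure scenarios.
for row in con.execute("""
        SELECT c.name, o.scenario, o.result, o.probability
        FROM condition c JOIN outcome o ON o.condition_id = c.id
        ORDER BY o.probability DESC"""):
    print(row)
```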

Keywords: decision support system, event-sequence diagram, exploration mission, medical autonomy, scenario-based queries, space medicine

Procedia PDF Downloads 127
13 An Experiment with Science Popularization in Rural Schools of Sehore District in Madhya Pradesh, India

Authors: Peeyush Verma, Anil Kumar, Anju Rawlley, Chanchal Mehra

Abstract:

India's school-going population is largely served by an educational system that is, in most rural parts, stuck with methods that emphasize rote learning, endless examinations, and monotonous classroom activities. Rural government schools are generally seen as having poor infrastructure, poor support systems, and low motivation for teaching as well as learning. It was found during the survey for this project that rural boys and girls have little motivation to attend their schools and are still less likely to study science, tabooed as "difficult". An experiment was conducted with the help of the Rural Knowledge Network Project, through the Department of Science and Technology, Government of India, in five remote villages of Sehore District in Madhya Pradesh (India) during 2012-2015. These schools are located about 50-70 km away from Bhopal, the capital of Madhya Pradesh, and can distinctively qualify as average rural schools. A three-tier methodology was adopted for the experiment. In the first tier, randomly selected boys and girls from these schools were taken on a daylong visit to the Regional Science Centre located in Bhopal. In the second tier, a randomly selected half of those who visited earlier were again taken to the Science Centre to make models of science. And in the third tier, all the boys and girls studying science were exposed to video lectures and study material through the web. The results show an interesting face of learning science among youths in rural schools through peer learning, or incremental learning. The students who had little or no interest in learning science became good learners; queries started pouring in from the neighbouring villages, and a few parents requested that their wards be taken into the project to learn science. The paper presented is a case study of the experiment conducted in five rural schools of Sehore District. It reflects on the methodology of developing awareness and interest among students and finally engaging them in popularising science through peer-to-peer learning using incremental learning elements. The students, who initially had a poor perception of science, changed their attitude towards learning science during the project period. The results of this case, however, cannot be generalised unless replicated in the same setting elsewhere.

Keywords: popularisation of science, science temper, incremental learning, peer-to-peer learning

Procedia PDF Downloads 315
12 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies

Authors: Paolino Di Felice

Abstract:

The background and significance of this study come from papers that have already appeared in the literature, which measured the impact of public services (e.g., hospitals, schools, ...) on citizens' needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that expresses the boundary of the administrative district, within the city, they belong to. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district.". The case study this abstract reports on investigates the implications of adopting such an approach at geographical scales larger than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the (8,094) municipalities. To carry out this study, it has to be decided: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process it. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution; and, eventually, b.4) archiving the results in a permanent store. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) geo-database structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry); province(id, name, regionId, geometry); region(id, name, geometry). The adoption of DBMS technology allows us to implement steps "a)" and "b)" easily. In particular, step "b)" is simplified dramatically by calling spatial operators and built-in spatial user-defined functions within SQL queries against the geo-database. The major findings coming from our experiments can be summarized as follows. The approximation that, on average, descends from assimilating the residence of citizens to the centroid of the administrative unit of reference is of a few kilometers (4.9) at the municipal level, while it becomes conspicuous at the other two levels (28.9 and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but no further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
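
The "maximum measurement error" quoted above can be computed directly in PostGIS as the greatest distance between each polygon's centroid and its boundary. A minimal sketch against the municipality table described in the abstract follows; the connection string is a placeholder, and geometries are assumed to be stored in a metric projection so the result is in meters.

```python
import psycopg2  # assumes a PostGIS-enabled PostgreSQL instance

con = psycopg2.connect("dbname=geodb")  # placeholder connection string
cur = con.cursor()

# For each municipality, the greatest distance between its centroid and
# its border, i.e. the maximum measurement error discussed above.
# ST_Centroid, ST_Boundary and ST_MaxDistance are built-in PostGIS
# functions.
cur.execute("""
    SELECT name,
           ST_MaxDistance(ST_Centroid(geometry), ST_Boundary(geometry)) AS err
    FROM municipality
    ORDER BY err DESC
    LIMIT 10;
""")
for name, err in cur.fetchall():
    print(f"{name}: {err / 1000.0:.1f} km")
con.close()
```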

Keywords: quality of life, distance measurement error, Italian administrative units, spatial database

Procedia PDF Downloads 371
11 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the crazy amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates how HPC is indispensable in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. This article also indicates solutions to optimization problems and the benefits of Big Data for computational biology. It illustrates the current state of the art and the future generation of HPC computing with Big Data.

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 363
10 Genome Sequencing, Assembly and Annotation of Gelidium Pristoides from Kenton-on-Sea, South Africa

Authors: Sandisiwe Mangali, Graeme Bradley

Abstract:

A genome is the complete set of an organism's hereditary information, encoded as deoxyribonucleic acid or, in RNA viruses, as ribonucleic acid. The three different types of genomes are the nuclear, mitochondrial, and plastid genomes, and their sequences, which are uncovered by genome sequencing, are an archive of all genetic information; they enable researchers to understand the composition of a genome and the regulation of gene expression, and provide information on how the whole genome works. These sequences enable researchers to explore the population structure, genetic variations, and recent demographic events in threatened species. In particular, genome sequencing refers to the process of figuring out the exact arrangement of the basic nucleotide bases of a genome, and the process through which all the aforementioned genomes are sequenced is referred to as whole or complete genome sequencing. Gelidium pristoides is a South African endemic Rhodophyta species that has been harvested in the Eastern Cape since the 1950s for its high economic value, which is one motivation for its sequencing. Its endemism further motivates its sequencing for conservation biology, as endemic species are more vulnerable to the anthropogenic activities endangering a species. Sequencing, mapping, and annotating the Gelidium pristoides genome is therefore the aim of this study. To accomplish this aim, the genomic DNA was extracted and quantified using the NucleoSpin Plant Kit, Qubit 2.0, and NanoDrop. Thereafter, the Ion Plus Fragment Library Kit was used for the preparation of a 600 bp library, which was then sequenced on the Ion S5 sequencing platform for two runs. The produced reads were quality-controlled and assembled with the SPAdes assembler with default parameters, and the genome assembly was quality-assessed with the QUAST software. From this assembly, the plastid and mitochondrial genomes were then sampled out using Gelidiales organellar genomes as search queries and ordered according to them using the Geneious software. The Qubit and NanoDrop instruments revealed A260/A280 and A230/A260 ratios of 1.81 and 1.52, respectively. A total of 30,792,074 reads were obtained, producing a total of 94,140 contigs with a sequence length of 217.06 Mbp, an N50 value of 3,072 bp, and a GC content of 41.72%. Total lengths of 179,281 bp and 25,734 bp were obtained for the plastid and mitochondrial genomes, respectively. Genomic data allow a clear understanding of the genomic constitution of an organism and are valuable as foundational information for studies of individual genes and for resolving the evolutionary relationships between organisms, including rhodophytes and other seaweeds.
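
A minimal sketch of the assembly and quality-assessment steps described above, driving SPAdes and QUAST from Python; the file and directory names are placeholders, and both tools are assumed to be installed and on the PATH.

```python
import subprocess

reads = "ion_s5_reads.fastq"  # placeholder: single-end Ion S5 reads

# Assemble with default parameters, as described in the abstract;
# --iontorrent enables SPAdes' Ion Torrent read-correction mode.
subprocess.run(
    ["spades.py", "--iontorrent", "-s", reads, "-o", "assembly"],
    check=True,
)

# Quality-assess the resulting contigs (total length, N50, GC%, ...).
subprocess.run(
    ["quast.py", "assembly/contigs.fasta", "-o", "quast_report"],
    check=True,
)
```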

Keywords: Gelidium pristoides, genome, genome sequencing and assembly, Ion S5 sequencing platform

Procedia PDF Downloads 150
9 A webGIS Methodology to Support Sediments Management in Wallonia

Authors: Nathalie Stephenne, Mathieu Veschkens, Stéphane Palm, Christophe Charlemagne, Jacques Defoux

Abstract:

According to Europe's first River Basin Management Plans (RBMPs), 56% of European rivers failed to achieve the good-status targets of the Water Framework Directive (WFD). In Central European countries such as Belgium, even more than 80% of rivers failed to achieve the WFD quality targets. Although the RBMPs should reduce the stressors and improve water body status, their potential to address multiple-stress situations is limited due to insufficient knowledge on combined effects, multi-stress, prioritization of measures, impact on ecology, and implementation effects. This paper describes a webGIS prototype developed for the Walloon administration to improve the communication and the management of the sediment dredging actions carried out in rivers and lakes in the frame of the RBMPs. A large number of stakeholders are involved in the management of rivers and lakes in Wallonia. They are in charge of technical aspects (clients and dredging operators, organizations involved in the treatment of waste, ...), management (managers involved in WFD implementation at the communal, provincial, or regional level), or policy making (people responsible for policy compliance or legislation revision). These different kinds of stakeholders need different information and data to carry out their duties but have to interact closely at different levels. Moreover, information has to be shared between them to improve the management quality of dredging operations within the ecological system. In Walloon legislation, levelling dredged sediments on banks requires an official authorization from the administration. This request refers to spatial information such as the official land use map, the cadastral map, and the distance to potential pollution sources. The production of a collective geodatabase can facilitate the management of these authorizations on both sides. The proposed internet system integrates documents, data input, integration of data from disparate sources, map representation, database queries, analysis of monitoring data, presentation of results, and cartographic visualization. A prototype web application using the API geoviewer chosen by the Geomatics department of the SPW has been developed and discussed with some potential users to facilitate the communication, the management, and the quality of the data. The structure of the paper states the why, what, who, and how of this communication tool.

Keywords: sediments, web application, GIS, rivers management

Procedia PDF Downloads 405
8 Stimulus-Response and the Innateness Hypothesis: Childhood Language Acquisition of “Genie”

Authors: Caroline Kim

Abstract:

Scholars have long disputed the relationship between the origins of language and human behavior. Historically, the behaviorist psychologist B. F. Skinner argued that language is one instance of the general stimulus-response phenomenon that characterizes the essence of human behavior. Another, more recent approach argues, by contrast, that language is an innate cognitive faculty and does not arise from behavior, which might develop and reinforce linguistic facility but is not its source. Pinker, among others, proposes that linguistic defects arise from damage to the brain, both congenital and acquired in life. Much of his argument is based on case studies in which damage to the Broca's and Wernicke's areas of the brain results in loss of the ability to produce coherent grammatical expressions when speaking or writing; though affected speakers often utter quite fluent streams of sentences, the words articulated lack discernible semantic content. Pinker concludes on this basis that language is an innate component of specific, classically language-correlated regions of the human brain. Taking a notorious 1970s case of linguistic maladaptation, this paper queries the dominant materialist paradigm of language-correlated regions. Susan "Genie" Wiley was physically isolated from language interaction in her home and beaten by her father when she attempted to make any sort of sound. Though without any measurable resulting damage to the brain, Wiley was never able to develop the level of linguistic facility normally achieved in adulthood. Having received negative reinforcement of language acquisition from her father and having missed the usual language acquisition period, Wiley was able to develop language only at a quite limited level in later life. From a contemporary behaviorist perspective, this case confirms the possibility of language deficiency without brain pathology. Wiley's potential language-determining areas of the brain were intact, and she was exposed to language later in her life, but she was unable to achieve the normal level of communication skills, deterring socialization. This phenomenon, and others like it in the limited case literature on linguistic maladaptation, poses serious clinical, scientific, and indeed philosophical difficulties for both of the major competing theories of language acquisition: innateness and linguistic stimulus-response. The implications of such cases for future research in language acquisition are explored, with a particular emphasis on the interaction of innate capacity and stimulus-based development in early childhood.

Keywords: behaviorism, innateness hypothesis, language, Susan "Genie" Wiley

Procedia PDF Downloads 291
7 An Artificially Intelligent Teaching-Agent to Enhance Learning Interactions in Virtual Settings

Authors: Abdulwakeel B. Raji

Abstract:

This paper introduces the concept of an intelligent virtual learning environment that involves communication between learners and an artificially intelligent teaching agent in an attempt to replicate classroom learning interactions. The benefit of this technology over current e-learning practices is that it creates a virtual classroom where real-time adaptive learning interactions are made possible. This is a move away from the static learning practices currently adopted by e-learning systems. Over the years, artificial intelligence has been applied to various fields, including but not limited to medicine, military applications, psychology, and marketing. The purpose of e-learning applications is to ensure users are able to learn outside of the classroom, but a major limitation has been the inability to fully replicate classroom interactions between teacher and students. This study used comparative surveys to gain information about and understanding of current learning practices in Nigerian universities and how these practices compare to the use of the developed e-learning system. The study was conducted by attending several lectures and noting the interactions between lecturers and tutors; subsequently, software was developed that deploys an artificially intelligent teaching agent alongside an e-learning system to enhance the user learning experience and attempt to recreate learning interactions similar to those found in classroom and lecture hall settings. Dialogflow has been used to implement the teaching agent, developed using JSON, which serves as a virtual teacher. Course content has been created using HTML, CSS, PHP, and JavaScript as a web-based application. This technology can run on handheld devices and Google-based home technologies to give learners access to the teaching agent at any time. The technology also implements definite clause grammars and natural language processing to match user inputs and requests with defined rules in order to replicate learning interactions. It covers familiar classroom scenarios such as answering users' questions, asking "Do you understand?" at regular intervals and answering subsequent requests, and taking advanced user queries to give feedback at other times. The software uses deep learning techniques to learn user interactions and patterns in order to subsequently enhance the user learning experience. System testing has been undertaken by undergraduate students in the UK and Nigeria on the course "Introduction to Database Development". Test results and feedback from users show that this study and the developed software are a significant improvement on existing e-learning systems. Further experiments are to be run using the software with different students and more course content.

Keywords: virtual learning, natural language processing, definite clause grammars, deep learning, artificial intelligence

Procedia PDF Downloads 135
6 Erasmus+ Program in Vocational Education: Effects of European International Mobility in Portuguese Vocational Schools

Authors: José Carlos Bronze, Carlinda Leite, Angélica Monteiro

Abstract:

The creation of the Erasmus Program in 1987 represented a milestone in promoting and funding international mobility in higher education in Europe. Its effects were so significant that they influenced the creation of the European Higher Education Area through the Bologna Process and ensured the program's continuation and maintenance. Over the last decades, the escalating numbers of participants and funds instigated significant scientific studies of the program's effects on higher education. More recently, in 2014, the program was renamed "Erasmus+" (E+) when it expanded into other fields of education, namely Vocational Education and Training (VET). Despite having now run in this field of education for a decade (2014-2024), its effects on VET remain less studied and less known, while the higher education field keeps attracting researchers' attention. Given this gap, it becomes relevant to study the effects of E+ on VET, particularly in the priority domains of the program: "Inclusion and Diversity," "Participation in Democratic Life, Common Values and Civic Engagement," "Environment and Fight Against Climate Change," and "Digital Transformation." The latter has recently been emphasized due to the COVID-19 pandemic, which forced the so-called emergency remote teaching, leading schools to transform quickly and adapt to a new reality regardless of the preparedness levels of teachers and students. Together with the remaining E+ priorities, it relates directly to an emancipatory perspective of education sustained by soft skills such as critical thinking, intercultural awareness, autonomy, active citizenship, teamwork, and problem-solving, among others. Based on this situation, it is relevant to know the effects of E+ on the VET field, namely by questioning how international mobility instigates digitalization processes and supports emancipatory aims therein. As VET is an education field more directly connected to hard skills and an instrumental approach oriented to the labor market's needs, a study was conducted to determine the effects of international mobility on the development of digital literacy and soft skills in the VET field. In methodological terms, the study used semi-structured interviews with teaching and non-teaching staff from three VET schools that are strongly active in the E+ Program. The interviewees were three headmasters, four mobility project managers, and eight teachers experienced in international mobility. The data were subjected to qualitative content analysis using the NVivo 14 application. The results show that E+ international mobility promotes and facilitates the use of digital technologies as a pedagogical resource at VET schools and enhances and generates students' soft skills. In conclusion, E+ mobility in the VET field supports the adoption of the program's priorities by increasing teachers' knowledge and use of digital resources and by amplifying and generating participants' soft skills.

Keywords: Erasmus international mobility, digital literacy, soft skills, vocational education and training

Procedia PDF Downloads 32
5 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end, open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract the top K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process using the MS MARCO dataset, trained on 500K queries, to extract the most relevant text passages and thus shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in 2016. The use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, if the query is "Where will the next Olympics be?", the gold answer for this query as given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games in Tokyo, this is absolutely correct. But if the same question is asked in 2022, then the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
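
The paper's exact metric is not reproduced here; the sketch below illustrates the underlying idea under the assumption that gold answers are stored with validity intervals, so a prediction is judged against whichever answer is valid at the current timestamp. The question, answers, and dates are illustrative.

```python
from datetime import date

# Hypothetical time-dependent gold answers: (answer, valid_from, valid_to).
gold = {
    "where will be the next olympics": [
        ("Tokyo", date(2016, 8, 22), date(2021, 8, 8)),
        ("Paris", date(2021, 8, 9), date(2024, 8, 11)),
    ],
}

def time_aware_match(question: str, top_n_answers: list[str],
                     today: date) -> bool:
    """True if any of the system's top-n answers matches the gold
    answer that is valid at the evaluation timestamp."""
    valid = [a for a, start, end in gold[question] if start <= today <= end]
    return any(pred.lower() in (g.lower() for g in valid)
               for pred in top_n_answers)

print(time_aware_match("where will be the next olympics",
                       ["Paris", "Los Angeles"], date(2022, 6, 1)))  # True
```

Under this scheme, the 2016-era gold answer "Tokyo" is no longer credited in 2022, which is exactly the failure mode of the static exact-match evaluation that the proposed metric corrects.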

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 101
4 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens

Authors: R. Tamborrino, F. Rinaudo

Abstract:

Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge able to provide a guide for the interpretation of historical phenomena. Digital conversion and management of those documents make it possible to add other sources in a unique and coherent model that permits the intersection of different data, able to open new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g., cadastres, censuses, etc.). The geographic localisation of that information within cartographic supports allows for the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of other related events. In all these kinds of analysis, the use of GIS platforms today represents the most appropriate answer. The design of the related databases is the key to realising the ad-hoc instrument that facilitates the analysis and the intersection of data of different origins. Moreover, GIS has become the digital platform where it is possible to add other kinds of data visualisation. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories realized just prior to WWI provides the opportunity to test the potentialities of GIS platforms for the analysis of urban landscape modifications during the first industrial development of the town. The inventory includes data about location, activities, and people. The GIS is shaped in a creative way, linking different sources and digital systems with the aim of creating a new type of platform conceived as an interface integrating different kinds of data visualisation. The data processing allows linking this information to the urban space and also visualising the growth of the city at that time. The sources related to the urban landscape development in that period are of a different nature. The emerging necessity to build, enlarge, modify, and join different buildings to boost the industrial activities, in line with their fast development, is recorded in the various official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, which are reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data offer the possibility of, firstly, re-building the process of change of the urban landscape by using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections, and elevations) that show the previous and the planned situations. Furthermore, they give access to information for different queries of the linked dataset that could be useful for different research targets, such as economic, biographical, architectural, or demographic ones. By superimposing a layer of the present city, the past meets the present industrial heritage, and people meet urban history.

Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities

Procedia PDF Downloads 191
3 Ethical Decision-Making in AI and Robotics Research: A Proposed Model

Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet

Abstract:

Researchers in the fields of AI and Robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit in transparency within AI operations. This research aims to pinpoint the ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from the existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-cumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model will incorporate influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on several months of direct observation of an AI/Robotics research team focusing on collaborative robotics. Subsequently, semi-structured interviews with 16 team members were conducted. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using Nvivo software. An initial finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in the realm of AI. Notably, researchers in AI prioritize the publication and recognition of their work, which sparks the genesis of these ethical inquiries. Furthermore, this article illustrates that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/Robotics research sphere. The model proposed adopts a process-oriented approach, delineating the various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical queries they entail, a comprehensive four-point account of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti-)capitalism, environmentalism, veganism) seem to wield significant influence, and age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, that enables the identification of the factors shaping ethical decision-making at different stages of the research process.

Keywords: ethical decision making, artificial intelligence, robotics, research

Procedia PDF Downloads 79
2 Stakeholder Perception in the Role of Short-term Accommodations on the Place Brand and Real Estate Development of Urban Areas: A Case Study of Malate, Manila

Authors: Virgilio Angelo Gelera Gener

Abstract:

This study investigates the role of short-term accommodations in the place brand and real estate development of urban areas. It aims to capture the perceptions of the general public, real estate developers, and city- and barangay-level local government units (LGUs) on how these lodgings affect the place brand and land value of a community. It likewise attempts to identify the personal and institutional variables that strongly influence said perceptions, in order to provide a better understanding of these establishments and their relevance within urban localities. Based on a review of relevant sources, Malate, Manila was identified as the ideal study area of the thesis. This prompted the employment of mixed-methods research as the study’s fundamental data-gathering and analytical tool. Here, a survey of 350 locals was conducted, asking them questions that would answer the aforementioned queries. Thereafter, a Pearson chi-square test and multinomial logistic regression (MLR) were utilized to determine the variables affecting their perceptions. There were also Focus Group Discussions (FGDs) with the three (3) most populated Malate barangays, as well as Key Informant Interviews (KIIs) with selected city officials and fifteen (15) real estate company representatives. Survey results showed that although a 1992 Department of Tourism (DOT) Circular regards short-term accommodations as lodgings mainly for travelers, most people actually use them for their private/intimate moments. Accordingly, the survey further revealed that short-term accommodations carry a negative place brand among the respondents, though respondents also believe that these establishments remain among society’s most important economic players. Statistics from the Pearson chi-square test indicate that fourteen (14) out of seventeen (17) variables exhibit great influence on respondents’ perceptions, while MLR findings show that being born in Malate and being part of a family household were the most significant variables, regardless of socio-economic level and monthly household income. The city officials revealed that said lodgings are actually the second-highest earners in the City’s lodging industry. It was further stated that the zoning ordinance treats short-term accommodations just like any other lodging enterprise, so it is perfectly legal for these establishments to situate themselves near residential areas and/or institutional structures. Sit-downs with the barangays, on the other hand, confirmed the economic benefits of short-term accommodations but likewise acknowledged that they contribute a negative place brand to the community. Lastly, real estate developers are amenable to having their projects built near short-term accommodations, as they hold no negative views of these establishments; they explained that their project sites have always been chosen on suitability, liability, and marketability grounds only. Overall, these findings merit a recalibration of the zoning ordinance and the DOT Circular, as well as the imposition of regulations on the establishments’ sexually suggestive roadside advertisements. Once relevant measures are refined for proper implementation, they can also pave the way for spatial interventions (like visual buffer corridors) to better address the needs of locals, private groups, and government.
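
For readers unfamiliar with the two statistical procedures named above, the following is a hedged sketch of how a Pearson chi-square test and a multinomial logistic regression of this kind are commonly run in Python with SciPy and statsmodels; the study does not state which software it used, and the file and column names (survey_responses.csv, perception, born_in_malate, family_household) are invented stand-ins for the survey variables.

import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Hypothetical 350-respondent survey file with 0/1-coded respondent traits
# and a categorical "perception" outcome (e.g. negative / neutral / positive).
df = pd.read_csv("survey_responses.csv")

# Pearson chi-square: does place-brand perception vary with birthplace?
table = pd.crosstab(df["born_in_malate"], df["perception"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")

# Multinomial logistic regression: perception regressed on respondent traits.
X = sm.add_constant(df[["born_in_malate", "family_household"]].astype(float))
y = df["perception"].astype("category").cat.codes
model = sm.MNLogit(y, X).fit(disp=False)
print(model.summary())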

Keywords: estate planning, place brand, real estate development, short-term accommodations

Procedia PDF Downloads 165
1 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed on the basis of video data, with the aim of uncovering so-called “multimodal gestalts”: patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive manual transcription of each component of the video material. Automating these tasks requires advanced programming skills, which often lie outside the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but insufficient for quantitatively derived, generalized empirical claims. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data. The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format, enable the import of the main existing formats of annotated video data and the export to other formats used in the field, and integrate different data source formats so that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (one that can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
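
The parallel, token-level plus chronological querying described above can be illustrated with a short Python sketch; this is not VIAN-DH's actual code, and the annotation layers and values below are invented examples of what speech recognition, gesture recognition, and grammatical annotation might produce.

from dataclasses import dataclass

@dataclass
class Annotation:
    start: float  # seconds into the video
    end: float
    layer: str    # e.g. "token", "pos", "gesture"
    value: str

# Invented time-aligned annotations over one video.
annotations = [
    Annotation(3.2, 3.5, "token", "there"),
    Annotation(3.2, 3.5, "pos", "ADV"),
    Annotation(3.1, 3.8, "gesture", "pointing"),
]

def overlaps(a: Annotation, b: Annotation) -> bool:
    # Two annotations co-occur if their time spans intersect.
    return a.start < b.end and b.start < a.end

def parallel_query(anns, layer_a, value_a, layer_b, value_b):
    # Cross-layer search: find pairs such as a token uttered while
    # a pointing gesture is being produced.
    hits_a = [a for a in anns if a.layer == layer_a and a.value == value_a]
    hits_b = [b for b in anns if b.layer == layer_b and b.value == value_b]
    return [(a, b) for a in hits_a for b in hits_b if overlaps(a, b)]

for tok, ges in parallel_query(annotations, "token", "there", "gesture", "pointing"):
    print(f"'{tok.value}' at {tok.start}s co-occurs with a {ges.value} gesture")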

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 108