Search results for: metadata tags
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 139

19 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis

Authors: Inigo Beckett

Abstract:

In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g., a 'failure' or fire event), the evidence is changed or destroyed. To an image analysis expert involved in photogrammetric analysis for Civil or Criminal Proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal axis. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow. Ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that for a given pixel ratio of directly measured collinear points (i.e., features that lie on the same line segment) from the 2D digital photograph with respect to a given viewing point, we can constrain the 3D camera position to the surface of a sphere in the scene. Depending on what we know about the ladder, we can enforce another independent constraint on the possible camera positions, which enables us to narrow the possible positions even further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a ladder on a horizontally flat plane resting against a vertical wall. The real-world data was captured using an Apple iPhone 13 Pro together with 3D laser scan survey data, with a ladder placed at a known location and angle to the vertical axis. For each case, we calculated the camera positions and ladder angles using this method and cross-compared them against their respective 'true' values.
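The workflow itself hinges on the projective-geometry constraints described above, but its end product reduces to simple trigonometry. As a minimal sketch (not the authors' pipeline), the following compares a ladder angle computed from hypothetical reconstructed base and height measurements against the HSE 75-degree guidance:

```python
import math

def ladder_angle_deg(base_m: float, height_m: float) -> float:
    """Angle of the ladder to the horizontal, from the horizontal
    distance of its feet to the wall and the height of its contact point."""
    return math.degrees(math.atan2(height_m, base_m))

# Hypothetical values recovered from a photogrammetric reconstruction.
base, height = 1.2, 4.0
angle = ladder_angle_deg(base, height)
print(f"ladder angle: {angle:.1f} deg "
      f"({'within' if abs(angle - 75.0) <= 5.0 else 'outside'} "
      "a 5-degree tolerance of the HSE 75-degree guidance)")
```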

Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear, cameras, photographs

Procedia PDF Downloads 49
18 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions

Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri

Abstract:

Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus will be taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus will be illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse, and it will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention will be dedicated to the structural mark-up, the parts-of-speech annotation, and the tagging of discursive traits, which are the innovative parts of the project, being the result of a thorough study to find the best solution to suit the analytical needs of the data. Several aspects will be addressed, with special attention to the tagging of the speakers' identity, the communicative events, and anthropophagic. Prominence will be given to the annotation of question/answer exchanges to investigate the interlocutors' choices and how such choices impact communication. Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements will be given using the InterDiplo-Covid19 pilot corpus. Our preliminary analysis of the data will highlight the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation, to be included via Oxygen.
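To make this kind of mark-up concrete, here is a minimal sketch that builds a machine-readable question/answer exchange with speaker-identity attributes; the element and attribute names are illustrative stand-ins, not the actual InterDiplo tag set:

```python
import xml.etree.ElementTree as ET

# Build a minimal fragment for one question/answer exchange.
# Element and attribute names are illustrative only; they are not the
# actual InterDiplo tag set.
exchange = ET.Element("exchange", attrib={"id": "ex01", "event": "media-interview"})
q = ET.SubElement(exchange, "u", attrib={
    "who": "journalist-01", "role": "interviewer", "type": "question"})
q.text = "How has the pandemic changed diplomatic practice?"
a = ET.SubElement(exchange, "u", attrib={
    "who": "diplomat-01", "role": "interviewee", "type": "answer"})
a.text = "It has moved much of our negotiation work online."

ET.indent(exchange)  # pretty-print; available in Python 3.9+
print(ET.tostring(exchange, encoding="unicode"))
```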

Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics

Procedia PDF Downloads 184
17 Effects of Ubiquitous 360° Learning Environment on Clinical Histotechnology Competence

Authors: Mari A. Virtanen, Elina Haavisto, Eeva Liikanen, Maria Kääriäinen

Abstract:

Rapid technological development and digitalization have also affected higher education. During the last twenty years, multiple electronic and mobile learning (e-learning, m-learning) platforms have been developed and have become prevalent in many universities and in all fields of education. Ubiquitous learning (u-learning) is not as widely known or used. Ubiquitous learning environments (ULE) are the new era of computer-assisted learning. They are based on ubiquitous technology and computing that fuse the learner seamlessly into the learning process by using sensing technologies such as tags, badges or barcodes and smart devices like smartphones and tablets. A ULE combines real-life learning situations with virtual aspects and can be used flexibly anytime and anyplace. The aim of this study was to assess the effects of a ubiquitous 360° learning environment on higher education students’ clinical histotechnology competence. A quasi-experimental study design was used. 57 students in a biomedical laboratory science degree program were assigned voluntarily to an experimental (n=29) and a control group (n=28). The experimental group studied via the ubiquitous 360° learning environment and the control group via a traditional web-based learning environment (WLE) in an 8-week educational intervention. The ubiquitous 360° learning environment (ULE) combined an authentic learning environment (a histotechnology laboratory), a digital environment (a virtual laboratory), a virtual microscope, multimedia learning content, interactive communication tools, an electronic library, and quick response barcodes placed in the authentic laboratory. The web-based learning environment contained equal content and components, with the exception of the use of a mobile device, the interactive communication tools, and the quick response barcodes. Competence in clinical histotechnology was assessed by using a knowledge test and a self-report instrument developed for this study. Data were collected electronically before and after the clinical histotechnology course and analysed by using descriptive statistics. Differences within groups were identified by using the Wilcoxon test and differences between groups by using the Mann-Whitney U-test. Statistically significant differences were identified within both groups (p<0.001): competence scores in the post-test were higher than in the pre-test in both groups. Differences between groups were very small and not statistically significant. In this study, the learning environment was developed based on 360° technology and successfully implemented into a higher education context, and students’ competence increased when the ubiquitous learning environment was used. In the future, ULEs can be used as learning management systems for any learning situation in health sciences. More studies are needed to show differences between ULE and WLE.
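As an illustration of the statistical comparison reported above, the sketch below runs within-group Wilcoxon signed-rank tests and a between-group Mann-Whitney U test on invented pre/post scores (scipy assumed; these are not the study's data):

```python
from scipy.stats import wilcoxon, mannwhitneyu

# Hypothetical pre/post knowledge-test scores (not the study's data).
ule_pre, ule_post = [12, 14, 11, 15, 13, 10], [18, 19, 17, 20, 18, 16]
wle_pre, wle_post = [13, 12, 14, 11, 15, 12], [17, 18, 19, 16, 18, 17]

# Within-group change (paired samples): Wilcoxon signed-rank test.
print("ULE pre vs post:", wilcoxon(ule_pre, ule_post))
print("WLE pre vs post:", wilcoxon(wle_pre, wle_post))

# Between-group comparison of post-test scores: Mann-Whitney U test.
print("ULE vs WLE post:", mannwhitneyu(ule_post, wle_post))
```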

Keywords: competence, higher education, histotechnology, ubiquitous learning, u-learning, 360°

Procedia PDF Downloads 284
16 The Policia Internacional e de Defesa do Estado 1933–1969 and Valtiollinen Poliisi 1939–1948 on Screen: Comparing and Contrasting the Images of the Political Police in Portuguese and Finnish Films between the 1930s and the 1960s

Authors: Riikka Elina Kallio

Abstract:

The phrase “the walls have ears” defines the era of dictatorship in Portugal (1926–1974) and the decades of political unrest in Finland (1917–1948). The phrase refers to political policing by the secret police: the PIDE (Policia Internacional e de Defesa do Estado, 1933–1969) in Portugal and the VALPO (Valtiollinen Poliisi, 1939–1948) in Finland. Free speech in any public space, and even at private events, could be fatal: members of the PIDE/VALPO or their informers and collaborators could be listening. Strict censorship under Salazar's regime controlled the media, for example newspapers, music, and the film industry. Similarly, politically driven censorship influenced the media in Finland during those decades of unrest. This article examines the similarities and differences in the images of the political police in Finland and Portugal by analyzing Finnish and Portuguese films from the 1930s to the 1960s. The text addresses two main research questions: what are the common and different features in the representations of the Finnish and Portuguese political police in films between the 1930s and 1960s, and how did national censorship affect these representations? The approach of this study is interdisciplinary, combining film studies and criminology. Close reading is a practical qualitative method for analyzing films, and in this study close reading emphasizes the features of the police officer. Criminology provides the methodological tools for the analysis of universal features of the police and common European policies. The characterization of the police in this study is based on Robert Reiner's 1980s and Timo Korander's 2010s definitions of the police officer. The research material consisted of Portuguese films from online film archives and Finnish films from the Movie Making Finland project's metadata, which offered suitable material by data mining keywords such as poliisi, poliisipäällikkö and konstaapeli (police, police chief, police constable). The findings of this study suggest that even though there are common features in the images of the political police in Finland and Portugal, there are still national and cultural differences in the representations of the political police and policing.
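The keyword-based data mining step can be pictured with a small sketch: filtering a film-metadata table for the Finnish police terms named above. The records are invented placeholders, not the Movie Making Finland metadata:

```python
import pandas as pd

# Hypothetical film-metadata records; the real study mined the Movie
# Making Finland project's metadata with these Finnish keywords.
films = pd.DataFrame([
    {"title": "Film A", "year": 1938, "keywords": "poliisi, draama"},
    {"title": "Film B", "year": 1952, "keywords": "komedia, konstaapeli"},
    {"title": "Film C", "year": 1961, "keywords": "sota, rakkaus"},
])

police_terms = ["poliisi", "poliisipäällikkö", "konstaapeli"]
pattern = "|".join(police_terms)
hits = films[films["keywords"].str.contains(pattern, case=False)]
print(hits[["title", "year"]])
```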

Keywords: censorship, film studies, images, PIDE, political police, VALPO

Procedia PDF Downloads 71
15 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under real experimental conditions, resolving functionally similar proteins. The theoretical analysis, the protein labeler program, the finite-difference time-domain calculation of plasmonic fields, and the simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of the temporal distributions of protein translocation dwell times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
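For a sense of the classification component, the sketch below defines a small 1D convolutional network over three-channel intensity traces, standing in for reads of tri-color tagged proteins; the architecture, trace length, and class count are illustrative assumptions, not the paper's model (PyTorch assumed):

```python
import torch
import torch.nn as nn

# Minimal sketch of a 1D CNN over three-channel (tri-color) intensity
# traces; the architecture, trace length, and class count are
# illustrative, not the paper's model.
N_CLASSES = 20    # hypothetical size of a reference proteome subset
TRACE_LEN = 512   # hypothetical number of time samples per read

model = nn.Sequential(
    nn.Conv1d(3, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, N_CLASSES),
)

batch = torch.randn(8, 3, TRACE_LEN)   # 8 simulated translocation reads
logits = model(batch)
print(logits.shape)                    # torch.Size([8, 20])
```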

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 94
14 Decolonizing Print Culture and Bibliography Through Digital Visualizations of Artists’ Books at the University of Miami

Authors: Alejandra G. Barbón, José Vila, Dania Vazquez

Abstract:

This study seeks to contribute to the advancement of library and archival sciences in the areas of records management, knowledge organization, and information architecture, particularly focusing on the enhancement of bibliographical description through the incorporation of visual interactive designs aimed to enrich the library users’ experience. In an era of heightened awareness about the legacy of hiddenness across special and rare collections in libraries and archives, along with the need for inclusivity in academia, the University of Miami Libraries has embarked on an innovative project that intersects the realms of print culture, decolonization, and digital technology. This proposal presents an exciting initiative to revitalize the study of Artists’ Books collections by employing digital visual representations to decolonize bibliographic records of some of the most unique materials and foster a more holistic understanding of cultural heritage. Artists' Books, a dynamic and interdisciplinary art form, challenge conventional bibliographic classification systems, making them ripe for the exploration of alternative approaches. This project involves the creation of a digital platform that combines multimedia elements for digital representations, interactive information retrieval systems, innovative information architecture, trending bibliographic cataloging and metadata initiatives, and collaborative curation to transform how we engage with and understand these collections. By embracing the potential of technology, we aim to transcend traditional constraints and address the historical biases that have influenced bibliographic practices. In essence, this study showcases a groundbreaking endeavor at the University of Miami Libraries that seeks to not only enhance bibliographic practices but also confront the legacy of hiddenness across special and rare collections in libraries and archives while strengthening conventional bibliographic description. By embracing digital visualizations, we aim to provide new pathways for understanding Artists' Books collections in a manner that is more inclusive, dynamic, and forward-looking. This project exemplifies the University’s dedication to fostering critical engagement, embracing technological innovation, and promoting diverse and equitable classifications and representations of cultural heritage.

Keywords: decolonizing bibliographic cataloging frameworks, digital visualizations information architecture platforms, collaborative curation and inclusivity for records management, engagement and accessibility increasing interaction design and user experience

Procedia PDF Downloads 72
15 Modification of Escherichia coli pTolT Expression Vector via Site-Directed Mutagenesis

Authors: Yakup Ulusu, Numan Eczacıoğlu, İsa Gökçe, Helen Waller, Jeremy H. Lakey

Abstract:

Besides having the appropriate amino acid sequence, it is important for a protein to attain the correct conformation in order to perform its function. This conformation depends on the primary amino acid sequence, hydrophobic interactions, and the chaperones and enzymes in charge of folding, among other factors. Misfolded proteins are not functional and tend to aggregate. Disulfide cross-links originating from cysteine residues stabilize the conformation of functional proteins. When two cysteine residues come side by side, a disulfide bond is established, forming a cystine bridge. Due to this feature, cysteine plays an important role in the formation of the three-dimensional structure of many proteins. There are two cysteine residues (C44, C69) in the Tol-A-III protein. Unlike a protein's own native disulfide bonds, any non-specific cystine bridge causes a change in the three-dimensional structure of the protein. Proteins can be expressed in various host cells either directly or as fusion (chimeric) proteins. As a result of overproduction of recombinant proteins, insoluble protein can aggregate in the host cell, forming dense deposits called inclusion bodies. In general, fusion proteins are produced to provide affinity tags, to make proteins more soluble, or to enable the production of toxic proteins, as in fusion protein expression systems like pTolT. Proteins can be modified by using site-directed mutagenesis. In this way, the creation of non-specific disulfide cross-links in a fusion protein expression system can be prevented by replacing the cysteine concerned with another amino acid such as serine or glycine. To do this, we need a DNA molecule containing the gene that encodes the target protein, and primers designed for the site-directed mutagenesis reaction. This study aimed to replace the cysteine-encoding codon TGT with the serine-encoding codon AGT. For this purpose, sense and reverse primers were designed (given below) and used in the site-directed mutagenesis reaction. Several new copies of the template plasmid DNA were formed with the above-mentioned mutagenic primers via the polymerase chain reaction (PCR). The PCR product consists of both the master template DNA (wild type) and the new DNA sequences containing the mutation. The DpnI restriction endonuclease, which is specific for methylated DNA, cuts the methylated strands, eliminating the master template DNA. E. coli cells obtained after transformation were incubated in LB medium with antibiotic. After purification of plasmid DNA from E. coli, the presence of the mutation was confirmed by DNA sequence analysis. This newly developed plasmid is called pTolT-δ.
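The codon-replacement step can be sketched in a few lines: locate the TGT codon, substitute AGT, and extract a primer with flanking sequence. The template sequence and flank length below are hypothetical; the study's actual primer sequences are not reproduced here:

```python
# Sketch of designing a mutagenic sense primer: replace the cysteine
# codon TGT with the serine codon AGT and take ~15 nt of flanking
# sequence on each side. The template sequence here is hypothetical.
template = "ATGGCTGCTACCGGTTGTAAAGGTGAAGAACTGTTC"
codon_pos = template.find("TGT")               # locate the target codon
mutated = template[:codon_pos] + "AGT" + template[codon_pos + 3:]

flank = 15
sense = mutated[max(0, codon_pos - flank): codon_pos + 3 + flank]

COMPLEMENT = str.maketrans("ACGT", "TGCA")
antisense = sense.translate(COMPLEMENT)[::-1]  # reverse complement

print("sense primer:    ", sense)
print("antisense primer:", antisense)
```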

Keywords: site directed mutagenesis, Escherichia coli, pTolT, protein expression

Procedia PDF Downloads 374
12 The Social Ecology of Serratia entomophila: Pathogen of Costelytra giveni

Authors: C. Watson, T. Glare, M. O'Callaghan, M. Hurst

Abstract:

The endemic grass grub (Costelytra giveni, Coleoptera: Scarabaeidae) is an economically significant grassland pest in New Zealand. Due to its impact on production within the agricultural sector, one of New Zealand's primary industries, several methods are being used to either control or prevent the establishment of new grass grub populations in pasture. One such method involves the use of a biopesticide based on the bacterium Serratia entomophila. This species is one of the causative agents of amber disease, a chronic disease of the larvae which results in death via septicaemia after approximately 2 to 3 months. The ability of S. entomophila to cause amber disease is dependent upon the presence of the amber disease associated plasmid (pADAP), which encodes the key virulence determinants required for the establishment and maintenance of the disease. Following the collapse of grass grub populations within the soil, resulting from either natural population build-up or application of the bacteria, non-pathogenic plasmid-free Serratia strains begin to predominate in the soil. Whilst the interactions between S. entomophila and grass grub larvae are well studied, less is known about the interactions between plasmid-bearing and plasmid-free strains, particularly the potential impact of these interactions upon the efficacy of an applied biopesticide. Using a range of constructed strains with antibiotic tags, in vitro (broth culture) and in vivo (soil and larvae) experiments were conducted using inoculants comprised of differing ratios of isogenic pathogenic and non-pathogenic Serratia strains, enabling the relative growth of pADAP+ and pADAP- strains under competition conditions to be assessed. In nutrient-rich broth culture, the non-pathogenic pADAP- strain outgrew the pathogenic pADAP+ strain by day 3 when inoculated in equal quantities, and by day 5 when applied as the minority inoculant; however, there was an overall gradual decline in the number of viable bacteria of both strains over a 7-day period. Similar results were obtained in additional experiments using the same strains and continuous broth cultures re-inoculated at 24-hour intervals, although in these cultures the viable cell count did not diminish over the 7-day period. When the same ratios were assessed in soil microcosms with limited available nutrients, the strains remained relatively stable over a 2-month period. Additionally, in vivo grass grub co-infection assays using the same ratios of tagged Serratia strains revealed similar results to those observed in the soil, but there was also evidence of horizontal transfer of pADAP from the pathogenic to the non-pathogenic strain within the larval gut after a period of 4 days. Whilst the influence of competition is more apparent in broth cultures than within the soil or larvae, further testing is required to determine whether this competition between pathogenic and non-pathogenic Serratia strains has any influence on efficacy and disease progression, and how this may impact the ability of S. entomophila to cause amber disease in grass grub larvae when applied as a biopesticide.

Keywords: biological control, entomopathogen, microbial ecology, New Zealand

Procedia PDF Downloads 154
16 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL be the absolute tool for data classification? All current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity; in doing so, they obtain an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision trees, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super-parameters used in the Neurops. By varying these 2 super-parameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR; the total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
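A minimal sketch of the NIC-to-image idea, with invented sizes far smaller than the 1166x1167-pixel images described above: each NIC yields a 2D grid of probabilities that becomes a grey-level tile, and the tiles are mosaicked into one per-variable image:

```python
import numpy as np

# Sketch: turn a variable's grid of NIC probabilities (one value per
# pair of Neurop super-parameter settings) into an 8-bit grey-level
# tile, then tile the 10 NICs into one image. Sizes are illustrative,
# far smaller than the paper's 1166x1167-pixel images.
rng = np.random.default_rng(0)
N_NICS, GRID = 10, 32

nic_grids = rng.random((N_NICS, GRID, GRID))   # probabilities in [0, 1]
tiles = (nic_grids * 255).astype(np.uint8)     # probability -> grey level

# Lay the 10 NIC tiles out in a 2x5 mosaic: one image per variable.
image = np.block([[tiles[r * 5 + c] for c in range(5)] for r in range(2)])
print(image.shape, image.dtype)                # (64, 160) uint8
```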

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 125
10 Studying Language of Immediacy and Language of Distance from a Corpus Linguistic Perspective: A Pilot Study of Evaluation Markers in French Television Weather Reports

Authors: Vince Liégeois

Abstract:

Language of immediacy and distance: Within their discourse theory, Koch & Oesterreicher establish a distinction between a language of immediacy and a language of distance. The former refers to those discourses which are oriented more towards a spoken norm, whereas the latter entails discourses oriented towards a written norm, regardless of whether they are realised phonically or graphically. This means that an utterance can be realised phonically but oriented more towards the written language norm (e.g., a scientific presentation or eulogy) or realised graphically but oriented towards a spoken norm (e.g., a scribble or chat messages). Research desiderata: The methodological approach of Koch & Oesterreicher has often been criticised for not providing a corpus-linguistic methodology, which makes it difficult to work with quantitative data or address large text collections within this research paradigm. Consequently, the Koch & Oesterreicher approach has difficulties gaining ground in those research areas which rely more on corpus-linguistic research models, like text linguistics and LSP research. A combinatory approach: Accordingly, we want to establish a combinatory approach with corpus-based linguistic methodology. To this end, we propose to (i) include data about the context of an utterance (e.g., monologicity/dialogicity, familiarity with the speaker), which were called "conditions of communication" in the original work of Koch & Oesterreicher, and (ii) correlate the linguistic phenomenon at the centre of the inquiry (e.g., evaluation markers) to a group of linguistic phenomena deemed typical for either distance- or immediacy-language. Based on these two parameters, linguistic phenomena and texts can then be mapped on an immediacy-distance continuum. Pilot study: To illustrate the benefits of this approach, we will conduct a pilot study on evaluation phenomena in French television weather reports, a form of domain-sensitive discourse which has often been cited as an example of a "text genre". Within this text genre, we will look at so-called "evaluation markers", e.g., fixed strings like "bad weather", "stifling hot", and "no luck today!". These evaluation markers help to communicate the coming weather situation to the lay audience but have not yet been studied within the Koch & Oesterreicher research paradigm. Accordingly, we want to figure out whether said evaluation markers are more typical for those weather reports which tend more towards immediacy or for those which tend more towards distance. To this aim, we collected a corpus with different kinds of television weather reports, e.g., as part of the news broadcast, including dialogue. The evaluation markers themselves will be studied according to the methodology explained above, by correlating them to (i) metadata about the context and (ii) linguistic phenomena characterising immediacy-language: repetition, deixis (personal, spatial, and temporal), a freer choice of tense, and right-/left-dislocation. Results: Our results indicate that evaluation markers are more dominantly present in those weather reports inclining towards immediacy-language. Based on the methodology established above, we have gained more insight into the workings of evaluation markers in the domain-sensitive text genre of (television) weather reports. For future research, it will be interesting to determine whether said evaluation markers are also typical of immediacy-oriented language in other domain-sensitive discourses.
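The correlation step at the heart of the proposed methodology can be sketched with toy counts (invented, not the pilot corpus data): per weather report, tally evaluation markers and immediacy-typical phenomena, then correlate the two:

```python
import numpy as np

# Toy sketch of the proposed correlation step: per report, count
# evaluation markers and immediacy-typical phenomena (repetition,
# deixis, dislocation), then correlate the two. Counts are invented.
eval_markers = np.array([2, 5, 1, 7, 4, 6])    # per weather report
immediacy    = np.array([3, 9, 2, 11, 6, 10])  # summed immediacy features

r = np.corrcoef(eval_markers, immediacy)[0, 1]
print(f"Pearson r between evaluation markers and immediacy features: {r:.2f}")
```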

Keywords: corpus-based linguistics, evaluation markers, language of immediacy and distance, weather reports

Procedia PDF Downloads 218
9 Towards Visual Personality Questionnaires Based on Deep Learning and Social Media

Authors: Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, Xavier Roca

Abstract:

Image sharing in social networks has increased exponentially in the past years. Officially, there are 600 million Instagrammers uploading around 100 million photos and videos per day. Consequently, there is a need for developing new tools to understand the content expressed in shared images, which will greatly benefit social media communication and will enable broad and promising applications in education, advertisement, entertainment, and also psychology. Following these trends, our work aims to take advantage of the existing relationship between text and personality, already demonstrated by multiple researchers, to show that there also exists a relationship between images and personality. To achieve this goal, we consider that images posted on social networks are typically conditioned on specific words, or hashtags; therefore, any relationship between text and personality can also be observed in those posted images. Our proposal makes use of the most recent image understanding models based on neural networks to process the vast amount of data generated by social users to determine those images most correlated with personality traits. The final aim is to train a weakly supervised image-based model for personality assessment that can be used even when textual data is not available, which is an increasing trend. The procedure is described next: we explore the images directly and publicly shared by users based on those accompanying texts or hashtags most strongly related to personality traits as described by the OCEAN model. These images will be used for personality prediction since they have the potential to convey more complex ideas, concepts, and emotions. As a result, the use of images in personality questionnaires will provide a deeper understanding of respondents than words alone. In other words, from the images posted with specific tags, we train a deep learning model based on neural networks that learns to extract a personality representation from a picture and uses it to automatically find the personality that best explains such a picture. Subsequently, a deep neural network model is learned from thousands of images associated with hashtags correlated to OCEAN traits. We then analyze the network activations to identify those pictures that maximally activate the neurons: the most characteristic visual features per personality trait will thus emerge, since the filters of the convolutional layers of the neural model are learned to be optimally activated depending on each personality trait. For example, among the pictures that maximally activate the high Openness trait, we can see pictures of books, the moon, and the sky. For high Conscientiousness, most of the images are photographs of food, especially healthy food. The high Extraversion output is mostly activated by pictures of many people. In high Agreeableness images, we mostly see flower pictures. Lastly, for the Neuroticism trait, we observe that the high score is maximally activated by pet animals like cats or dogs. In summary, despite the huge intra-class and inter-class variabilities of the images associated with each OCEAN trait, we found that there are consistencies between the visual patterns of those images whose hashtags are most correlated to each trait.
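A minimal sketch of the activation-analysis step described above: score a batch of images by the mean activation of one convolutional channel and keep the top scorers. The backbone, the layer choice, and the random placeholder images are assumptions for illustration (PyTorch/torchvision assumed):

```python
import torch
import torchvision.models as models

# Sketch of the activation-analysis step: score a batch of images by the
# mean activation of one convolutional channel and keep the top scorers.
# The backbone, the chosen layer, and the random "images" are stand-ins.
model = models.resnet18(weights=None).eval()
activations = {}

def hook(_module, _inp, out):
    activations["feat"] = out

model.layer4.register_forward_hook(hook)

images = torch.randn(16, 3, 224, 224)   # placeholder image batch
with torch.no_grad():
    model(images)

channel = 5                             # hypothetical trait-linked unit
scores = activations["feat"][:, channel].mean(dim=(1, 2))
top = scores.topk(3).indices
print("indices of maximally activating images:", top.tolist())
```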

Keywords: emotions and effects of mood, social impact theory in social psychology, social influence, social structure and social networks

Procedia PDF Downloads 195
8 Immunoliposome-Mediated Drug Delivery to Plasmodium-Infected and Non-Infected Red Blood Cells as a Dual Therapeutic/Prophylactic Antimalarial Strategy

Authors: Ernest Moles, Patricia Urbán, María Belén Jiménez-Díaz, Sara Viera-Morilla, Iñigo Angulo-Barturen, Maria Antònia Busquets, Xavier Fernàndez-Busquets

Abstract:

Bearing in mind the absence of an effective vaccine against malaria and its severe clinical manifestations, causing nearly half a million deaths every year, this disease nowadays represents a major threat to life. Besides, the basic rationale followed by currently marketed antimalarial approaches is based on the administration of drugs on their own, promoting the emergence of drug-resistant parasites owing to the difficulty of delivering drug payloads into the parasitized erythrocyte at levels high enough to kill the intracellular pathogen while minimizing the risk of causing toxic side effects to the patient. This dichotomy has been successfully addressed through the specific delivery of immunoliposome (iLP)-encapsulated antimalarials to Plasmodium falciparum-infected red blood cells (pRBCs). Unfortunately, this strategy has not progressed towards clinical applications, and in vitro assays rarely reach drug efficacy improvements above 10-fold. Here, we show that encapsulation efficiencies reaching >96% can be achieved for the weakly basic drugs chloroquine (CQ) and primaquine using the pH-gradient active loading method in liposomes composed of neutrally charged, saturated phospholipids. Targeting antibodies are best conjugated through their primary amino groups, adjusting the chemical crosslinker concentration to retain significant antigen recognition. Antigens from non-parasitized RBCs have also been considered as targets for the intracellular delivery of drugs not affecting erythrocytic metabolism. Using this strategy, we have obtained unprecedented nanocarrier targeting to early intraerythrocytic stages of the malaria parasite, for which there is a lack of specific extracellular molecular tags. Polyethylene glycol-coated liposomes conjugated with monoclonal antibodies specific for the erythrocyte surface protein glycophorin A (anti-GPA iLPs) were capable of targeting 100% of RBCs and pRBCs at the low concentration of 0.5 μM total lipid in the culture, with >95% of added iLPs retained in the cells. When exposed for only 15 min to P. falciparum in vitro cultures synchronized at early stages, free CQ had no significant effect on parasite viability up to 200 nM drug, whereas iLP-encapsulated 50 nM CQ completely arrested its growth. Furthermore, when assayed in vivo in P. falciparum-infected humanized mice, anti-GPA iLPs cleared the pathogen below detectable levels at a CQ dose of 0.5 mg/kg. In comparison, free CQ administered at 1.75 mg/kg was, at most, 40-fold less efficient. Our data suggest that this significant improvement in antimalarial drug efficacy is in part due to a prophylactic effect of CQ encountered by the pathogen in its host cell right at the moment of invasion.

Keywords: immunoliposomal nanoparticles, malaria, prophylactic-therapeutic polyvalent activity, targeted drug delivery

Procedia PDF Downloads 374
7 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into users' preferences. Instead of presenting plain information, classifying different aspects of browsing like Bookmarks, History, and Download Manager into useful categories would improve and enhance the user's experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources; they have security constraints and may miss contextual data during classification. On-device classification solves many such problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reduced dependency on cloud connectivity, and better privacy/security. This approach provides more relevant results compared to current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, this solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual, and secure data that can't be replicated. The proposal extracts different features of the webpage and runs an algorithm to classify it into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms like Support Vector Machines, Neural Networks, etc. Naive Bayes classification requires a small memory footprint and little computation, suitable for the smartphone environment. The solution can also partition the model into multiple chunks, which in turn reduces memory usage compared to loading a complete model. Classification of webpages done through the integrated engine is faster, more relevant, and more energy efficient than other standalone on-device solutions. The classification engine has been tested on Samsung Z3 Tizen hardware; the engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended for suggesting dynamic tags and for using the classification in different use cases to enhance the browsing experience.
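A scaled-down sketch of the classification scheme (scikit-learn assumed): a Multinomial Naive Bayes classifier over bag-of-words features with a 70/30 train/test split, on a tiny invented stand-in for the cleaned dmoztools.net dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny stand-in for the cleaned dmoztools.net dataset; real training
# used ~227.5K pages across these 8 categories.
pages = [
    ("online physics courses and lectures", "education"),
    ("strategy game walkthrough and cheats", "games"),
    ("symptoms and treatment of diabetes", "health"),
    ("celebrity movie reviews and trailers", "entertainment"),
    ("breaking headlines from around the world", "news"),
    ("discount laptops and free shipping deals", "shopping"),
    ("football league fixtures and results", "sports"),
    ("cheap flights and hotel booking tips", "travel"),
] * 10  # repeat so a 70/30 split has every class on both sides

texts, labels = zip(*pages)
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=0, stratify=labels)

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```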

Keywords: Chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 163
6 Political Communication in Twitter Interactions between Government, News Media and Citizens in Mexico

Authors: Jorge Cortés, Alejandra Martínez, Carlos Pérez, Anaid Simón

Abstract:

The presence of government, news media, and the general citizenry in social media allows considering interactions between them as a form of political communication (i.e., the public exchange of contradictory discourses about politics). Twitter's asymmetrical following model (users can follow, mention or reply to other users that do not follow them) could foster alternative democratic practices and have an impact on Mexican political culture, which has been marked by a lack of direct communication channels between these actors. The research aim is to assess Twitter's role in political communication practices through the analysis of interaction dynamics between government, news media, and citizens, by extracting and visualizing data from Twitter's API to observe general behavior patterns. The hypothesis is that, regardless of the fact that Twitter's features enable direct and horizontal interactions between actors, users repeat traditional dynamics of interaction, without taking full advantage of the possibilities of this medium. Through an interdisciplinary team covering Communication Strategies, Information Design, and Interaction Systems, the activity on Twitter generated by the controversy over the presence of Uber in Mexico City was analysed: an issue of public interest involving aspects such as public opinion, economic interests, and a legal dimension. This research includes techniques from social network analysis (SNA), a methodological approach focused on the comprehension of relationships between actors through the visual representation and measurement of network characteristics. The analysis of the Uber event comprised data extraction, data categorization, corpus construction, corpus visualization, and analysis. In the extraction stage, TAGS, a Google Sheets template, was used to extract tweets that included the hashtags #UberSeQueda and #UberSeVa, posts containing the string Uber, and tweets directed to @uber_mx. Using scripts written in Python, the data was filtered, discarding tweets with no interaction (replies, retweets or mentions) and locations outside of México. Considerations regarding bots and the omission of anecdotal posts were also taken into account. The utility of graphs for observing interactions of political communication in general was confirmed by the analysis of visualizations generated with programs such as Gephi and NodeXL. However, some aspects require improvements to obtain more useful visual representations for this type of research. For example, link crossings complicate following the direction of an interaction, forcing users to manipulate the graph to see it clearly. It was concluded that some practices prevalent in political communication in Mexico are replicated on Twitter. Media actors tend to group together instead of interacting with others. The political system tends to tweet as an advertising strategy rather than to generate dialogue. However, some actors were identified as bridges establishing communication between the three spheres, generating a more democratic exercise and taking advantage of Twitter's possibilities. Although interactions on Twitter could become an alternative form of political communication, this potential depends on the intentions of the participants and on the extent to which they aim for collaborative and direct communication. Further research is needed to get a deeper understanding of the political behavior of Twitter users and of the possibilities of SNA for its analysis.
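The graph-building step can be sketched with networkx: one directed edge per reply, mention, or retweet, with in-degree as a first crude measure of who receives interactions. Handles and interactions below are invented:

```python
import networkx as nx

# Sketch of the graph-building step: one directed edge per interaction
# (reply, mention, or retweet). Handles and interactions are invented.
interactions = [
    ("citizen_a", "uber_mx", "mention"),
    ("news_mx", "gov_cdmx", "mention"),
    ("gov_cdmx", "gov_cdmx_2", "retweet"),
    ("citizen_b", "news_mx", "reply"),
    ("citizen_a", "news_mx", "reply"),
]

G = nx.DiGraph()
for src, dst, kind in interactions:
    G.add_edge(src, dst, kind=kind)

# In-degree as a first, crude measure of who receives interactions.
print(sorted(G.in_degree(), key=lambda x: -x[1]))
```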

Keywords: interaction, political communication, social network analysis, Twitter

Procedia PDF Downloads 221
5 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence

Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti

Abstract:

In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system of all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is also becoming relevant for local territories. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out the multiple functions to which it is called, the DMO can leverage a collective intelligence that comes from the ability to pool information, explicit and tacit knowledge, and the relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has at its disposal large volumes of data, many of them available at low cost, that need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy. The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadata annotation of relevant sources (reconnaissance of official sources, administrative archives, and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to their consistency with information needs, the freshness of information, and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities, and online newspapers); (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The resulting framework consists of 6 thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that gather a specific information need, represented by a scheme of questions to be answered through the analysis of available indicators. The framework is characterized by a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources. Application to the case study led to the creation of a decision support system that allows: (i) integration of data from heterogeneous sources, including through the execution of automated web crawling procedures for the ingestion of social and web information; (ii) reading and interpretation of data and metadata through guided navigation paths in the key of digital storytelling; (iii) implementation of complex analysis capabilities through the use of data mining algorithms, such as for the prediction of tourist flows.
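The skeleton of such a framework can be sketched as a nested mapping from thematic areas to domains to indicators; the domain and indicator names below are illustrative placeholders, not the framework's actual definitions:

```python
# Sketch of the framework's skeleton: 6 thematic areas, each split into
# domains that map an information need to indicators. Domain and
# indicator names below are illustrative placeholders.
framework = {
    "accommodation supply": {
        "capacity": ["beds_per_km2", "share_of_non_hotel_supply"],
    },
    "cultural heritage": {
        "attractiveness": ["visits_per_site", "wikipedia_page_views"],
    },
    "flows": {
        "seasonality": ["monthly_arrivals", "avg_length_of_stay"],
    },
    "value": {"spending": ["avg_daily_expenditure"]},
    "sustainability": {"pressure": ["tourists_per_resident"]},
    "enabling factors": {"accessibility": ["transport_links"]},
}

for area, domains in framework.items():
    for domain, indicators in domains.items():
        print(f"{area} / {domain}: {', '.join(indicators)}")
```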

Keywords: collective intelligence, data framework, destination management, smart tourism

Procedia PDF Downloads 121
4 Use of Pheromones, Active Surveillance and Treated Cattle to Prevent the Establishment of the Tropical Bont Tick in Puerto Rico and the Americas

Authors: Robert Miller, Fred Soltero, Sandra Allan, Denise Bonilla

Abstract:

The Tropical Bont Tick (TBT), Amblyomma variegatum, was introduced to the Caribbean in the mid-1700s. Since then, it has spread throughout the Caribbean, dispersed by cattle egrets (Bubulcus ibis). Tropical Bont Ticks vector many pathogens of livestock and humans. However, only the livestock diseases heartwater, caused by Ehrlichia (Cowdria) ruminantium, and dermatophilosis, caused by Dermatophilus congolensis, are associated with TBT in the Caribbean. African tick bite fever (Rickettsia africae) is widespread in Caribbean TBT, but human cases are rare. The Caribbean Amblyomma Programme (CAP) was an effort led by the Food and Agriculture Organization to eradicate TBTs from participating islands. This 10-year effort successfully eradicated TBT from many islands; however, most have been reinfested since its termination. Pheromone technology has been developed to aid in TBT control. Although not part of the CAP treatment scheme, this research established that pheromones in combination with pesticide greatly improve treatment efficiency. Additionally, pheromone combined with CO₂ traps greatly improves the success of active surveillance. St. Croix has a history of TBT outbreaks. Passive surveillance detected outbreaks in 2016 and in May of 2021, and surveillance efforts are underway to determine the extent of TBT on St. Croix. Puerto Rico is the next island in the archipelago and is at greater risk of re-infestation due to the active outbreaks on St. Croix. Tropical Bont Ticks were last detected in Puerto Rico in the 1980s. That infestation started on the small Puerto Rican island of Vieques, the closest landmass to St. Croix, and spread to the main island through cattle movements. It was eradicated with the help of the Tropical Cattle Tick (TCT), Rhipicephalus (Boophilus) microplus, eradication program. At the time, a large percentage of Puerto Rican cattle were treated for ticks, and the necessary material and manpower were already mobilized for that effort; a shift of focus from the TCT to the TBT therefore prevented its establishment in Puerto Rico. Currently, no large-scale treatment of TCTs occurs in Puerto Rico, so the risk of TBT establishment is now greater than it was in the 1980s. From Puerto Rico, the risk of TBT movement to the American continent increases significantly. The establishment of TBTs in the Americas would cause $1.2 billion USD in losses to the livestock industry per year. The USDA Agricultural Research Service recently worked with the USDA Animal and Plant Health Inspection Service and the Puerto Rican Department of Agriculture to modernize the management of the TCT. This modernized program uses safer pesticides and has successfully been used to eradicate pesticide-susceptible and -resistant ticks throughout the island. The objective of this work is to prevent the infestation of Puerto Rico by TBTs by combining the current TCT management efforts with TBT surveillance on Vieques. The combined effort is designed to eradicate the TCT from Vieques while using the treated cattle as trap animals for TBT by means of pheromone-impregnated tail tags attached to the treated animals. Additionally, active surveillance using CO₂-baited traps combined with pheromone will be used to survey the environment for free-living TBT. Knowledge gained will inform TBT control efforts in St. Croix.

Keywords: Amblyomma variegatum, Caribbean, eradication, Rhipicephalus (Boophilus) microplus, pheromone

Procedia PDF Downloads 173
3 A Bibliometric Analysis of Ukrainian Research Articles on SARS-CoV-2 (COVID-19) in Compliance with the Standards of Current Research Information Systems

Authors: Sabina Auhunas

Abstract:

Open Science is developing dramatically in Ukraine these days for the benefit of scientists of all branches, providing an opportunity to take a closer look at studies by foreign scientists, as well as to deliver their own scientific data to national and international journals. However, when it comes to the generalization of data on the science activities of Ukrainian scientists, these data are often integrated into e-systems that operate on inconsistent and barely related information sources. In order to resolve these issues, developed countries productively use e-systems designed to store and manage research data, such as Current Research Information Systems, which enable combining uncompiled data obtained from different sources. An algorithm for selecting SARS-CoV-2 research articles was designed, by means of which we collected the set of papers published by Ukrainian scientists and uploaded by August 1, 2020. The resulting metadata (document type, open access status, citation count, h-index, most cited documents, international research funding, author counts, the bibliographic relationship of journals) were taken from the Scopus and Web of Science databases. The study also considered information from COVID-19/SARS-CoV-2-related documents published from December 2019 to September 2020, directly from documents published by authors with a territorial affiliation to Ukraine. These databases make it possible to get the information necessary for bibliometric analysis, including details such as copyright, which may not be available in other databases (e.g., ScienceDirect). Search criteria and results for each online database were considered according to the WHO classification of the virus and the disease caused by this virus and are represented (Table 1). First, we identified 89 research papers, which provided us with the final data set after consolidation and removal of duplicates; however, only 56 papers were used for the analysis. The total number of documents from the WoS database came to 21,641 (48 affiliated to Ukraine among them); in the Scopus database, it came to 32,478 (41 affiliated to Ukraine among them). According to the publication activity of Ukrainian scientists, the following areas prevailed: education and educational research (9 documents, 20.58%); social sciences, interdisciplinary (6 documents, 11.76%); and economics (4 documents, 8.82%). The highest publication activity by institution type was reported for the Ministry of Education and Science of Ukraine (36% of the published scientific papers, or 7 documents), followed by Danylo Halytsky Lviv National Medical University (5 documents, 15%) and the P. L. Shupyk National Medical Academy of Postgraduate Education (4 documents, 12%). Research activities by Ukrainian scientists were funded by 5 entities: the Belgian Development Cooperation, the National Institutes of Health (NIH, U.S.), the United States Department of Health & Human Services, a grant from the Whitney and Betty MacMillan Center for International and Area Studies at Yale, and a grant from the Yale Women Faculty Forum. Based on the results of the analysis, we obtained a set of published articles and preprints to be assessed on a variety of features in upcoming studies, including citation count, most cited documents, the bibliographic relationship of journals, and reference linking. Further research on the development of the national scientific e-database continues using new analytical methods.
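The consolidation step can be pictured with a small sketch that merges Scopus and WoS exports and drops duplicates by DOI (pandas assumed; the records are invented placeholders):

```python
import pandas as pd

# Sketch of the consolidation step: merge Scopus and WoS exports and
# drop duplicates by DOI. Records here are invented placeholders.
scopus = pd.DataFrame({
    "doi": ["10.1/a", "10.1/b", "10.1/c"],
    "title": ["Paper A", "Paper B", "Paper C"],
    "source": "Scopus",
})
wos = pd.DataFrame({
    "doi": ["10.1/b", "10.1/d"],
    "title": ["Paper B", "Paper D"],
    "source": "WoS",
})

merged = pd.concat([scopus, wos], ignore_index=True)
deduplicated = merged.drop_duplicates(subset="doi", keep="first")
print(f"{len(merged)} records merged, {len(deduplicated)} after deduplication")
```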

Keywords: content analysis, COVID-19, scientometrics, text mining

Procedia PDF Downloads 113
2 Voices of Dissent: Case Study of a Digital Archive of Testimonies of Political Oppression

Authors: Andrea Scapolo, Zaya Rustamova, Arturo Matute Castro

Abstract:

The “Voices in Dissent” initiative aims at collecting and making available in digital format testimonies, letters, and other narratives produced by victims of political oppression from different geographical spaces across the Atlantic. By recovering silenced voices behind the official narratives, this open-access online database will provide indispensable tools for rewriting the history of authoritarian regimes from the margins as memory debates continue to provoke controversy among academic and popular transnational circles. In providing an extensive database of non-hegemonic discourses in a variety of political and social contexts, the project will complement existing European and Latin American studies and invite further interdisciplinary and transnational research. This digital resource will be available to academic communities and the general audience and will be organized geographically and chronologically. “Voices in Dissent” will offer a first comprehensive study of these personal accounts of persecution and repression against determined historical backgrounds and of their impact on collective memory formation in contemporary societies. The digitalization of these texts will make it possible to run metadata analyses and adopt comparatist approaches for a broad range of research endeavors. Most of the testimonies included in our archive are testimonies of trauma: the trauma of exile, imprisonment, torture, humiliation, censorship. Research on trauma has now reached critical mass and offers a broad spectrum of critical perspectives. By putting together testimonies from different geographical and historical contexts, our project will provide readers and scholars with an extraordinary opportunity to investigate how culture shapes individual and collective memories and provides or denies resources to make sense of and cope with trauma. For scholars dealing with the epistemological and rhetorical analysis of testimonies, an online open-access archive will prove particularly beneficial for testing theories on truth status and the formation of belief, as well as for studying the articulation of discourse. An important aspect of this project is also its pedagogical applications, since it will contribute to the creation of Open Educational Resources (OER) to support students and educators worldwide. Through collaborations with our library system, the archive will form part of the Digital Commons database. The texts collected in this online archive will be made available in the original languages as well as in English translation. They will be accompanied by a critical apparatus that will contextualize them historically by providing relevant background information and bibliographical references. All these materials can serve as a springboard for a broad variety of educational projects and classroom activities. They can also be used to design specific content courses or modules. In conclusion, the desirable outcomes of the “Voices in Dissent” project are: 1. the collection and digitalization of political dissent testimonies; 2. the building of a network of scholars, educators, and learners involved in the design, development, and sustainability of the digital archive; 3. the integration of the content of the archive into both research and teaching endeavors, such as the publication of scholarly articles, the design of new upper-level courses, and the integration of the materials into existing courses.

Keywords: digital archive, dissent, open educational resources, testimonies, transatlantic studies

Procedia PDF Downloads 105
1 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe

Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira

Abstract:

Supporting cross-border, secure, and reliable exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors like eFinance, eJustice, and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility Programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 Access Points and SMPs in Europe to provide secure document exchange using the eDelivery DSI (Digital Service Infrastructure) among both private and public entities. Moreover, the project demonstrates the feasibility and interest of the solution by providing several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we have followed a methodology that first sets a common background for the requirements in the partner countries and the European regulations. Then, the partners have implemented access points in each country, including their service metadata publisher (SMP), to allow their clients access to the pan-European network. Finally, we have set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we have chosen AS4 instead of other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, like username-password, X.509 authentication, and SAML authentication. The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, the integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support those profiles; thus, further efforts will be needed to promote this technology to companies. The consortium includes the following 9 partners. Of them, 2 are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other 7 (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in the secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border operativity, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border document eDelivery in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged through Europe. It will be the base for extending the network to the whole of Europe. This project has been funded under the Connecting Europe Facility Agreement number INEA/CEF/ICT/A2016/1278042, Action No: 2016-EU-IA-0063.
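As one concrete piece of the puzzle, the sketch below computes the DNS name used to locate a participant's SMP, assuming the classic PEPPOL SML naming convention (B- followed by the MD5 of the lowercased participant identifier); the identifier and SML zone are examples, not MADE project endpoints:

```python
import hashlib

# Sketch of the classic PEPPOL SML naming convention used to locate a
# participant's SMP: B-<md5 of lowercased participant ID>.<scheme>.<SML zone>.
# Identifier and zone below are examples, not MADE project endpoints.
def smp_hostname(participant_id: str,
                 scheme: str = "iso6523-actorid-upis",
                 sml_zone: str = "edelivery.tech.ec.europa.eu") -> str:
    digest = hashlib.md5(participant_id.lower().encode("utf-8")).hexdigest()
    return f"B-{digest}.{scheme}.{sml_zone}"

print(smp_hostname("9915:test"))
```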

Keywords: security, e-delivery, e-invoicing, e-document exchange, trust

Procedia PDF Downloads 265