Search results for: artefact
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21

21 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications, such as pre-operative planning simulation, human gait analysis, and hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is soft tissue artefact, caused by relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, as the accuracy of functional methods depends upon it. Various studies have measured soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger during abduction. The quantification of soft tissue artefacts can be used as a basis for a correction procedure for hip joint kinematics.
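The functional method mentioned above can be illustrated with a simple least-squares sphere fit: if thigh markers rotate about the hip joint centre, their positions expressed in the pelvis frame lie approximately on a sphere centred at that point. The sketch below is an illustrative fit under that assumption, not the authors' implementation; the marker data and frame conventions are hypothetical.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to an (n, 3) array of marker positions.

    Uses the linearisation ||p||^2 = 2 p.c + (r^2 - ||c||^2), so the
    centre c and radius r follow from a single linear solve.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = np.sqrt(x[3] + centre @ centre)
    return centre, radius
```

In a functional calibration, `points` would be one thigh marker's trajectory in the pelvis coordinate frame; the fitted centre approximates the hip joint centre, and soft tissue artefact appears as residual deviation from the sphere.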

Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement

Procedia PDF Downloads 252
20 A Programming Assessment Software Artefact Enhanced with the Help of Learners

Authors: Romeo A. Botes, Imelda Smit

Abstract:

The demands of an ever-changing and complex higher education environment, along with the profile of modern learners, challenge current approaches to assessment and feedback. More learners enter the education system every year, and the younger generation expects immediate feedback; at the same time, feedback should be meaningful. The assessment of practical activities in programming poses a particular problem, since both lecturers and learners in the information and computer science discipline acknowledge that paper-based assessment for programming subjects lacks meaningful real-life testing, while feedback lacks promptness, consistency, comprehensiveness and individualisation. Most of these aspects may be addressed by modern, technology-assisted assessment. The focus of this paper is the continuous development of an artefact that is used to assist the lecturer in the assessment of, and feedback on, practical programming activities in a senior database programming class. The artefact was developed over three Design Science Research cycles. The first implementation allowed one programming activity submission per assessment intervention. This pilot provided valuable insight into the obstacles to implementing this type of assessment tool. A second implementation improved the initial version to allow multiple programming activity submissions per assessment. The focus of this version is on providing scaffold feedback to the learner, allowing improvement with each subsequent submission. It also has a built-in capability to provide the lecturer with information regarding the key problem areas of each assessment intervention.
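As an illustration of the kind of automated, scaffolded feedback the abstract describes, the sketch below runs a submitted function against a set of test cases and reports per-case outcomes. It is a hypothetical minimal example, not the artefact developed in the paper.

```python
def assess(submission, tests):
    """Run a submitted function against (args, expected) test cases and
    return a score plus per-case scaffold feedback."""
    feedback = []
    for args, expected in tests:
        try:
            got = submission(*args)
            status = "pass" if got == expected else "fail"
        except Exception as exc:          # a crashing submission still gets feedback
            got, status = repr(exc), "error"
        feedback.append({"args": args, "expected": expected,
                         "got": got, "status": status})
    score = sum(f["status"] == "pass" for f in feedback) / len(tests)
    return score, feedback
```

With multiple submissions allowed, the per-case feedback lets a learner see exactly which cases fail and resubmit, which is the scaffolding idea the abstract outlines.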

Keywords: programming, computer-aided assessment, technology-assisted assessment, programming assessment software, design science research, mixed-method

Procedia PDF Downloads 273
19 Metamodel for Artefacts in Service Engineering Analysis and Design

Authors: Purnomo Yustianto, Robin Doss

Abstract:

As a process for developing a service system, the term 'service engineering' evolves in scope and definition. To achieve an integrated understanding of the process, a general framework and an ontology are required. This paper extends a previously built service engineering framework by exploring metamodels for the framework artefacts based on a foundational ontology and a metamodel landscape. The first part of this paper presents a correlation map between the proposed framework and the ontology, as a form of evaluation of the conceptual coverage of the framework. The mapping also serves to characterize the artefacts to be produced for each activity in the framework. The second part describes potential metamodels, drawn from the metamodel landscape, to be used as alternative formats of the framework artefacts. The results suggest that the framework sufficiently covers the ontological concepts, both in the general service context and in the software service context. The metamodel exploration enriches the suggested artefact formats from the original eighteen to thirty metamodel alternatives.

Keywords: artefact, framework, service, metamodel

Procedia PDF Downloads 168
18 Exploring the ‘Many Worlds’ Interpretation in Both a Philosophical and Creative Literary Framework

Authors: Jane Larkin

Abstract:

Combining elements of philosophy, science, and creative writing, this investigation explores how a philosophically structured science-fiction novel can challenge the theory of the linearity and singularity of time through the 'many worlds' theory. This concept is addressed through the creation of a research exegesis and an accompanying creative artefact, designed to be read in conjunction with each other in an explorative, interwoven manner. Research undertaken into scientific concepts, such as the 'many worlds' interpretation of quantum mechanics, and into diverse philosophers and their ideologies of time, is embodied in an original science-fiction narrative titled It Goes On. The five frames that make up the creative artefact are enhanced not only by five leading philosophers and their philosophies of time but also by an appreciation of the research, which comes first in the paper. Research into traditional approaches to storytelling is creatively and innovatively inverted in several ways, thus challenging the singularity and linearity of time. Further nonconventional approaches to literary technique include an abstract narrator, embodied by time, a concept and a figure in the text whose voice and vantage point in relation to death further the unreliability of the notion of time. These approaches challenge individuals' understanding of complex scientific and philosophical views in a variety of ways. The science-fiction genre is essential when considering the speculative nature of It Goes On, which deals with parallel realities and is a fantastical exploration of human ingenuity in plausible futures. This paper therefore documents the research-led methodology used to create It Goes On, the application of the 'many worlds' theory within a framed narrative, and the many innovative techniques used to contribute new knowledge in a variety of fields.

Keywords: time, many-worlds theory, Heideggerian philosophy, framed narrative

Procedia PDF Downloads 47
17 An Unusual Cause of Electrocardiographic Artefact: Patient's Warming Blanket

Authors: Sanjay Dhiraaj, Puneet Goyal, Aditya Kapoor, Gaurav Misra

Abstract:

In electrocardiography, an artefact is any recorded signal that is not generated by the heart. Although technological advancements have produced monitors capable of providing accurate information and reliable heart rate alarms, interference in the displayed electrocardiogram still occurs. These interferences can come from the various electrical gadgets present in the operating room or from electrical signals from other parts of the body. Artefacts may also occur due to poor electrode contact with the body or machine malfunction. Recognising these artefacts is of utmost importance, so as to avoid unnecessary and unwarranted diagnostic and interventional procedures. We report a case of ECG artefacts caused by a patient warming blanket, and its consequences. A 20-year-old male with a preoperative diagnosis of exstrophy-epispadias complex was scheduled for surgery under epidural and general anaesthesia. Just after endotracheal intubation, we observed nonspecific ECG changes on the monitor. At first glance, the monitor strip revealed broad QRS complexes suggesting a ventricular bigeminal rhythm. Closer analysis revealed these to be artefacts: although the complexes appeared broad at first glance, normal sinus complexes were clearly present, each immediately followed by a 'broad complex' or artefact produced by some device or connection. These broad complexes were labelled artefacts because they originated in the absolute refractory period of the preceding normal sinus beat; it would be physiologically impossible for the myocardium to depolarize so rapidly as to produce a second QRS complex.
A search for the possible cause of the artefacts was made. After deepening the plane of anaesthesia, ruling out electrolyte abnormalities, checking the ECG leads and their connections, changing monitors, checking all other monitoring connections, and checking the grounding of the anaesthesia machine and operating table, we found that switching off the patient's warming apparatus returned the rhythm to normal sinus and the 'broad complexes' disappeared. As misdiagnosis of ECG artefacts may subject patients to unnecessary diagnostic and therapeutic interventions, a thorough knowledge of the patient and the monitors allows quick interpretation and resolution of the problem.
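The refractory-period reasoning used above to label the broad complexes as artefacts can be expressed programmatically. The sketch below is an illustrative check (not from the paper) that flags any detected 'beat' falling within an assumed absolute refractory window of the previously accepted beat.

```python
ABSOLUTE_REFRACTORY_MS = 200  # assumed window, for illustration only

def flag_refractory_artefacts(peak_times_ms, refractory_ms=ABSOLUTE_REFRACTORY_MS):
    """Split detected peaks into accepted beats and physiologically
    impossible 'beats' falling inside the refractory period of the
    previously accepted beat."""
    accepted, artefacts = [], []
    for t in peak_times_ms:
        if accepted and t - accepted[-1] < refractory_ms:
            artefacts.append(t)       # too soon after a real beat: artefact
        else:
            accepted.append(t)
    return accepted, artefacts
```

A bigeminy-like trace of paired peaks would have every second peak rejected, mirroring the reasoning in the case report.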

Keywords: ECG artefacts, patient warming blanket, peri-operative arrhythmias, mobile messaging services

Procedia PDF Downloads 240
16 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are inherently prone to artefacts due to the image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. It is therefore desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error is measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout is determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation.
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
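As an illustration of the Total Variation decomposition step described above, the sketch below denoises a 1-D signal by gradient descent on a smoothed TV energy. It is a toy stand-in: gradient descent rather than the primal-dual method named in the abstract, in one dimension rather than on images, and with illustrative parameter values.

```python
import numpy as np

def tv_denoise_1d(f, lam=0.2, tau=0.1, eps=1e-2, n_iter=1000):
    """Gradient descent on a smoothed 1-D Total Variation energy:
    0.5 * ||u - f||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps)."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        du = np.diff(u)
        g = du / np.sqrt(du ** 2 + eps)   # derivative of the smoothed TV term
        grad = u - f
        grad[:-1] -= lam * g
        grad[1:] += lam * g
        u -= tau * grad
    return u
```

The TV term suppresses oscillatory noise while preserving jumps (edges), which is why the decomposition separates intrinsic structure from noise-like nuisance factors.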

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 165
15 Video Heart Rate Measurement for the Detection of Trauma-Related Stress States

Authors: Jarek Krajewski, David Daxberger, Luzi Beyer

Abstract:

Finding objective and non-intrusive measurements of emotional and psychopathological states (e.g., post-traumatic stress disorder, PTSD) is an important challenge. The approach proposed here uses photoplethysmographic imaging (PPGI), applied to facial RGB camera videos, to estimate heart rate levels. A pipeline for processing the raw image signal is proposed, containing different preprocessing approaches, e.g., Independent Component Analysis, Non-negative Matrix Factorization, and various other artefact correction approaches. Under resting and constant light conditions, we reached a sensitivity of 84% for pulse peak detection. The results indicate that PPGI can be a suitable solution for providing indirectly measured heart rate data for the detection of post-traumatic stress states.
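A minimal sketch of the final stage of such a PPGI pipeline, assuming the preprocessing has already produced a 1-D facial colour signal: estimate the heart rate as the dominant spectral peak within a plausible pulse band. The band limits and windowing are illustrative choices, not taken from the paper.

```python
import numpy as np

def estimate_heart_rate(signal, fs, band=(0.7, 3.0)):
    """Return the dominant frequency of `signal` (sampled at `fs` Hz)
    inside `band` (Hz), expressed in beats per minute."""
    x = signal - np.mean(signal)
    x = x * np.hanning(len(x))            # taper to reduce spectral leakage
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(power[mask])]
```

Restricting the search to 0.7-3.0 Hz (42-180 bpm) discards motion and lighting components outside the physiological pulse range.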

Keywords: heart rate, PTSD, PPGI, stress, preprocessing

Procedia PDF Downloads 99
14 Underwater Remotely Operated Vehicle (ROV) Exploration

Authors: M. S. Sukumar

Abstract:

Our objective is to develop a full-fledged system for exploring and studying the nature of fossils and to extend this to underwater archaeology and mineral mapping. This includes aerial surveying, imaging techniques, artefact extraction and spectrum analysing techniques, which help in the regular monitoring of fossils and support the sensing system. The ROV was designed to complete several tasks simulating the collection of data and samples. Given the time constraints, the ROV was engineered for efficiency and speed in performing tasks. Its other major design consideration was modularity, allowing the team to distribute the building process, easily test systems as they were completed, and troubleshoot and replace systems as necessary. Our design itself had several challenges: on-board waterproofed sensor mounting, waterproofing of motors, ROV stability criteria, camera mounting and hydrophone sound acquisition.

Keywords: remotely operated vehicle (ROV) dragonair, underwater archaeology, full-fledged system, aerial imaging and detection

Procedia PDF Downloads 208
13 New Method to Increase Contrast of Electromicrograph of Rat Tissue Sections

Authors: Lise Paule Labéjof, Raíza Sales Pereira Bizerra, Galileu Barbosa Costa, Thaísa Barros dos Santos

Abstract:

Since the beginning of microscopy, improving image quality has always been a concern of its users. Especially for transmission electron microscopy (TEM), the problem is even more important due to the complexity of the sample preparation technique and the many variables that can affect the conservation of structures, the proper operation of the equipment used, and hence the quality of the images obtained. Animal tissues being transparent, it is necessary to apply a contrast agent in order to identify the elements of their ultrastructural morphology. Several methods of contrasting tissues for TEM imaging have already been developed. The most used are 'in block' contrastation and 'in situ' contrastation. This report presents an alternative technique of applying the contrast agent in vivo, i.e. before sampling. With this new method, the electron micrographs of the tissue sections have better contrast compared with those obtained in situ and show no artefact of contrast-agent precipitation. Another advantage is that only a small amount of contrast agent is needed to obtain a good result, given that most contrast agents are expensive and extremely toxic.

Keywords: image quality, microscopy research, staining technique, ultra thin section

Procedia PDF Downloads 401
12 Analysis of User Interface Design in Mobile Teaching Apps

Authors: Asma Ashoul

Abstract:

Nowadays, smartphones play a major role in our lives, whether for communicating with family and friends or for learning different things. Using smartphones to learn and teach is now common in places like schools and colleges. Therefore, developing an app that teaches the Arabic language may help some groups in society to learn a second language; for example, children aged five or older may learn quickly using smartphones. The problem addressed concerns the Arabic language, which is at risk of falling out of everyday use. The developer set out to build an app that would help the younger generation learn the Arabic language. Research into user interface design was completed to help the developer choose appropriate layouts and designs. Developing the artefact involved different stages. First, the requirements for what was to be developed were analysed with the client. Secondly, the user interface was designed based on the literature review. Thirdly, the application was developed and tested, documenting all the tools that were used. Lastly, evaluation and future recommendations gave an overall view of the application, followed by the client's feedback. Requirements were gathered through client meetings focused on the interface design. The project followed an agile development methodology, which helped the developer finish the work on time.

Keywords: developer, application, interface design, layout, Agile, client

Procedia PDF Downloads 86
11 Didactical and Semiotic Affordance of GeoGebra in a Productive Mathematical Discourse

Authors: Isaac Benning

Abstract:

Using technology to expand the learning space is critical for a productive mathematical discourse. This is a case study of two teachers who developed and enacted GeoGebra-based mathematics lessons following their engagement in a two-year professional development programme. The didactical and semiotic affordance of GeoGebra in widening the learning space for a productive mathematical discourse was explored. Thematic analysis was applied to lesson artefacts, lesson observations, and interview data. The results indicated that constructing tools in GeoGebra provided a didactical milieu in which students explored mathematical concepts with little or no support from their teacher. The prompt feedback from GeoGebra motivated students to practise mathematical concepts repeatedly, privately rethinking their solutions before comparing their answers with those of their colleagues. The constructed tools enhanced self-discovery, team spirit, and dialogue among students. With regard to the semiotic construct, the tools widened the physical and psychological atmosphere of the classroom by providing animations that served as virtual concrete materials, enhancing the recording, manipulation and testing of mathematical ideas, and the construction and interpretation of geometric objects. These findings advance the discussion on widening the classroom for a productive mathematical discourse within the context of the mathematics curriculum of Ghana and similar Sub-Saharan African countries.

Keywords: GeoGebra, theory of didactical situation, semiotic mediation, mathematics laboratory, mathematical discussion

Procedia PDF Downloads 89
10 A Domain Specific Modeling Language Semantic Model for Artefact Orientation

Authors: Bunakiye R. Japheth, Ogude U. Cyril

Abstract:

Since the process of transforming user requirements into modeling constructs is not well supported by domain-specific frameworks, it became necessary to integrate domain requirements with specific architectures to achieve an integrated, customizable solution space via artifact orientation. Domain-specific modeling language specifications in model-driven engineering focus on requirements within a particular domain and can be tailored to aid the domain expert in expressing domain concepts effectively. Modeling processes based on domain-specific language formalisms are highly volatile due to dependencies on domain concepts or the process models used. A capable solution is given by artifact orientation, which stresses the results rather than imposing a strict dependence on complicated platforms for model creation and development. On this premise, domain-specific methods for producing artifacts, without having to take into account the complexity and variability of platforms for model definitions, can be integrated to support customizable development. In this paper, we discuss methods for integrating these capabilities and necessities within a common structure and semantics, contributing a metamodel for artifact orientation that leads to a reusable software layer with a concrete syntax capable of capturing design intents from the domain expert. The concepts forming the language formalism are established from models drawn from the oil and gas pipelines industry.
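The artifact-oriented structure described above might be sketched, purely illustratively, as a tiny object model in which framework activities produce artefacts carrying an associated concrete syntax. The class and field names below are hypothetical, not taken from the paper's metamodel.

```python
from dataclasses import dataclass, field

@dataclass
class Artefact:
    name: str
    syntax: str          # concrete format, e.g. a metamodel-backed notation
    produced_by: str     # the framework activity that yields this artefact

@dataclass
class Activity:
    name: str
    artefacts: list = field(default_factory=list)

    def produce(self, name, syntax):
        """Record a new artefact as a result of this activity."""
        artefact = Artefact(name, syntax, self.name)
        self.artefacts.append(artefact)
        return artefact
```

The point of the sketch is the orientation: the activity is characterized by the artefacts it yields, rather than by the platform used to create them.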

Keywords: control process, metrics of engineering, structured abstraction, semantic model

Procedia PDF Downloads 111
9 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking

Authors: Jonas Colin

Abstract:

Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigmatic shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and an individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection and adaptation in communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, in which machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.

Keywords: chatbot, GPT-3.5, metacognition, symbiosis

Procedia PDF Downloads 23
8 Eliciting and Confirming Data, Information, Knowledge and Wisdom in a Specialist Health Care Setting - The Wicked Method

Authors: Sinead Impey, Damon Berry, Selma Furtado, Miriam Galvin, Loretto Grogan, Orla Hardiman, Lucy Hederman, Mark Heverin, Vincent Wade, Linda Douris, Declan O'Sullivan, Gaye Stephens

Abstract:

Healthcare is a knowledge-rich environment. This knowledge, while valuable, is not always accessible outside the borders of individual clinics. This research aims to address part of this problem (at a study site) by constructing a maximal data set (knowledge artefact) for motor neurone disease (MND). This data set is proposed as an initial knowledge base for a concurrent project to develop an MND patient data platform. It represents the domain knowledge at the study site for the duration of the research (12 months). A knowledge elicitation method was also developed from the lessons learned during this process: the WICKED method. WICKED is an anagram of the words 'eliciting and confirming data, information, knowledge, wisdom'. It is also a reference to the concept of wicked problems, which are complex and challenging, as is eliciting expert knowledge. The method was evaluated at a second site, and benefits and limitations were noted. Benefits include that the method provided a systematic way to manage data, information, knowledge and wisdom (DIKW) from various sources, including healthcare specialists and existing data sets. Limitations include the time required and the fact that the data set produced only represents the DIKW known during the research period. Future work is underway to address these limitations.

Keywords: healthcare, knowledge acquisition, maximal data sets, action design science

Procedia PDF Downloads 269
7 A Pragmatic Approach of Memes Created in Relation to the COVID-19 Pandemic

Authors: Alexandra-Monica Toma

Abstract:

Internet memes are an element of computer-mediated communication and an important part of online culture that combines text and image in order to generate meaning. The term, coined by Richard Dawkins, refers to more than a mere way to briefly communicate ideas or emotions; it names a complex and intensely perpetuated phenomenon in the virtual environment. This paper approaches memes as a cultural artefact and a virtual trope that mirrors societal concerns and issues, and analyses the pragmatics of their use. Memes have to be analysed in series, usually relating to some image macro, which is proof of the interplay between imitation and creativity in the meme-writing process. We believe that their potential to become viral relates to three key elements: adaptation to context, reference to a successful meme series, and humour (jokes, irony, sarcasm), with various pragmatic functions. The study also uses the concept of multimodality and stresses how a meme's text interacts with its image, discussing three types of relations: symmetry, amplification, and contradiction. Moreover, the paper shows that memes can be employed as speech acts with illocutionary force when the interaction between text and image is enriched through connection to a specific situation. The features mentioned above are analysed in a corpus of memes related to the COVID-19 pandemic. This corpus shows them to be highly adaptable to context, which helps build a feeling of connection and belonging in an otherwise tremendously fragmented world. Some are created from well-known image macros, and their humour results from an intricate dialogue between texts and contexts. Memes created in relation to the COVID-19 pandemic can be considered speech acts and are often used as such, as the paper demonstrates.
Consequently, this paper tackles the key features of memes, offers a thorough analysis of the memes' sociocultural, linguistic, and situational context, and emphasizes their intertextuality, with special emphasis on their illocutionary potential.

Keywords: context, memes, multimodality, speech acts

Procedia PDF Downloads 162
6 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter

Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai

Abstract:

Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retina and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy, suffering from multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuation, motion artefact, and the presence of blood vessels further degrade OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. It involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume, thus segmenting the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curve fitting is applied to them so that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter produces an optimal estimate of the current state of the system by updating its previous state using the available measurements, in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images.
One limitation of the current algorithm is that the curve representation of the retinal layer boundary does not work well when the boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different representation of the boundaries, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
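A minimal sketch of the tracking idea, under the assumptions of an identity state-transition model and direct measurement of the fitted curve coefficients: each slice's coefficient vector is filtered against the prediction carried over from the previous slice. The process and measurement noise levels are illustrative, not the paper's values.

```python
import numpy as np

def kalman_track(measurements, q=1e-4, r=1e-2):
    """Filter a sequence of noisy coefficient vectors (one per OCT slice)
    assuming an identity state-transition model and direct observation."""
    n = measurements.shape[1]
    x = measurements[0].astype(float)              # initialise from the first slice
    P = np.eye(n)
    estimates = [x.copy()]
    for z in measurements[1:]:
        P = P + q * np.eye(n)                      # predict: boundary drifts slowly
        K = P @ np.linalg.inv(P + r * np.eye(n))   # Kalman gain
        x = x + K @ (z - x)                        # update with the new slice's fit
        P = (np.eye(n) - K) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

Because each slice's raw curve fit is noisy, the filtered coefficients vary more smoothly from slice to slice than the raw measurements, which is the benefit the abstract describes.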

Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking

Procedia PDF Downloads 453
5 Interior Architecture in the Anthropocene: Engaging the Subnature through the Intensification of Body-Surface Interaction

Authors: Verarisa Ujung

Abstract:

The Anthropocene, which scientists define as a new geological epoch in which human intervention has the dominant influence on geological, atmospheric, and ecological processes, challenges the contemporary discourse in architecture and interiors. This dominant influence makes it increasingly difficult to distinguish the notions of nature, subnature, human and non-human. Consequently, living in the Anthropocene demands sensitivity and responsiveness, heightening our sense of the rhythm of transformation and our recognition of the environment as a product of natural, social and historical processes. The notion of subnature is particularly emphasised in this paper to investigate the poetic sense of living with subnature. It can serve as a critical tool for exploring the aesthetic and programmatic implications of subnature on interiority. The ephemeral immaterial attached to subnature promotes an atmospheric delineation of interiority, the very inner significance of body-surface interaction, which is central to interior architecture discourse. This in turn reflects human activities, examining transformative change, architectural motion and the traces left between moments. In this way, engaging the notion of subnature enables us to better understand the critical subject of interiority and may provide an in-depth study of interior architecture. Incorporating an exploration of the form, materiality and pattern of subnature, this research seeks to grasp the inner significance of micro-to-macro approaches, so that the future of the interior might come to depend more on the investigation and development of responsive environments. To reflect upon the form, materiality and intensity of subnature as characterized by natural, social and historical processes, this research examines a volcanic land, White Island/Whakaari, New Zealand, as the chosen site of investigation.
Emitting various forms and intensities of subnatures (smokes, mud, sulphur gas), this volcanic land is also open to new inhabitation within the sulphur factory ruins that reflect past human occupation. In this way, temporal and natural manifestations of materiality, artefact, and performance can be traced out and might reveal the meaningful relations among space, inhabitation, and the well-being of inhabitants in the Anthropocene.

Keywords: anthropocene, body, intensification, intensity, interior architecture, subnature, surface

Procedia PDF Downloads 145
4 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG which may be mistaken for an allele of a potential contributor; it is considered an artefact presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the height of the corresponding parent allele. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts such as stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination; they make far greater use of the continuous peak-height information and so yield more efficient and reliable interpretations. A sound methodology for distinguishing between stutters and real alleles is therefore essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks explicitly. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data-analysis tools in the clustering and classification of data, and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in a finite mixture model, however, is difficult in practice even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, an infinite-dimensional generalisation of the Dirichlet distribution, deals with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are representations of the Dirichlet process frequently used as priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
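As a minimal sketch (not the authors' model), the Chinese restaurant process prior described above can be simulated in a few lines of Python: each new observation joins an existing cluster with probability proportional to that cluster's size, or opens a new cluster with probability proportional to the concentration parameter alpha, so the number of components grows with the data rather than being fixed in advance. The function name and seeding are illustrative choices, not from the paper.

```python
import random

def crp_assignments(n_obs, alpha, seed=0):
    """Sample cluster (table) assignments for n_obs observations
    from a Chinese restaurant process with concentration alpha."""
    rng = random.Random(seed)
    assignments = []   # cluster index for each observation
    counts = []        # number of observations in each cluster
    for i in range(n_obs):
        # observation i joins cluster k with prob counts[k] / (i + alpha),
        # or opens a new cluster with prob alpha / (i + alpha)
        r = rng.random() * (i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                assignments.append(k)
                break
        else:
            counts.append(1)
            assignments.append(len(counts) - 1)
    return assignments

tables = crp_assignments(100, alpha=1.0)
print(len(set(tables)))  # number of clusters, determined by the data and alpha
```

In a full infinite mixture of linear regressions, each cluster drawn this way would carry its own regression parameters for the stutter ratio; larger alpha tends to produce more clusters.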

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 304
3 Animated Poetry-Film: Poetry in Action

Authors: Linette van der Merwe

Abstract:

It is known that visual artists, performing artists, and literary artists have inspired each other since time immemorial. The enduring, symbiotic relationship between the various art genres is evident where words, colours, lines, and sounds act as metaphors, a physical expression of the transcendental reality of art. Simonides of Keos (c. 556-468 BC) confirmed this, stating that a poem is a talking picture, or, in a more modern expression, that a picture is worth a thousand words. It can be seen as an ancient relationship, originating in the epigram (tombstone or artefact inscriptions), the carmen figuratum (figure poem), and the ekphrasis (a poem describing a work of art). Visual artists, including Michelangelo, Leonardo da Vinci, and Goethe, wrote poems and songs. Goya, Degas, and Picasso are famous for their works of art and for trying their hands at poetry. Nor are Afrikaans writers whose fine art is often published together with their writing a strange phenomenon, as in the case of Andries Bezuidenhout, Breyten Breytenbach, Sheila Cussons, Hennie Meyer, Carina Stander, and Johan van Wyk, among others. Rendering one art form in another is a form of translation, transposition, contemplation, and discovery of artistic impressions, showing parallel interpretations rather than physical comparison. It is especially about the harmony that exists between the different art genres: a poem that describes a painting, or a visual text that portrays a poem, becomes a translation, interpretation, and rediscovery of the verbal text, a movement from word text to image text. Poetry-film, as a form of such translation of word text into image text, can be considered a hybrid, transdisciplinary art form that connects poetry and film. Poetry-film is regarded as an intertwined entity of word, sound, and visual image. It is an attempt to transpose and transform a poem into a new artwork, one that makes the poem more accessible to people who are not necessarily open to the written word and that will, in effect, attract a larger audience to a genre that usually has a limited market. Poetry-film is considered a creative expression of an inverted ekphrastic inspiration: a visual description, interpretation, and expression of a poem. Research also emphasises that animated poetry-film is not widely regarded as a genre in its own right and is thus severely under-theorised. This paper focuses on Afrikaans animated poetry-films as a multimodal transposition of a poem text into an animated poetry-film, with specific reference to the animated poetry-films in Filmverse I (2014) and Filmverse II (2016).

Keywords: poetry film, animated poetry film, poetic metaphor, conceptual metaphor, monomodal metaphor, multimodal metaphor, semiotic metaphor, multimodality, metaphor analysis, target domain, source domain

Procedia PDF Downloads 32
2 The Late Bronze Age Archeometallurgy of Copper in Mountainous Colchis (Lechkhumi), Georgia

Authors: Nino Sulava, Brian Gilmour, Nana Rezesidze, Tamar Beridze, Rusudan Chagelishvili

Abstract:

Studies of ancient metallurgy are a subject of worldwide current interest. Georgia, with its famous early metalworking traditions, is one of the central parts of the Caucasus region. The aim of the present study is to introduce the results of archaeometallurgical investigations being undertaken in the mountain region of Colchis, Lechkhumi (Tsageri Municipality, western Georgia), and to establish their place in the existing archaeological context. Lechkhumi (one of the historic provinces of Georgia, known from Georgian, Greek, Byzantine, and Armenian written sources as Lechkhumi/Skvimnia/Takveri) is part of the Colchian mountain area. It is one of the important but little-known centres of prehistoric metallurgy in the Caucasus region and of the Colchian Bronze Age culture. Reconnaissance archaeological expeditions (2011-2015) revealed significant prehistoric metallurgical sites in Lechkhumi. Sites located in the vicinity of Dogurashi village (Tsageri Municipality) became the target area for archaeological excavations. During excavations conducted in 2016-2018, two archaeometallurgical sites, Dogurashi I and Dogurashi II, were investigated. An interdisciplinary (archaeological, geological, and geophysical) survey established that copper was being smelted at both prehistoric Dogurashi mountain sites and that the ore sources are likely to be of local origin. Radiocarbon dating confirms that they were operating between about the 13th and 9th centuries BC. More recently, another similar site (Dogurashi III) has been identified in this area and is about to undergo detailed investigation. Other prehistoric metallurgical sites are being located and investigated in the Lechkhumi region, as are chance archaeological finds (often in hoards): copper ingots, metallurgical production debris, slag, fragments of crucibles, tuyeres (air-delivery pipes), furnace wall fragments, and other related waste debris. Other chance finds under investigation are the many copper, bronze, and (some) iron artefacts found over many years, including copper ingots and copper, bronze, and iron tools, jewellery, and decorative items. These show the important but little-known or little-understood role of Lechkhumi in the Late Bronze Age culture of Colchis. It would seem that mining and metallurgical manufacture formed part of the local yearly agricultural lifecycle. Colchian ceramics have been found, along with evidence of artefact production: small stone mould fragments and encrusted material from the casting of a fylfot (swastika) form of Colchian bronze buckle, found in the vicinities of the early settlements of Tskheta and Dekhviri. Excavation and investigation of previously unknown archaeometallurgical sites in Lechkhumi will contribute significantly to the knowledge and understanding of prehistoric Colchian metallurgy in western Georgia (Adjara, Guria, Samegrelo, and Svaneti) and will reveal the importance of this region in the study of ancient metallurgy in Georgia and the Caucasus. Acknowledgement: This work has been supported by the Shota Rustaveli National Science Foundation (grant FR #217128).

Keywords: archaeometallurgy, Colchis, copper, Lechkhumi

Procedia PDF Downloads 109
1 Provotyping Futures Through Design

Authors: Elisabetta Cianfanelli, Maria Claudia Coppola, Margherita Tufarelli

Abstract:

Design practices throughout history offer a critical understanding of society, since they have always conveyed values and meanings aimed at (re)framing reality by acting in everyday life: here, design gains a cultural and normative character, since its artifacts, services, and environments hold the power to intercept, influence, and inspire thoughts, behaviors, and relationships. In this sense, design can be persuasive, engaging in the production of worlds and, as such, acting in the space between poietics and politics, so that chasing preferable futures and their aesthetic strategies becomes a matter full of political responsibility. This resonates with contemporary landscapes of radical interdependencies, which challenge designers to focus on complex socio-technical systems and to better support values such as equality and justice for both humans and nonhumans. In fact, it is in times of crisis and structural uncertainty that designers turn into visionaries at the service of society, envisioning scenarios and dwelling in the territories of imagination to conceive new fictions and frictions to be added to the thickness of the real. Here, design's main tasks are to develop options, to increase the variety of choices, and to cultivate its role as scout, jester, and agent provocateur for the public, so that design for transformation emerges, making an explicit commitment to society and furthering structural change in a proactive and synergic manner. However, the exploration of possible futures is both a trap and a trampoline because, although it embodies a radical research tool, it raises various challenges when the design process goes further and translates such a vision into an artefact, whether tangible or intangible, through which it should deliver that bit of future into everyday experience. Today, designers are devising new tools and practices to tackle current wicked challenges, combining their approaches with other disciplinary domains: futuring through design thus arises from research strands such as speculative design, design fiction, and critical design, where the blending of design approaches and futures thinking brings an action-oriented, product-based approach to strategic insights. This contribution positions itself at the intersection of those approaches, aiming to discuss the tools of inquiry through which design can grasp the agency of imagined futures in the present. Since futures are not remote, they actively participate in creating path-dependent decisions, crystallised into the designed artifacts par excellence, prototypes, and their conceptual other, provotypes. Both are unfinished and multifaceted, but the first are effective in reiterating solutions to problems already framed, while the second prove useful when the goal is to explore and break boundaries, bringing preferable futures closer. By focusing on provotypes throughout history that have challenged markets and, above all, social and cultural structures, the contribution's final aim is to understand the knowledge produced by provotypes, understood as design spaces where design's humanistic side might help develop a deeper sensibility towards uncertainty and, most of all, the unfinished character of societal artifacts, whose experimentation would leave marks and traces to build up f(r)ictions as vital sparks of plurality and collective life.

Keywords: speculative design, provotypes, design knowledge, political theory

Procedia PDF Downloads 104