Search results for: memory dumping
131 Commodifying Things Past: Comparative Study of Heritage Tourism Practices in Montenegro and Serbia
Authors: Jovana Vukcevic, Sanja Pekovic, Djurdjica Perovic, Tatjana Stanovcic
Abstract:
This paper presents a critical inquiry into the role of uncomfortable heritage in nation branding, with a particular focus on the specificities of the politics of memory, forgetting and revisionism in the post-communist former Yugoslavia. It addresses legacies of an unwanted, ambivalent or unacknowledged past and the different strategies employed by the former Yugoslav states and private actors in “rebranding” their heritage, ensuring its preservation while re-contextualizing the narrative of the past through contemporary tourism practices. It questions the interplay between nostalgia, heritage and market, and the role of heritage in polishing the history of totalitarian and authoritarian regimes in the Balkans. It argues that in post-socialist Yugoslavia, the necessity to limit correlations with the former ideology and the use of the commercial brush in shaping a marketable version of the past instigated the emergence of profit-oriented heritage practices. Building on that argument, the paper addresses these issues as “commodification” and “disneyfication” of the Balkans’ ambivalent heritage, contributing to the analysis of changing forms of memorialisation and heritagization practices in Europe. It questions the process of ‘coming to terms with the past’ through marketable forms of heritage tourism, blurring the boundary between market-driven nostalgia and state-imposed heritage policies. In order to analyse the plurality of ways of dealing with the controversial, ambivalent and unwanted heritage of dictatorships in the Balkans, the paper considers two prominent examples of heritage commodification in Serbia and Montenegro, and the re-appropriation of those narratives for nation branding purposes. The first is the story of Tito’s Blue Train, a landmark of the socialist past and symbol of Yugoslavia that is nowadays used for birthday parties and marriage celebrations, while the second examines the unusual business arrangement turning the fortress Mamula, a concentration camp during the Second World War, into a luxurious Mediterranean resort. Questioning how the ‘uneasy’ past was acknowledged and embedded into official heritage institutions and tourism practices, the study examines the changing relation towards the legacies of dictatorships, inviting us to rethink the economic models of things past. Analysis of these processes should contribute to a better understanding of new mnemonic strategies and (converging?) ways of ‘doing’ the past in Europe.
Keywords: commodification, heritage tourism, totalitarianism, Serbia, Montenegro
Procedia PDF Downloads 252
130 Building Exoskeletons for Seismic Retrofitting
Authors: Giuliana Scuderi, Patrick Teuffel
Abstract:
The proven vulnerability of the existing social housing building heritage to natural or induced earthquakes requires the development of new design concepts and methods to preserve materials and objects while providing new performance. An integrated intervention between civil engineering, building physics and architecture can convert social housing districts from a critical part of the city into a strategic resource for revitalization. Referring to biomimicry principles, the present research proposes an analogy with the exoskeleton of the insect, an external, light and resistant armour whose role is to protect the internal organs from potentially dangerous external inputs. In the same way, a “building exoskeleton”, acting from the outside of the building as an enclosing cage, can restore, protect and support the existing building, assuming a complex set of roles, from the structural to the thermal, from the aesthetic to the functional. This study evaluates the structural efficiency of shape memory alloy devices (SMADs) connecting the “building exoskeleton” with the existing structure to be rehabilitated, in order to prevent the out-of-plane collapse of walls and to passively dissipate seismic energy, with operability calibrated to the intensity of the horizontal loads. Two case studies, a masonry structure and a masonry structure with a concrete frame, are considered, and for each case a theoretical social housing building is exposed to earthquake forces to evaluate its structural response with or without SMADs. The two typologies are modelled in the finite element program SAP2000, respectively as a “frame model” and a “diagonal strut model”. In the same software, two types of SMADs, called the 00-10 SMAD and the 05-10 SMAD, are defined, and non-linear static and dynamic analyses, namely pushover analysis and time history analysis, are performed to evaluate the seismic response of the building. The effectiveness of the devices in limiting the control joint displacements proved higher in one direction, suggesting a possible calibrated use of the devices in the different walls of the building. The results also show a higher efficiency of the 00-10 SMADs in controlling the interstory drift, but at the same time the need to improve the hysteretic behaviour in order to maximise the passive dissipation of seismic energy.
Keywords: adaptive structure, biomimetic design, building exoskeleton, social housing, structural envelope, structural retrofitting
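Because the devices are assessed on how much seismic energy their hysteretic behaviour dissipates, a minimal Python sketch of that quantification is given below; the shoelace formula is standard, while the flag-shaped loop vertices are invented for illustration and are not data from the study:

```python
# A minimal sketch (illustrative data, not from the study) of quantifying
# passive energy dissipation: the area enclosed by a closed force-displacement
# hysteresis loop equals the energy dissipated per loading cycle.
import numpy as np

def dissipated_energy(d, F):
    # Shoelace rule over the ordered vertices of the closed loop polygon.
    return 0.5 * abs(np.sum(d * np.roll(F, -1) - F * np.roll(d, -1)))

# Idealized flag-shaped half-cycle of an SMA device: (displacement, force).
d = np.array([0.0, 1.0, 4.0, 3.2, 0.2])
F = np.array([0.0, 10.0, 13.0, 5.2, 2.2])
print(dissipated_energy(d, F))  # energy per cycle, in force x displacement units
```

A flatter, wider loop encloses more area, which is why improving the hysteretic behaviour of the 00-10 SMAD directly increases the seismic energy it can dissipate.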
Procedia PDF Downloads 420
129 Into the Dreamweaver’s World of the Mandaya and the Tboli: From Folklore to the Woven Fabric
Authors: Genevieve Jorolan Quintero
Abstract:
In Mindanao, the southern island of the Philippines, the two provinces of Davao Oriental and South Cotabato are home to indigenous communities known for their dream weavers. Davao Oriental is home to the Mandaya, while Lake Sebu in South Cotabato is home to the Tboli. The dream weavers are mostly women who have continued the tradition of weaving, a spiritual practice of handicraft embodying the beliefs of the community. It is believed that a weaver is guided by the Tagamaling, the nature spirit in Mandaya mythology, or by Fu Dalu, the spirit of the abaca among the Tboli. In the dream, the Tagamaling or Fu Dalu reveals to the weaver the design or pattern of the dagmay, as the abaca woven cloth is called among the Mandaya, or of the tnalak among the Tboli. The weaver then undertakes the production of this nature-spirit-inspired fabric based on her memory of the dream. This interaction between the spirit world and the human world inspired the theme of the short story Loom of Dreams, published in 2015 by Kritika Kultura, an international peer-reviewed journal of language and literary/cultural studies of the Ateneo de Manila University in the Philippines. In Lake Sebu, a collection of the legendary tnalak with various designs is preserved by the cultural advocate and tnalak collector Reden S. Ulo; about a hundred tnalak designs are housed in a mini museum. The paper discusses how the dagmay and the tnalak of the two Philippine indigenous communities, the Mandaya and the Tboli, embody their folklore and cultural heritage. The specific objectives are: 1. to describe the role of the dream weavers among the Mandaya and Tboli communities in the Philippines; 2. to analyse how folklore influences the designs on the woven fabric, the dagmay and the tnalak; and 3. to discuss how dream-weaving helps preserve cultural legacy. Ethnography was used in the conduct of this research. Specifically, the following data collection methods were employed: 1. a series of visits to the Mandaya and Tboli communities; 2. face-to-face interviews with respondents from the communities; and 3. the recording of the interviews with the knowledge-bearers and material culture keepers from both communities, the narratives of which were used as the basis for the data analysis. The influence of folklore on the culture and the arts of the indigenous communities is significantly evident in the designs of the dagmay and the tnalak. As the dream weavers continue to weave the dagmay and the tnalak, this cultural legacy will continue to prosper and be preserved for posterity.
Keywords: dream weavers, Mandaya, Mindanao, Philippine folklore, Tboli
Procedia PDF Downloads 101
128 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compels computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
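To make the coding idea above concrete, here is a minimal Python sketch of polynomial-coded distributed matrix multiplication, a standard construction in this literature rather than the authors' PSGPD/SGPD scheme; the block counts, evaluation points, and straggler pattern are invented for illustration. Each worker multiplies two small encoded blocks, and the master recovers W = XY from any a·b completed tasks (the recovery threshold):

```python
# A minimal numpy sketch (not the authors' scheme) of polynomial-coded
# distributed matrix multiplication W = X Y with straggler tolerance.
import numpy as np

def encode(X, Y, a, b, points):
    # Split X into `a` row blocks and Y into `b` column blocks.
    Xb = np.split(X, a, axis=0)           # X_i, i = 0..a-1
    Yb = np.split(Y, b, axis=1)           # Y_j, j = 0..b-1
    tasks = []
    for t in points:                      # one evaluation point per worker
        Xt = sum(Xi * t**i for i, Xi in enumerate(Xb))
        Yt = sum(Yj * t**(j * a) for j, Yj in enumerate(Yb))
        tasks.append((Xt, Yt))
    return tasks

def decode(results, points, a, b):
    # Worker t returns Xt @ Yt = sum_{i,j} X_i Y_j t^(i + j*a): entrywise a
    # degree-(a*b - 1) polynomial in t, so any a*b results determine it.
    k = a * b
    V = np.vander(np.array(points[:k]), k, increasing=True)
    stacked = np.stack(results[:k])                   # (k, m/a, p/b)
    coeffs = np.linalg.solve(V, stacked.reshape(k, -1))
    blocks = coeffs.reshape(k, *results[0].shape)     # block e = X_i Y_j, e = i + j*a
    rows = [np.hstack([blocks[i + j * a] for j in range(b)]) for i in range(a)]
    return np.vstack(rows)

# Example: 6 workers, recovery threshold a*b = 4, so 2 stragglers tolerated.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((4, 6)), rng.standard_normal((6, 8))
a, b, points = 2, 2, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
tasks = encode(X, Y, a, b, points)
results = [Xt @ Yt for Xt, Yt in tasks]   # pretend workers 4 and 5 straggle
W = decode(results[:4], points[:4], a, b)
assert np.allclose(W, X @ Y)
```

Privacy-preserving variants of this construction append random matrices as extra polynomial coefficients, so that any bounded set of colluding workers observes only uniformly random encoded blocks.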
Procedia PDF Downloads 125
127 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception
Authors: Păcurar Diana Istina
Abstract:
The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being looked at, are the basis of this practical demonstration of the importance of aesthetic perception within the theatre performance. The design of a theatre performance includes several structures, some considered from the beginning as art forms (i.e., the text), others represented by simple, common objects (e.g., scenographic elements), which, if reunited, can trigger a certain aesthetic perception. The audience is delivered, by the team involved in the performance, a series of auditory and visual signs with which they interact. It is necessary to explain some notions about the physiological support of the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center of extrinsic and intrinsic stimuli, permanently processes the information received, but even if that information is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the “correctness” of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether we refer to visual or auditory emotions, involves the integration of three areas of interest: valence, arousal and context control. In this context, superior human cognitive processes (memory, interpretation, learning, attribution of meanings, etc.) help trigger the mechanism of anticipation and, no less important, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental in the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis centred on the percentage of signs used in a theatre performance.
Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception
Procedia PDF Downloads 91
126 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing
Authors: Tolulope Aremu
Abstract:
This paper is based on the idea of using deep learning methodology for optimizing production yield by tuning a few key process parameters in a manufacturing environment. The study focused explicitly on how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). These models were implemented using the Python-based frameworks TensorFlow and Keras. The targets of the research are precision molding processes in which temperature ranges between 150°C and 220°C, pressure ranges between 5 and 15 bar, and material flow rate ranges between 10 and 50 kg/h; these are critical parameters with a great effect on yield. A dataset of 1 million production cycles spanning five continuous years was considered, with detailed logs of the exact parameter settings and yield output. The LSTM model captured time-dependent trends in the production data, while the CNN analyzed spatial correlations between parameters. The models were trained in a supervised manner, with a mean squared error (MSE) loss optimized through the Adam optimizer. After a total of 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Compared with the traditional methods of response surface methodology (RSM) and design of experiments (DOE), production yield increased by 12%. In addition, the error margin was reduced by 8%, so the deep learning models delivered consistent product quality. The monetary value amounted to around $2.5 million annually, saved on material waste, energy consumption, and equipment wear as a result of implementing the optimized process parameters. The system was deployed in an industrial production environment on a hybrid cloud setup: Microsoft Azure for data storage, and Google Cloud AI for model training and deployment. Real-time process monitoring and automatic parameter tuning depend on this cloud infrastructure. In short, deep learning models, especially those employing LSTM and CNN architectures, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving
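As a concrete illustration of the modelling stack named above (Python, TensorFlow, Keras), here is a minimal sketch of an LSTM regressor mapping windows of process parameters to yield; the window length, layer sizes, and synthetic data are assumptions, since the paper does not publish its exact architecture:

```python
# A minimal Keras sketch (assumed architecture, not the authors' exact model)
# of an LSTM regressor mapping process-parameter sequences to yield.
import numpy as np
from tensorflow import keras

WINDOW, N_PARAMS = 50, 3   # 50 past cycles; temperature, pressure, flow rate

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, N_PARAMS)),
    keras.layers.LSTM(64, return_sequences=True),
    keras.layers.LSTM(32),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),                 # predicted yield
])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")

# Synthetic stand-in for the logged production cycles.
X = np.random.rand(1000, WINDOW, N_PARAMS).astype("float32")
y = X[:, -1, :].mean(axis=1, keepdims=True)    # dummy yield target
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

In a setup like this, candidate parameter configurations can be scored by the trained model and the highest-predicted-yield setting recommended to the line.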
Procedia PDF Downloads 34
125 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers
Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver
Abstract:
Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces, as well as cholesterol molecules, have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent charge decomposition analysis calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro- or nanosecond scales. As such, the development of future models that attempt to explain faster-timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast-timescale energy transfer mechanisms occurring through interfacial water. The study's dataset includes six distinct phospholipids and a collection of cholesterol molecules. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN
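A minimal sketch of the ANN classifier described above might look as follows; only the ten geometric input features and the binary cholesterol-vs-phospholipid label come from the abstract, while the layer sizes, training settings, and placeholder data are assumptions:

```python
# A minimal Keras sketch (assumed setup) of the ANN binary classifier:
# 10 geometric features in, cholesterol-vs-phospholipid label out.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(10,)),              # 10 optimized geometric features
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # P(molecule is cholesterol)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data: rows = molecules, columns = features, label 1 = cholesterol.
X = np.random.rand(70, 10).astype("float32")
y = (np.random.rand(70, 1) > 0.5).astype("float32")
model.fit(X, y, epochs=20, batch_size=8, verbose=0)
```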
Procedia PDF Downloads 75
124 Microglia Activation in Animal Model of Schizophrenia
Authors: Esshili Awatef, Manitz Marie-Pierre, Eßlinger Manuela, Gerhardt Alexandra, Plümper Jennifer, Wachholz Simone, Friebe Astrid, Juckel Georg
Abstract:
Maternal immune activation (MIA) resulting from maternal viral infection during pregnancy is a known risk factor for schizophrenia. The neural mechanisms by which maternal infections increase the risk for schizophrenia remain unknown, although the prevailing hypothesis argues that an activation of the maternal immune system induces changes in the maternal-fetal environment that might interact with fetal brain development. This may lead to an activation of fetal microglia, inducing long-lasting functional changes in these cells. Based on post-mortem analyses showing an increased number of activated microglial cells in patients with schizophrenia, it can be hypothesized that these cells contribute to disease pathogenesis and may be actively involved in the gray matter loss observed in such patients. In the present study, we hypothesize that prenatal treatment with the inflammatory agent Poly(I:C) during embryogenesis contributes to microglial activation in the offspring, which may therefore represent a contributing factor to the pathogenesis of schizophrenia and underlines the need for new pharmacological treatment options. Pregnant rats were treated with a single intraperitoneal injection of Poly(I:C) or saline on gestation day 17. Brains of control and Poly(I:C) offspring were removed and cut into 20-μm-thick coronal sections using a cryostat. Brain slices were fixed and immunostained with an Iba1 antibody. Subsequently, Iba1 immunoreactivity was detected using a secondary antibody, goat anti-rabbit. The sections were viewed and photographed under a microscope. The immunohistochemical analysis revealed increases in microglial cell number in the prefrontal cortex of the offspring of Poly(I:C)-treated rats as compared to the controls injected with NaCl. However, no significant differences in microglial activation were observed in the cerebellum among the groups. Prenatal immune challenge with Poly(I:C) was thus able to induce long-lasting changes in the offspring brains. This led to a higher activation of microglial cells in the prefrontal cortex, a brain region critical for many higher brain functions, including working memory and cognitive flexibility, which might be implicated in possible changes in cortical neuropil architecture in schizophrenia. Further studies will be needed to clarify the association between microglial cell activation and schizophrenia-related behavioral alterations.
Keywords: microglia, neuroinflammation, Poly(I:C), schizophrenia
Procedia PDF Downloads 417
123 Envisioning The Future of Language Learning: Virtual Reality, Mobile Learning and Computer-Assisted Language Learning
Authors: Jasmin Cowin, Amany Alkhayat
Abstract:
This paper concentrates on a comparative analysis of both the advantages and limitations of digital learning resources (DLRs) in language education. The DLRs covered are Virtual Reality (VR), Mobile Learning (M-learning) and Computer-Assisted Language Learning (CALL), together with its subset, Mobile-Assisted Language Learning (MALL). In addition, best practices for language teaching and the application of established language teaching methodologies such as Communicative Language Teaching (CLT), the audio-lingual method, or community language learning are explored. Education has changed dramatically since the eruption of the pandemic. Traditional face-to-face education was disrupted on a global scale. The rise of distance learning brought new digital tools to the forefront, especially web conferencing tools, digital storytelling apps, test authoring tools, and VR platforms. Language educators raced to vet, learn, and implement multiple technology resources suited for language acquisition. Yet questions remain on how to harness new technologies, digital tools, and their ubiquitous availability while using established methods and methodologies in language learning paired with best teaching practices. In M-learning, language learners employ portable computing devices such as smartphones or tablets. CALL is a language teaching approach that uses computers and other technologies to present, reinforce, and assess language material to be learned, or to create environments where teachers and learners can meaningfully interact. In VR, a computer-generated simulation enables learner interaction with a 3D environment via a screen, a smartphone, or a head-mounted display. Research supports that VR for language learning is effective in terms of exploration, communication, engagement, and motivation. Students are able to engage in role-play activities, interact with 3D objects, and take part in activities such as virtual field trips. VR lends itself to group language exercises in the classroom, with target-language practice in an immersive virtual environment. Students, teachers, schools, language institutes, and institutions benefit from specialized support that helps them acquire second language proficiency and content knowledge building on their cultural and linguistic assets. Through the purposeful application of different language methodologies and teaching approaches, language learners can not only make cultural and linguistic connections in DLRs but also practice grammar drills, play memory games, or flourish in authentic settings.
Keywords: language teaching methodologies, computer-assisted language learning, mobile learning, virtual reality
Procedia PDF Downloads 241
122 Development of Taiwanese Sign Language Receptive Skills Test for Deaf Children
Authors: Hsiu Tan Liu, Chun Jung Liu
Abstract:
Developing a sign language receptive skills test serves multiple purposes. For example, such a test can be an important educational tool and a means of understanding the sign language ability of deaf children. No test for these purposes has been available in Taiwan. Through expert discussion and reference to the standardized Taiwanese Sign Language Receptive Test for adults and adolescents, the framework of the Taiwanese Sign Language Receptive Skills Test (TSL-RST) for deaf children was developed, and its items were designed. After multiple rounds of pre-trials, discussion and correction, the TSL-RST was finalized; it can be administered and scored online. Thirty-three deaf children from all three deaf schools in Taiwan agreed to be tested. Through item analysis, items with a good discrimination index and a fair difficulty index were retained (see the sketch below). Psychometric indices of reliability and validity were then established, and a regression formula was derived that can predict the sign language receptive skills of deaf children. The main results of this study are as follows. (1) The TSL-RST includes three sub-tests: vocabulary comprehension, syntax comprehension and paragraph comprehension, with 21, 20 and 9 items, respectively. (2) The TSL-RST can be administered individually online. The sign language ability of deaf students can be computed quickly and objectively, so that they receive feedback and results immediately, which can contribute to both teaching and research. Most subjects completed the test within 25 minutes, and during the test they could answer the questions without relying on their reading ability or memory capacity. (3) The vocabulary comprehension sub-test is the easiest, syntax comprehension is harder, and paragraph comprehension is the hardest. Each of the three sub-tests, and the test as a whole, shows good item discrimination. (4) The psychometric indices are good, including internal consistency reliability (Cronbach's α), test-retest reliability, split-half reliability, and content validity. Sign language ability is significantly related to non-verbal IQ, teachers' ratings of the students' sign language ability, and students' self-ratings of their own sign language ability. Higher-grade students performed better than lower-grade students, and students with deaf parents performed better than those with hearing parents, giving the TSL-RST good discriminant validity. (5) The predictors of sign language ability in primary-school deaf students are age and the number of years since they started learning sign language. These results suggest that the TSL-RST can effectively assess deaf students' sign language ability. The study also proposes a model for developing sign language tests.
Keywords: comprehension test, elementary school, sign language, Taiwan sign language
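The item analysis mentioned above can be illustrated with a minimal Python sketch; difficulty is the proportion of children answering an item correctly, discrimination is the corrected item-total correlation, and the retention thresholds and placeholder scores are assumptions rather than values from the study:

```python
# A minimal numpy sketch (illustrative only) of classical item analysis:
# difficulty = proportion correct; discrimination = corrected item-total
# (point-biserial) correlation.
import numpy as np

def item_analysis(responses):
    """responses: (n_children, n_items) 0/1 matrix of scored answers."""
    difficulty = responses.mean(axis=0)
    total = responses.sum(axis=1)
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]      # total score excluding item j
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

rng = np.random.default_rng(0)
scores = (rng.random((33, 50)) > 0.4).astype(int)   # 33 children, 50 items
difficulty, discrimination = item_analysis(scores)
# Assumed retention rule: moderate difficulty and positive discrimination.
keep = (0.2 < difficulty) & (difficulty < 0.9) & (discrimination > 0.3)
```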
Procedia PDF Downloads 189
121 Meiji Centennial as a Media Event: Ideas for Upcoming Turkish Republic Centennial
Authors: Hasan Topacoglu
Abstract:
The Meiji Restoration was the chain of events that restored imperial rule in Japan in 1868, and it is considered the beginning of Japanese modernization by many scholars. In 1968, to honor its modern incarnation, Japan celebrated the Meiji Centennial as one of the country's biggest media events after World War II. It was celebrated all around the country throughout the year, with a central event in Tokyo. Meanwhile, Japanese scholars started an opposition movement and claimed that the Government was using the event to stir up nationalism, pointing at the Government's statement on the meaning of Meiji. Most scholars, unfortunately, were fixated on the ideological problem of the Government's way of planning and evaluated the event as a failure. However, they missed an important point: apart from the central event in Tokyo, each city planned its own event and celebrated it on a different date and with a different theme. Kyoto, for example, showed a regional character, focused on Kyoto's own culture and traditions, and highlighted a past reaching further back than 100 years. This was mainly because some areas and cities held a different 'memory' of the Meiji Restoration than Tokyo, which was reflected in the way they celebrated the Meiji Centennial. Meanwhile, 2023 will be the year of the Turkish Republic Centennial, a year that will be marked by national and perhaps even international events. Although an official committee has not yet been announced, the 2023 Vision, a list of goals, has been released by the Government to coincide with the centenary of the Republic of Turkey in 2023, and some ongoing projects are planned to be completed by then. Judging by the content of these projects, the Government appears to be focusing on modernization through the Centennial. However, some projects already show interesting characteristics, such as the Istanbul New Airport, whose design is inspired by the Islamic-Ottoman figure of the Selimiye Mosque. It is true that Turkey and Japan have different historical backgrounds and that the timelines of the Meiji Restoration and the foundation of the Turkish Republic differ; a direct comparison between the two events is therefore not justified. Yet they may have more in common than we tend to think, because each marked the start of a new nation conceived on modern principles. For that reason, it is important to understand the similarities and differences between the Meiji Centennial and the Turkish Republic Centennial as media events. This study introduces the Meiji Centennial as a media event and analyses the opposition movement along with the meaning of the Meiji Centennial. It explains regional differences in character, with Kyoto as an example. It also introduces some of the ongoing Centennial projects in Turkey and analyses the meaning of the Turkish Republic Centennial through these projects. Without comparing Japan and Turkey directly, it explains the case of Japan, but the discussion centers on deepening our understanding of the centennial as a media event and remarks on some important aspects for Turkey's upcoming Centennial events.
Keywords: media events, Meiji centennial, the 2023 vision, Turkish republic centennial
Procedia PDF Downloads 332
120 Electronic Structure Studies of Mn Doped La₀.₈Bi₀.₂FeO₃ Multiferroic Thin Film Using Near-Edge X-Ray Absorption Fine Structure
Authors: Ghazala Anjum, Farooq Hussain Bhat, Ravi Kumar
Abstract:
Multiferroic materials are vital for new applications and memory devices, not only because of the presence of multiple types of domains but also as a result of the cross-correlation between coexisting forms of magnetic and electrical order. In spite of wide-ranging studies on multiferroic bulk ceramic materials, their realization in thin-film form is still limited due to some crucial problems. During the last few years, special attention has been devoted to the synthesis of thin films such as BiFeO₃, as they allow direct integration of the material into device technology. Therefore, as part of the exploration of new multiferroic thin films, we have prepared and characterized a La₀.₈Bi₀.₂Fe₀.₇Mn₀.₃O₃ (LBFMO) thin film on a LaAlO₃ (LAO) substrate with LaNiO₃ (LNO) as the buffer layer. The fact that all the electrical and magnetic properties are closely related to the electronic structure makes it indispensable to study the electronic structure of the system under investigation; without this knowledge, one can never be sure about the mechanisms responsible for the different properties exhibited by the thin film. A literature review reveals that studies on changes in the atomic and hybridization states in multiferroic samples are still insufficient, except for a few. The technique of X-ray absorption spectroscopy (XAS) has made great strides towards the goal of providing such information, as it yields a unique signature for a given material. In this context, we report electronic structure studies of a well-characterized LBFMO multiferroic thin film on an LAO substrate with LNO as buffer layer, synthesized by RF sputtering, using near-edge X-ray absorption fine structure (NEXAFS). The present exploration was performed to find out the valence state and crystal field symmetry of the ions present in the system. NEXAFS data of the O K-edge spectra reveal a slight shift in peak position along with a growth in the intensities of the low-energy features. Studies of the Mn L₃,₂-edge spectra indicate the presence of a Mn³⁺/Mn⁴⁺ network, apart from a very small contribution from Mn²⁺ ions in the system, which substantiates the magnetic properties exhibited by the thin film. The Fe L₃,₂-edge spectra, along with the spectra of reference compounds, reveal that the Fe ions are present in the +3 state. The electronic structure and valence states are found to be in accordance with the magnetic properties exhibited by the LBFMO/LNO/LAO thin film.
Keywords: magnetic, multiferroic, NEXAFS, X-ray absorption fine structure, XMCD, X-ray magnetic circular dichroism
Procedia PDF Downloads 159
119 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations
Authors: Priyanka Bharti
Abstract:
Decades ago, designs were based on common sense and tradition, but with advances in visualization technology and research, we are now able to comprehend the cognitive abilities involved in the decoding of visual information. However, many aspects of visuals still require intense research before the underlying events can be explained efficiently. Visuals are a mode of information representation through images, symbols and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension, and analysis of a situation. They enhance problem-solving capabilities by enabling the processing of more data without overloading the decision maker. Research suggests that visuals offer an improved learning environment by a factor of 400 compared to textual information. Visual information engages learners at a cognitive level and triggers the imagination, which enables the user to process the information faster (visuals are processed 60,000 times faster in the brain than text). Appropriate information, its visualization, and its presentation are known to aid and intensify the decision-making process for users. However, most literature discusses the role of visual aids in comprehension and decision making during normal conditions alone. Unlike in emergencies, in a normal situation (e.g., day-to-day life) users are neither exposed to stringent time constraints nor face the anxiety of survival, and they have sufficient time to evaluate various alternatives before making any decision. An emergency is an unexpected, possibly fatal real-life situation which may inflict serious harm on both human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario with little or no preparation, yet still take swift and appropriate decisions to save lives or possessions. The resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in evaluating alternative options, and result in task shedding. Limited time, uncertainty, high stakes and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, the naturalistic decision making of experts has been understood in far more depth than that of ordinary users. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension so that appropriate decisions can be taken during an emergency situation.
Keywords: cognition, visual, decision making, graphics, recognition
Procedia PDF Downloads 269
118 The Degree Project-Course in Swedish Teacher Education – Deliberative and Transformative Perspectives on the Formative Assessment Practice
Authors: Per Blomqvist
Abstract:
The overall aim of this study is to highlight how the degree project-course in teacher education has developed over time at Swedish universities, above all regarding changes in the formative assessment practices in relation to students' opportunities to take part in writing processes that can develop their independent critical thinking, subject knowledge, and academic writing skills. Theoretically, the study is based on deliberative and transformative perspectives on teaching academic writing in higher education. The deliberative perspective is motivated by the fact that it is the responsibility of the universities and their departments to give students opportunities to develop their academic writing skills, while there is little guidance on how this can be implemented. The transformative perspective is motivated by the fact that education needs to be adapted to students' prior knowledge and developed in relation to the student group; given the academisation of teacher education and the new student groups, this is a necessity. The empirical data consist of video recordings of teacher groups' conversations at three Swedish universities. The conversations were conducted as so-called collective remembering interviews, a method that stimulates the participants' memory through social interaction, and focused on how the degree project-course in teacher education has changed over time. Topic analysis was used to analyze the conversations in order to identify common descriptions and expressions among the teachers. The results highlight great similarities in how the degree project-course has changed over time, from both a deliberative and a transformative perspective. The course is characterized by 'strong framing', where the teachers exert great control over the work through detailed instructions for the writing process and detailed templates for the text. This is justified by the fact that the education has been adapted to the student teachers' lack of prior subject knowledge. Strong framing places high demands on continuous discussion between teachers about, for example, which tools the students bring with them and which linguistic and textual tools are offered in the education. The teachers describe how such governance often leads to conflicts between teachers from different departments, because reading and writing are always part of cultural contexts and are linked to different knowledge, traditions, and values. The problem made visible in this study raises questions about how students' opportunities to develop independence and make critical judgments in academic writing are affected if the writing becomes too controlled and if passing students becomes the main goal of education.
Keywords: formative assessment, academic writing, degree project, higher education, deliberative perspective, transformative perspective
Procedia PDF Downloads 65
117 The Markers -mm and dämmo in Amharic: Developmental Approach
Authors: Hayat Omar
Abstract:
Languages provide speakers with a wide range of linguistic units to organize and deliver information. There are several ways to verbally express the mental representations of events, and according to the linguistic tools they have acquired, speakers select the one that has the greatest communicative effect in conveying their message. Our study focuses on two markers, -mm and dämmo, in Amharic (an Ethiopian Semitic language). Our aim is to examine, from a developmental perspective, how they are used by speakers and to distinguish the communicative and pragmatic functions indicated by means of these markers. To do so, we created a corpus of sixty narrative productions by children aged 5-6, 7-8 and 10-12 years and by adult Amharic speakers. The experimental material used to collect our data is the textless picture book 'Frog, Where Are You?'. Although -mm and dämmo are each used in specific contexts, they are sometimes analyzed as being interchangeable. The suffix -mm is complex and multifunctional: it marks the end of the negative verbal structure, it appears in the relative structure of the imperfect, it creates new words such as adverbials or pronouns, and it also serves to coordinate words and sentences and to mark the link between macro-propositions within a larger textual unit. -mm has been analyzed as a marker of insistence, a topic-shift marker, an element of concatenation, a contrastive focus marker, and a 'bisyndetic' coordinator. dämmo, on the other hand, has a more limited function and has attracted the attention of few authors; the only approach we could find analyzes it as a 'monosyndetic' coordinator. Setting these two elements side by side made it possible to understand their distinctive functions and refine their description. When it comes to marking a referent, the choice of -mm or dämmo is not neutral: it depends on whether the tagged argument is newly introduced, maintained, promoted or reintroduced. The presence of these morphemes explains the inter-phrastic link. The information is grasped by anaphora or presupposition: -mm points upstream while dämmo points downstream; the latter requires new information. The speaker uses -mm or dämmo according to what he assumes to be known to his interlocutors. The results show that although all speakers use both markers, the markers do not always have the same scope, and usage varies with age. dämmo is mainly used to mark a contrastive topic signalling the concomitance of events, and it is more common in young children's narratives (F(3,56) = 3.82, p < .01). Some values of -mm (additive) are acquired very early, while others appear rather late and increase with age (F(3,56) = 3.2, p < .03). The difficulty is due not only to its synthetic structure but primarily to its multifunctionality, which requires memory work. -mm highlights the constituent on which it operates in order to clarify how the message should be interpreted.
Keywords: acquisition, cohesion, connection, contrastive topic, contrastive focus, discourse marker, pragmatics
Procedia PDF Downloads 134
116 A Review of Brain Implant Device: Current Developments and Applications
Authors: Ardiansyah I. Ryan, Ashsholih K. R., Fathurrohman G. R., Kurniadi M. R., Huda P. A
Abstract:
The burden of brain-related disease is very high. Many brain-related diseases have limited treatment results, which raises the burden further. Treatments for Parkinson's disease (PD), mental health problems, and paralysis of the extremities have raised concern, as patients with these conditions usually have a low quality of life and a low chance of full recovery. Many other brain and related neural diseases are in a similar position, mainly because treatments remain limited while our understanding of brain function is insufficient. Brain implant technology has given hope for treating these conditions. In this paper, we examine the current state of brain implant technology. Neurotechnology is growing very rapidly worldwide. The United States Food and Drug Administration (FDA) has approved the use of Deep Brain Stimulation (DBS) as a brain implant in humans. As for neural implants, both cochlear implants and retinal implants are FDA-approved as well, and all of them have shown promising results. DBS works by stimulating a specific region of the brain with electricity, and the device is implanted surgically into a very specific brain region. It consists of three main parts: a lead (a thin wire inserted into the brain), a neurostimulator (a pacemaker-like device implanted surgically in the chest), and an external controller (with which the patient or programmer turns the device on and off). The FDA has approved DBS for the treatment of PD, pain management, epilepsy, and obsessive-compulsive disorder (OCD). In PD, DBS aims to reduce tremor and dystonia symptoms. DBS has shown promising results in animal studies and limited human trials for other conditions, such as Alzheimer's disease and mental health problems (major depression, Tourette syndrome). Every surgery carries a risk of complications, although in DBS the chance is very low, and DBS itself gives very satisfying results as long as subjects are strictly selected for implantation based on indication. Besides DBS, several other brain implant devices are still under development, including (but not limited to) implants to treat paralysis (in spinal cord injury/amyotrophic lateral sclerosis), enhance memory, reduce obesity, treat mental health problems, and treat epilepsy. The potential of neurotechnology is unlimited. When brain function is fully understood and brain implants are fully developed, this may be one of the major breakthroughs in human history, comparable to when humans first harnessed fire. Support from every sector for further research is sorely needed to develop and unveil the true potential of this technology.
Keywords: brain implant, deep brain stimulation (DBS), Parkinson
Procedia PDF Downloads 155
115 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0
Authors: Harris Niavis, Dimitra Politaki
Abstract:
The manufacturing industry has been collecting vast amounts of data for monitoring product quality thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. At the same time, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values, such as outliers and extreme deviations, in the incoming data. Specifically, autoregressive integrated moving average (ARIMA) models, long short-term memory (LSTM) and dense autoencoders, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs that expose the functionality to end users. The results of this work demonstrate that such a system can increase the quality of the end products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify quality records along the entire production chain and to take advantage of the multitude of monitoring records in their databases.
Keywords: blockchain, data quality, Industry 4.0, product quality
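As an illustration of the reconstruction-based part of such a pipeline, here is a minimal Keras sketch of an LSTM autoencoder that flags anomalous sensor windows by reconstruction error; the window size, layer widths, threshold percentile, and synthetic data are assumptions, not details from the paper:

```python
# A minimal Keras sketch (assumed configuration) of an LSTM autoencoder that
# flags anomalous sensor windows by their reconstruction error.
import numpy as np
from tensorflow import keras

WINDOW, N_SENSORS = 30, 4

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, N_SENSORS)),
    keras.layers.LSTM(32),                          # encoder
    keras.layers.RepeatVector(WINDOW),
    keras.layers.LSTM(32, return_sequences=True),   # decoder
    keras.layers.TimeDistributed(keras.layers.Dense(N_SENSORS)),
])
model.compile(optimizer="adam", loss="mse")

# Train on windows of normal operation only.
normal = np.random.rand(500, WINDOW, N_SENSORS).astype("float32")
model.fit(normal, normal, epochs=10, batch_size=32, verbose=0)

def reconstruction_error(batch):
    return np.mean((model.predict(batch, verbose=0) - batch) ** 2, axis=(1, 2))

# Assumed rule: flag windows whose error exceeds the 99th percentile on normal data.
threshold = np.percentile(reconstruction_error(normal), 99)
is_anomalous = lambda batch: reconstruction_error(batch) > threshold
```

Only measurements that pass such a check would then be anchored on the ledger via the smart contracts.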
Procedia PDF Downloads 191
114 The Negative Implications of Childhood Obesity and Malnutrition on Cognitive Development
Authors: Stephanie Remedios, Linda Veronica Rios
Abstract:
Background. Pediatric obesity is a serious health problem linked to multiple physical diseases and ailments, including diabetes, heart disease, and joint issues. While research has shown that pediatric obesity can bring about an array of physical illnesses, less is known about how the condition affects children's cognitive development. With childhood overweight and obesity prevalence rates on the rise, it is essential to understand the scope of their cognitive consequences. The present review of the literature tested the hypothesis that poor physical health, such as childhood obesity or malnutrition, negatively impacts a child's cognitive development. Methodology. A systematic review was conducted to determine the relationship between poor physical health and lower cognitive functioning in children aged 4-16. Electronic databases were searched for studies dating back ten years. The following databases were used: Science Direct, FIU Libraries, and Google Scholar. Inclusion criteria consisted of peer-reviewed academic articles written in English from 2012 to 2022 that analyzed the relationship between childhood malnutrition or obesity and cognitive development. A total of 17,000 articles were obtained, of which 16,987 were excluded for not addressing the cognitive implications exclusively. Of the acquired articles, 13 were retained. Results. Research suggested a significant connection between diet and cognitive development. Both diet and physical activity are strongly correlated with higher cognitive functioning. Cognitive domains explored in this work included learning, memory, attention, inhibition, and impulsivity. IQ scores were also considered objective representations of overall cognitive performance. Studies showed that physical activity benefits cognitive development, primarily executive functioning and language development. Additionally, children suffering from pediatric obesity or malnutrition were found to score 3-10 points lower on IQ tests when compared to healthy children of the same age. Conclusion. This review provides evidence that physical activity and overall physical health, including appropriate diet and nutritional intake, have beneficial effects on cognitive outcomes. The primary conclusion from this research is that childhood obesity and malnutrition have detrimental effects on children's cognitive development, primarily on learning outcomes. Assuming childhood obesity and malnutrition rates continue their current trend, it is essential to understand the complete physical and psychological implications of obesity and malnutrition in pediatric populations. Given the limitations encountered in our research, further studies are needed to evaluate the areas of cognition affected during childhood.
Keywords: childhood malnutrition, childhood obesity, cognitive development, cognitive functioning
Procedia PDF Downloads 119
113 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score; others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation (SMT) model and one from a neural machine translation (NMT) model. The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a 'ground-truth' annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM). Better accuracy results are obtained when LSTM layers are used in the schema. In terms of balance between the classes, better results are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems have been identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to investigate why all the classifiers led to worse accuracy results for Italian as compared to Greek, given that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
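A minimal Keras sketch of the pairwise classification schema described above is shown below; the representation size, layer widths, and placeholder data are assumptions, as the paper does not list its exact configuration:

```python
# A minimal Keras sketch (assumed dimensions and layout, not the authors' exact
# network) of pairwise classification: given feature vectors for the SMT output,
# the NMT output and the reference, predict which MT output is better.
import numpy as np
from tensorflow import keras

DIM = 300   # assumed size of each sentence representation

smt = keras.layers.Input(shape=(DIM,), name="smt")
nmt = keras.layers.Input(shape=(DIM,), name="nmt")
ref = keras.layers.Input(shape=(DIM,), name="reference")

x = keras.layers.Concatenate()([smt, nmt, ref])
x = keras.layers.Dense(128, activation="relu")(x)
x = keras.layers.Dense(64, activation="relu")(x)
out = keras.layers.Dense(1, activation="sigmoid", name="nmt_is_better")(x)

model = keras.Model([smt, nmt, ref], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data: label 1 means the NMT output was judged better.
n = 256
data = [np.random.rand(n, DIM).astype("float32") for _ in range(3)]
labels = (np.random.rand(n, 1) > 0.5).astype("float32")
model.fit(data, labels, epochs=3, batch_size=32, verbose=0)
```

Swapping the two dense layers for CNN or LSTM layers over token-level representations gives the other two architectures compared in the paper.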
Procedia PDF Downloads 133
112 Transgenerational Impact of Intrauterine Hyperglycaemia to F2 Offspring without Pre-Diabetic Exposure on F1 Male Offspring
Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng
Abstract:
Adverse intrauterine stimuli during critical or sensitive periods in early life may lead to health risks not only later in the life span but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and the F1 gamete cells. However, there is scarce information on the phenotypic differences in metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia, and the direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as our founders, to exclude postnatal diabetic influence on the gametes, thereby investigating the direct transmission effect of intrauterine hyperglycaemia exposure on F2 offspring; we further compared the metabolic differences between the affected F1-GDM male offspring and the F2 offspring. A GDM mouse model of intrauterine hyperglycaemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers, and all mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain F2 offspring. Body weight, a glucose tolerance test, an insulin tolerance test (ITT) and the homeostasis model assessment of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some F1-GDM male mice showed impaired glucose tolerance (p < 0.001); none showed impaired insulin sensitivity, and the body weight of F1-GDM mice did not differ significantly from that of control mice. Some F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all F2-GDM offspring exhibited a higher HOMA-IR index (p < 0.01 for normal-glucose-tolerance individuals vs. control; p < 0.05 for glucose-intolerant individuals vs. control). All F2-GDM offspring exhibited a higher ITT curve than controls (p < 0.001 for normal-glucose-tolerance individuals; p < 0.05 for glucose-intolerant individuals, vs. control), and F2-GDM offspring had higher body weight than control mice (p < 0.001 for normal-glucose-tolerance individuals; p < 0.001 for glucose-intolerant individuals, vs. control). While glucose intolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male offspring of healthy F1-GDM fathers showed insulin resistance, increased body weight and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently, so that F1 and F2 offspring demonstrate distinct metabolic dysfunction phenotypes, and that intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure.
Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring
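For reference, the HOMA-IR index used above is conventionally computed from fasting measurements; the standard formulation, which we assume is the variant applied here, is:

```latex
\mathrm{HOMA\mbox{-}IR} \;=\; \frac{\text{fasting insulin}\ (\mu\mathrm{U/mL}) \times \text{fasting glucose}\ (\mathrm{mmol/L})}{22.5}
```

Higher values indicate greater insulin resistance, consistent with the elevated indices reported for the F2-GDM offspring.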
Procedia PDF Downloads 238
111 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data
Authors: S. Jurado, E. Pazmino
Abstract:
Determination of the medial axis of a porous media sample is a non-trivial problem of interest to several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community, comprising researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. The algorithm first identifies the layer of void voxels next to the solid boundaries; an iterative process then removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize execution time and computer memory use, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. A graphical user interface was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input, calculates porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination of 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
Keywords: medial axis, pore-throat distribution, porosity, porous media
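The layer-by-layer 'burn' described above can be sketched with standard array tools. The following minimal Python version (assuming 6-connectivity and a boolean voxel array; this is not the authors' software) records the iteration at which each void voxel burns, so that local maxima, where burn fronts collide, mark medial axis candidates.

```python
# Minimal sketch of the layer-by-layer burn on a 3D binary pore-space array.
import numpy as np
from scipy import ndimage

def burn_numbers(void):
    """void: 3D boolean array, True = pore space. Returns an int array of
    burn layers (0 = solid, 1 = first layer next to solid, ...)."""
    structure = ndimage.generate_binary_structure(3, 1)  # 6-connectivity
    burn_layers = np.zeros(void.shape, dtype=np.int32)
    remaining = void.copy()
    step = 0
    while remaining.any():
        step += 1
        # Voxels surviving one erosion are interior; the rest burn now
        interior = ndimage.binary_erosion(remaining, structure,
                                          border_value=0)
        burn_layers[remaining & ~interior] = step
        remaining = interior
    return burn_layers

# Idealized domain: a spherical pore inside a solid cube
z, y, x = np.ogrid[-20:20, -20:20, -20:20]
void = (x**2 + y**2 + z**2) < 15**2
burn = burn_numbers(void)
print("number of burn layers:", burn.max())  # maxima ridge ~ medial axis
```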
Procedia PDF Downloads 116
110 Literature Review on the Controversies and Changes in the Insanity Defense since the Wild Beast Standard in 1723 until the Federal Insanity Defense Reform Act of 1984
Authors: Jane E. Hill
Abstract:
Many variables led to the changes in the insanity defense from the Wild Beast Standard of 1723 to the Federal Insanity Defense Reform Act of 1984. The insanity defense is used in criminal trials to argue that the defendant is 'not guilty by reason of insanity' because the individual was unable to distinguish right from wrong at the time they were breaking the law. Whether or not the insanity defense applies in criminal court depends on the mental state of the defendant at the time the criminal act was committed. This leads to the question: did the defendant know right from wrong when they broke the law? In 1723, the Wild Beast Test stated that to be exempted from punishment, the individual must be totally deprived of their understanding and memory and 'doth not know what they are doing'. The Wild Beast Test remained the standard in England for over seventy-five years. In 1800, James Hadfield attempted to assassinate King George III, acting only on delusional beliefs; the jury and the judge returned a verdict of not guilty. However, to legally confine him, the Criminal Lunatics Act was enacted: individuals deemed 'criminal lunatics' who received a verdict of not guilty would be taken into custody and not freed into society. In 1843, the M'Naghten test required that the individual did not know the quality or the wrongfulness of the offense at the time they committed the criminal act(s); Daniel M'Naghten was acquitted on grounds of insanity. The M'Naghten test is still a modern formulation of the insanity defense used in many courts today. The Irresistible Impulse Test was adopted in the United States in 1887. It held that offenders who could not control their behavior while committing a criminal act were not deterrable by the criminal sanctions in place; therefore, no purpose would be served by convicting them. Due to criticisms of the latter two contentions, the federal District of Columbia Court of Appeals ruled in 1954 to adopt the 'product test' for insanity, derived from Sir Isaac Ray. The Durham Rule, also known as the 'product test', stated that an individual is not criminally responsible if the unlawful act was the product of mental disease or defect. Therefore, two questions must be asked and answered: (1) did the individual have a mental disease or defect at the time they broke the law? and (2) was the criminal act the product of that disease or defect? The Durham courts failed to clearly define 'mental disease' or 'product', so trial courts had difficulty applying the terms, and the controversy continued until 1972, when the Durham Rule was overturned in most jurisdictions. The American Law Institute then combined the M'Naghten test with the Irresistible Impulse Test, and the United States Congress adopted an insanity test for the federal courts in 1984.
Keywords: insanity defense, psychology law, The Federal Insanity Defense Reform Act of 1984, The Wild Beast Standard in 1723
Procedia PDF Downloads 145
109 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance scores in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked by importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing the predicted latencies of each sub-segment is more accurate than one-step prediction of the whole segment, especially when the latency prediction of the downstream sub-segments is trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
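A minimal sketch of two of the steps above, gradient-boosted prediction of median latency and ranking inputs by mutual information, might look as follows (synthetic data; feature names and sizes are illustrative, not the study's):

```python
# Sketch: XGBoost regression on median latency plus mutual-information
# ranking of the inputs, compared against a "latency 15 minutes ago" baseline.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.gamma(2.0, 5.0, n),   # median latency 15 min ago (informative)
    rng.gamma(2.0, 5.0, n),   # total accumulation (somewhat informative)
    rng.uniform(0, 1, n),     # entrance rate (uninformative here)
])
# Synthetic target: median latency driven mostly by its own recent history
y = 0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 1, n)

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X[:4000], y[:4000])
print("feature importances:", model.feature_importances_)

# Mutual information between each input and the target latency
mi = mutual_info_regression(X[:4000], y[:4000], random_state=0)
print("mutual information:", mi)

# Baseline: simply carry forward the 15-minutes-ago median latency
baseline_mse = np.mean((X[4000:, 0] - y[4000:]) ** 2)
model_mse = np.mean((model.predict(X[4000:]) - y[4000:]) ** 2)
print(f"baseline MSE {baseline_mse:.2f} vs model MSE {model_mse:.2f}")
```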
Procedia PDF Downloads 170
108 Investigation of the IL23R Psoriasis/PsA Susceptibility Locus
Authors: Shraddha Rane, Richard Warren, Stephen Eyre
Abstract:
IL-23 is a pro-inflammatory molecule that signals T cells to release cytokines such as IL-17A and IL-22. Psoriasis is driven by a dysregulated immune response, within which IL-23 is now thought to play a key role. Genome-wide association studies (GWAS) have identified a number of genetic risk loci that support the involvement of IL-23 signalling in psoriasis, in particular a robust susceptibility locus at the gene encoding a subunit of the IL-23 receptor (IL23R) (Stuart et al., 2015; Tsoi et al., 2012). The lead psoriasis-associated SNP rs9988642 is located approximately 500 bp downstream of IL23R but is in tight linkage disequilibrium (LD) with a missense SNP rs11209026 (R381Q) within IL23R (r² = 0.85). The minor (G) allele of rs11209026 is present in approximately 7% of the population and is protective for psoriasis and several other autoimmune diseases, including IBD, ankylosing spondylitis, RA and asthma. The psoriasis-associated missense SNP R381Q causes an arginine-to-glutamine substitution in a region of the IL23R protein between the transmembrane domain and the putative JAK2 binding site in the cytoplasmic portion. This substitution is expected to affect the receptor's surface localisation or signalling ability, rather than IL23R expression. Recent studies have also identified a psoriatic arthritis (PsA)-specific signal at IL23R, thought to be independent of the psoriasis association (Bowes et al., 2015; Budu-Aggrey et al., 2016). The lead PsA-associated SNP rs12044149 is intronic to IL23R and is in LD with likely causal SNPs intersecting promoter and enhancer marks in memory CD8+ T cells (Budu-Aggrey et al., 2016). It is therefore likely that the PsA-specific SNPs affect IL23R function via a different mechanism compared with the psoriasis-specific SNPs. It could be hypothesised that the PsA risk allele located within the IL23R promoter causes an increase in IL23R expression relative to the protective allele; increased expression of IL23R might then lead to an exaggerated immune response. The independent genetic signals identified for psoriasis and PsA in this locus indicate that different mechanisms underlie these two conditions, although both likely affect the function of IL23R. It is very important to further characterise these mechanisms in order to better understand how the IL-23 receptor and its downstream signalling are affected in both diseases. This will help to determine how psoriasis and PsA patients might differentially respond to therapies, particularly IL-23 biologics. To investigate this further, we have developed an in vitro model using CD4 T cells which express either wild-type IL23R and IL12Rβ1 or mutant IL23R (R381Q) and IL12Rβ1. A model expressing different isoforms of IL23R is also under way to investigate the effects on IL23R expression. We propose to further investigate the variants for Ps and PsA and characterise key intracellular processes related to these variants.
Keywords: IL23R, psoriasis, psoriatic arthritis, SNP
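For readers unfamiliar with the r² statistic quoted above, it can be computed from haplotype and allele frequencies; a small illustrative sketch (frequencies invented, not the study's data) follows:

```python
# Sketch of the LD statistic r^2 between two SNPs, from the standard
# definition r^2 = D^2 / (p_a (1-p_a) p_b (1-p_b)), D = p_ab - p_a * p_b.
def ld_r2(p_ab, p_a, p_b):
    """r^2 from haplotype frequency p_ab and allele frequencies p_a, p_b."""
    d = p_ab - p_a * p_b
    return d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Two rare alleles that almost always co-occur give high r^2 (~0.79 here)
print(ld_r2(0.065, 0.07, 0.075))
```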
Procedia PDF Downloads 168
107 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach that studies the economy as a dynamic system of interacting heterogeneous agents, and they are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of the computational load among MPI processes (i.e. CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks), whereas others are dense with random links (e.g. consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e. 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
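The overlap of communication and computation described above follows a standard non-blocking MPI pattern; a minimal mpi4py sketch (all economic structures are placeholders, not the authors' implementation) is shown below. Run with, e.g., mpiexec -n 4 python abem_sketch.py; the same Isend/Irecv pattern generalizes to exchanges along the employer-employee partition.

```python
# Sketch: agents partitioned across MPI ranks; non-blocking messages to a
# neighbouring rank overlap with local computation.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_agents_total = 1_000_000
local_n = n_agents_total // size  # mutually exclusive agent subsets
wealth = np.random.default_rng(rank).gamma(2.0, 100.0, local_n)

# Post a non-blocking exchange of a summary with the neighbouring process
# (a stand-in for, e.g., trades routed through a local "sales outlet")
dest, src = (rank + 1) % size, (rank - 1) % size
outgoing = np.array([wealth.sum()])
incoming = np.empty(1)
reqs = [comm.Isend(outgoing, dest=dest), comm.Irecv(incoming, source=src)]

# Local computation proceeds while the messages are in flight
wealth *= 1.0 + np.random.default_rng(rank + size).normal(0.0, 0.01, local_n)

MPI.Request.Waitall(reqs)
total = comm.allreduce(wealth.sum(), op=MPI.SUM)  # macro aggregate
if rank == 0:
    print(f"{size} processes, total wealth {total:.0f}, "
          f"neighbour inflow {incoming[0]:.0f}")
```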
Procedia PDF Downloads 130
106 Effects of Sexual Activities in Male Athletes Performance
Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Massimo Briamo, Giovanni Abalsamo
Abstract:
Most of the benefits of sport come from the associated physical activity; however, there are also secondary psychological benefits. There are likewise obvious disadvantages: high tension related to failure, injuries, eating disorders, and burnout. Depressive symptoms and illnesses related to anxiety or stress can be prevented, or at least alleviated, through regular activity and exercise. It has been shown that the practice of a sport brings physical benefits but can also have psychological and spiritual benefits. Reduced performance in male individuals has in the past been linked to sexual activity before competitions. The long-standing debate about the impact of sexual activity on sports performance has been controversial in the mainstream media in recent decades. This salacious topic has generated extensive discussion, although high-quality data have been limited. The literature has so far mainly included subjective assessments from surveys. However, such surveys can be skewed, as these assessments are based on individual beliefs, perceptions, and memory. There has been a long discussion over the years, but objective data have been lacking there as well. One reason behind coaches' bans on sexual activity before sporting events may be the belief that abstinence increases frustration, which in turn is channelled into aggressive behavior toward competitors. However, this assumption is not always valid. In fact, depriving an athlete of a normal activity can cause feelings of guilt and loss of concentration, whereas sexual activity during training can promote relaxation and positively influence performance. Although there is a need for scientific research in this area, it seems that sexual intercourse does not decrease performance unless it is accompanied by late-night socialization, loss of sleep, or drinking. Although the effects of sexual engagement on aerobic and strength athletic performance have not been definitively established, most research seems to rule out a direct impact. In order to analyze, with as little bias as possible, whether sexual activity significantly affects athletic performance, we sampled 5 amateur athletes, all male, between 22 and 25 years old. The study was based on the timing of 4 running races for each of the 5 participants. We asked the participants to follow guidelines to avoid sexual activity (sex or masturbation) for the 12 hours before 2 of the 4 competitions, and to engage in it before the remaining 2 races. In doing so, we were able to compare and analyze the impact of activity and abstinence on performance results. We conclude that the effect of sexual behavior on athletic performance needs to be better understood, and more randomized trials with high-quality controls are strongly needed, but the available information suggests that sexual activity the day before a race has no negative effects on performance.
Keywords: sex, masturbation, male performance, soccer
Procedia PDF Downloads 71
105 Nature Manifestations: An Archetypal Analysis of Selected Nightwish Songs
Authors: Suzanne Strauss, Leandi Steenkamp
Abstract:
The Finnish symphonic metal band Nightwish is the brainchild of songwriter and lyricist Tuomas Holopainen, and the band recorded their first demo in 1996. The band has since produced nine full-length studio albums, the most recent being the 2020 album Human. :II: Nature., and has reached massive international success. The band is well known for songs about fantasy and escapism and employs many sonic, visual and branding tools and techniques to communicate these constructs to the audience. Among these is the band's creation of the so-called "Nightwish world and mythology", with a set of recurring characters and narratives which, in turn, creates a psychological anchor and safe space for Nightwish fans around the globe. Nature and the reverence of nature are central themes in Nightwish's self-created mythology. The Swiss psychologist Carl Jung's theory of the collective unconscious identified a mysterious reservoir of psychological constructs derived from ancestral memory and experience, common to all humankind, and distinct from the individual's personal unconscious. Furthermore, he defined archetypes as timeless collective patterns and images that spring forth from the collective unconscious. Archetypes can be actualized when they enter consciousness as images in interaction with the outside world. Archetypal patterns or images can manifest in different ways across world cultures but follow common patterns, also known as archetypal themes and symbols. The Jungian approach to the psyche places great emphasis on nature, positing a direct link between the concept of wholeness and responsible care for nature and the environment. In our proposed paper, we examine, by means of thematic content analysis, how Nightwish makes use of archetypal themes and symbols referring to nature and the environment in selected songs from their ninth full-length album, Human. :II: Nature. Furthermore, we argue that the longing for and reverence of nature in selected Nightwish songs may serve as a type of "social intervention" and as social critique of modern capitalist society. The type of social critique that the band offers is generally connoted intertextually and is not equally explicit in all of their songs. The band uses a unique combination of escapism, fantasy, and nature narratives to inspire a sense of wonder, enchantment, and magic in the listener. In this way, escapism, fantasy, and nature serve as postmodern frames of reference that aim to "re-enchant" the disenchanted and de-spiritualized; re-enchantment here could also refer to spiritual and/or psychological healing and rebirth.
Keywords: archetypes, metal music, nature, Nightwish, social interventions
Procedia PDF Downloads 113
104 Pixel Façade: An Idea for Programmable Building Skin
Authors: H. Jamili, S. Shakiba
Abstract:
Today, one of the main concerns of human beings is facing the unpleasant changes of the environment. Buildings are responsible for a significant share of natural resource consumption and carbon emissions. In such a situation, the thought arises of turning each building into a phenomenon that benefits the environment: a change whereby each building functions as an element that supports its surroundings, so that construction, in addition to answering human needs, is encouraged the way planting a tree is, and is no longer seen as a threat to living beings and the planet. Prospect: Today, various ideas for developing materials that can function smartly are being realized. For instance, programmable materials can respond appropriately to different conditions, with capacities for modification of shape, size, and physical properties, as well as restoration and repair. Studies are in progress with the purpose of designing these materials so that they are easily available; to meet this aim, there is no need for expensive materials or high technologies. In these cases, the physical attributes of materials take on the roles of sensors, wires, and actuators, and the materials themselves become robots; in effect, we experience robotics without robots. In recent decades, AI and advances in technology have dramatically improved the performance of materials. These achievements combine software optimization with physical production techniques such as multi-material 3D printing. These capabilities enable us to program materials to change shape, appearance, and physical properties and to interact with different situations. It is expected that further achievements, such as memory materials and self-learning materials, will be added to the smart-materials family, affordable, available, and of use for a variety of applications and industries. From the architectural standpoint, the building skin is the focus of this research, given the considerable surface area that building skins occupy in urban space. The purpose of this research is to find a way for programmable materials to be used in the building skin with the aim of an effective and positive interaction. A Pixel Façade would be a solution for programming a building skin. The Pixel Façade includes components that contain a series of attributes that help buildings meet their needs according to environmental criteria. A PIXEL combines a series of smart materials and digital controllers. It not only exploits its physical properties, such as controlling the amount of sunlight and heat, but also enhances building performance by providing a list of features depending on situational criteria. The features will vary depending on location and will function differently during the day and across seasons. The primary role of a PIXEL FAÇADE can be defined as filtering pollution (both inside and outside buildings) and providing clean energy, as well as interacting with other PIXEL FAÇADES to coordinate better responses.
Keywords: building skin, environmental crisis, pixel facade, programmable materials, smart materials
Procedia PDF Downloads 89
103 Photoswitchable and Polar-Dependent Fluorescence of Diarylethenes
Authors: Sofia Lazareva, Artem Smolentsev
Abstract:
Fluorescent photochromic materials attract strong interest due to their possible applications in organic photonics, such as optical logic systems, optical memory, and visualizing sensors, as well as in the characterization of polymers and biological systems. In photochromic fluorescence switching systems, the emission of the fluorophore is modulated between 'on' and 'off' via the photoisomerization of photochromic moieties, resulting in effective fluorescence resonance energy transfer (FRET). In the current work, we have studied both the photochromic and fluorescent properties of several diarylethenes. It was found that the coloured forms of these compounds are not fluorescent because of efficient intramolecular energy transfer. Spectral and photochromic parameters of the investigated substances were measured in five solvents of different polarity. The quantum yields of the photochromic transformations A↔B, Φ(A→B) and Φ(B→A), as well as the extinction coefficients of the B isomer, were determined by the kinetic method. It was found that the photocyclization quantum yield of all compounds decreases with increasing solvent polarity. In addition, solvent polarity was revealed to affect the fluorescence significantly. An increase in the solvent dielectric constant was found to result in a strong shift of the emission band from 450 nm (n-hexane) to 550 nm (DMSO and ethanol) for all three compounds. Moreover, the emission, intensive in polar solvents, becomes weak and hardly detectable in n-hexane. The only exception to the described dependence is the abnormally low fluorescence quantum yield in ethanol, presumably caused by the loss of the electron-donating properties of the nitrogen atom due to protonation. The effect of protonation was also confirmed by the addition of concentrated HCl to the solution, resulting in the complete disappearance of the fluorescence band. Excited-state dynamics were investigated by ultrafast optical spectroscopy methods. Kinetic curves of excited-state absorption and fluorescence decays were measured, and the lifetimes of the transient states were calculated from the measured data. The mechanism of the ring-opening reaction was found to be polarity dependent. Comparative analysis of the kinetics measured in acetonitrile and hexane reveals differences in the relaxation dynamics after the laser pulse. Most importantly, two decay processes are present in acetonitrile, whereas only one is present in hexane. This fact supports the assumption, made on the basis of preliminary steady-state experiments, that stabilization of a TICT state occurs in polar solvents. Thus, the results support the hypothesis of a two-channel mechanism of energy relaxation in the studied compounds.
Keywords: diarylethenes, fluorescence switching, FRET, photochromism, TICT state
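For clarity, the quantum yields Φ(A→B) and Φ(B→A) discussed above follow the general textbook definition of a photoreaction quantum yield (the authors' specific kinetic evaluation scheme is not reproduced here):

```latex
% General definition of a photoreaction quantum yield (textbook form)
\[
  \Phi_{A \to B} \;=\;
  \frac{\text{number of molecules converted } A \to B}
       {\text{number of photons absorbed by } A}
\]
```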
Procedia PDF Downloads 680
102 Problems and Solutions in the Application of ICP-MS for Analysis of Trace Elements in Various Samples
Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Áron Soós, Xénia Vágó, Dávid Andrási
Abstract:
In agriculture, for the analysis of elements in different foods and food raw materials, as well as environmental samples, flame atomic absorption spectrometers (FAAS), graphite furnace atomic absorption spectrometers (GF-AAS), inductively coupled plasma optical emission spectrometers (ICP-OES), and inductively coupled plasma mass spectrometers (ICP-MS) are routinely applied. An inductively coupled plasma mass spectrometer (ICP-MS) is capable of analysing 70-80 elements in multielemental mode from a sample volume of 1-5 cm³, with detection limits in the µg/kg-ng/kg (ppb-ppt) concentration range. All these analytical instruments suffer from different physical and chemical interfering effects when analysing the above types of samples: the smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays it is increasingly important to analyse ever smaller concentrations of elements, and among the above instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations. The applied ICP-MS instrument also has Collision Cell Technology (CCT). Using CCT mode, certain elements have better (smaller) detection limits, by 1-3 orders of magnitude, compared to a normal ICP-MS analytical method; the CCT mode gives better detection limits mainly for the analysis of selenium, arsenic, germanium, vanadium, and chromium. To elaborate an analytical method for trace elements with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) physical interferences; 2) spectral interferences (elemental and molecular isobaric); 3) the effect of easily ionisable elements; 4) memory interferences. When analysing food and food raw materials and environmental samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices having different evaporation and nebulization effectiveness, as well as different carbon contents. In our research work, the effects of different water-soluble compounds and of various quantities of carbon content (as sample matrix) on the changes in intensity of the applied elements were examined, so that we could finally find ways to decrease or eliminate the error in the analyses of the applied elements (Cr, Co, Ni, Cu, Zn, Ge, As, Se, Mo, Cd, Sn, Sb, Te, Hg, Pb, Bi). To analyse these elements in the above samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying the collision cell technique (CCT). The extent of the interfering effect of the carbon content depends on the type of compound: the carbon content significantly affects the measured concentrations (intensities) of the above elements, which can be corrected using different internal standards.
Keywords: elements, environmental and food samples, ICP-MS, interference effects
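As an illustration of the internal-standard correction mentioned above, a minimal sketch (illustrative numbers only; not the authors' calibration procedure) is:

```python
# Sketch of internal-standard correction for matrix effects in ICP-MS:
# the analyte signal is rescaled by the recovery of a spiked internal
# standard measured in the same solution.
def correct_intensity(analyte_counts, is_counts_measured, is_counts_expected):
    """Rescale analyte counts by the internal-standard recovery ratio."""
    recovery = is_counts_measured / is_counts_expected
    return analyte_counts / recovery

# Example: a carbon-rich matrix suppresses the internal standard to 80%
# of its expected signal, so the analyte counts are scaled up accordingly.
print(correct_intensity(12_000, 40_000, 50_000))  # -> 15000.0
```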
Procedia PDF Downloads 504