Search results for: memory forensics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1217

137 The Impact of Artificial Intelligence on Agricultural Machines and Plant Nutrition

Authors: Kirolos Gerges Yakoub Gerges

Abstract:

Autonomous agricultural machines act in stochastic surroundings and therefore should be capable of perceiving the surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular deep learning. Deep convolutional neural networks excel at labeling and perceiving color images, and since the cost of RGB cameras is low, the hardware cost of accurate perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to perform real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g. human, animal, sky, road, field, shelterbelt and obstacle), and the ability to provide accurate real-time perception of agricultural environments is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses in the image. As the network is still being designed and optimized, only a qualitative analysis of the method was complete at the abstract submission deadline. Following this deadline, the finalized design is quantitatively evaluated on 20 annotated grass-mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with respect to accuracy and speed. It is possible to provide cost-efficient perceptive capabilities based on semantic segmentation for autonomous agricultural machines.
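
A minimal sketch of the superclass remapping step described above, assuming per-pixel class indices in the 0-399 range; the fine-class-to-superclass assignments below are hypothetical stand-ins, not the authors' actual mapping.

```python
import numpy as np

# Hypothetical superclass labels from the abstract; the fine-class ranges
# assigned below are illustrative only.
SUPERCLASSES = ["human", "animal", "sky", "road", "field", "shelterbelt", "obstacle"]

# 400-entry lookup table: fine class index -> superclass index.
lut = np.full(400, SUPERCLASSES.index("obstacle"), dtype=np.uint8)  # assumed default
lut[0:10] = SUPERCLASSES.index("human")       # assumed range for person-like classes
lut[10:60] = SUPERCLASSES.index("animal")
lut[60:80] = SUPERCLASSES.index("sky")

def remap_segmentation(pred):
    """Remap an (H, W) map of fine class indices to superclass indices."""
    return lut[pred]  # numpy fancy indexing applies the table per pixel

# Example: a tiny 2x3 "segmentation" of fine-class predictions.
pred = np.array([[0, 61, 15], [399, 5, 70]])
print(remap_segmentation(pred))
```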

Keywords: autonomous agricultural machines, deep learning, safety, visual perception

Procedia PDF Downloads 25
136 The Creation of Calcium Phosphate Coating on Nitinol Substrate

Authors: Kirill M. Dubovikov, Ekaterina S. Marchenko, Gulsharat A. Baigonakova

Abstract:

NiTi alloys are widely used as implants in medicine due to their unique properties such as superelasticity, the shape memory effect and biocompatibility. However, despite these properties, one of the major problems is the release of nickel after prolonged use in the human body under dynamic stress. This occurs due to oxidation and cracking of NiTi implants, which provokes nickel segregation from the matrix to the surface and its release into living tissues. As we know, nickel is a toxic element and can cause cancer, allergies, etc. One of the most popular ways to solve this problem is to create a corrosion-resistant coating on NiTi. There are many coatings of this type, but not all of them have good biocompatibility, which is very important for medical implants. Coatings based on calcium phosphate phases have excellent biocompatibility because Ca and P are the main constituents of the mineral part of human bone. This fact suggests that a Ca-P coating on NiTi can enhance osteogenesis and accelerate the healing process. Therefore, the aim of this study is to investigate the structure of a Ca-P coating on a NiTi substrate. Plasma-assisted radio frequency (RF) sputtering was used to obtain this film. This method was chosen because it allows the crystallinity and morphology of the Ca-P coating to be controlled by the sputtering parameters, and it allowed us to obtain three different NiTi samples with Ca-P coatings. XRD, AFM, SEM and EDS were used to study the composition, structure and morphology of the coating phase. Scratch tests were carried out to evaluate the adhesion of the coating to the substrate. Wettability tests were used to investigate the hydrophilicity of the different coatings and to suggest which of them had better biocompatibility. XRD showed that the coatings of all samples were hydroxyapatite, while the matrix was represented by TiNi intermetallic compounds such as B2, Ti2Ni and Ni3Ti. SEM shows that only one sample, obtained after three hours of sputtering, has a dense and defect-free coating. Wettability tests show that the sample with the densest coating has the lowest contact angle of 40.2° and the largest surface free energy of 57.17 mJ/m², which is mostly dispersive. A scratch test was carried out to investigate the adhesion of the coating to the surface, and it showed that all coatings were removed by a cohesive mechanism. However, at a load of 30 N, the indenter reached the substrate in two out of three samples; the exception was the sample with the densest coating. It was concluded that the most promising sputtering mode was the third, which consisted of three hours of deposition. This mode produced a defect-free Ca-P coating with good wettability and adhesion.

Keywords: biocompatibility, calcium phosphate coating, NiTi alloy, radio frequency sputtering

Procedia PDF Downloads 70
135 Referencing Anna: Findings From Eye-tracking During Dutch Pronoun Resolution

Authors: Robin Devillers, Chantal van Dijk

Abstract:

Children face ambiguities in everyday language use. Ambiguity in pronoun resolution in particular can be challenging for children, whereas adults can rapidly identify the antecedent of a pronoun. Two main factors underlie this process, namely the accessibility of the referent and the syntactic cues of the pronoun. Within 200 ms, adults have integrated the accessibility and the syntactic constraints, while relieving cognitive effort by considering contextual cues. As children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues and syntactic information. As such, they may fail to identify the correct referent and possibly fixate more on the competitor in comparison to adults. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aim is to a) examine the extent to which 7- to 10-year-old children are able to utilise discourse and syntactic information during online and offline sentence processing and b) analyse the contribution of individual factors, including age, working memory, condition and vocabulary. Adult and child participants are presented with filler items and while-clauses, and the latter follow a particular structure: ‘Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.’ This sentence illustrates the ambiguous situation, as it is unclear whether ‘she’ refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie is substituted by a boy, such as ‘Peter’; the pronoun in the second sentence then unambiguously refers to one of the characters due to the syntactic constraints of the pronoun. Children’s and adults’ responses were measured by means of a visual world paradigm. This paradigm consisted of two characters, of which one was the referent (the target) and the other the competitor. A sentence was presented and followed by a question, which required the participant to choose which character was the referent. The paradigm thus yields an online (fixations) and an offline (accuracy) score. The findings will be analysed using Generalised Additive Mixed Models, which allow for a thorough estimation of the individual variables. These findings will contribute to the scientific literature in several ways: firstly, while-clauses have not been studied much and their processing has not yet been characterized; moreover, online pronoun resolution has not been investigated much in either children or adults, so this study will contribute to both the adult and the child pronoun resolution literature; lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study adds to the range of languages in which it has been investigated.

Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development

Procedia PDF Downloads 97
134 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments

Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy

Abstract:

Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage through building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed and defined, identifying critical success factors for conducting KM within numerous industries ranging from scientific to business, and for organization scales from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges which cannot be directly treated using the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares the significance of these barriers against the four KM pillars of organization, technology, leadership, and learning for HDE teams. HDE teams suffer from restrictions in knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreement execution for designs), types of knowledge, complexity of the knowledge to be shared, and knowledge seeker expertise. As KM evolved leveraging information technology (IT) and web-based tools and approaches from Web 1.0 to Enterprise 2.0, KM may also seek to leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers of the technical teams. The research will statistically test the hypothesis that KM barriers for HDE teams affect the general set of expected benefits of a KM system identified through previous research. If correlations can be identified, then generalizations of success factors and approaches may also be garnered for HDE teams. Expert elicitation will be conducted using a questionnaire hosted on the internet and delivered to a panel of experts including engineering managers, principal and lead engineers, senior systems engineers, and knowledge management experts. The feedback to the questionnaire will be processed using analysis of variance (ANOVA) to identify and rank statistically significant barriers of HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within the restricted environments of HDE teams.
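
As a rough illustration of the planned analysis step, the sketch below runs a one-way ANOVA over hypothetical questionnaire ratings grouped by KM pillar; the data and grouping are invented for illustration, not the study's results.

```python
from scipy.stats import f_oneway

# Hypothetical Likert-scale ratings (1-5) of one KM barrier's severity,
# grouped by the KM pillar respondents associated it with.
organization = [4, 5, 4, 3, 5]
technology = [2, 3, 3, 2, 4]
leadership = [4, 4, 5, 5, 4]
learning = [3, 2, 3, 3, 2]

f_stat, p_value = f_oneway(organization, technology, leadership, learning)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one pillar differs significantly on this barrier.")
```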

Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing

Procedia PDF Downloads 279
133 Individual Cylinder Ignition Advance Control Algorithms of the Aircraft Piston Engine

Authors: G. Barański, P. Kacejko, M. Wendeker

Abstract:

The impact of the ignition advance control algorithms of the ASz-62IR-16X aircraft piston engine on the combustion process is presented in this paper. This aircraft engine is a nine-cylinder 1000 hp engine with a special electronic ignition control system. The engine has two spark plugs per cylinder with an ignition advance angle dependent on load and the rotational speed of the crankshaft. Accordingly, in most cases, these angles are not optimal for the power generated. The scope of this paper is focused on developing algorithms to control the ignition advance angle in an electronic ignition control system of the engine. For this type of engine, i.e. a radial engine, the ignition advance angle should be controlled independently for each cylinder because of the design of such an engine and its crankshaft system. The ignition advance angle is controlled in an open-loop way, which means that the control signal (i.e. the ignition advance angle) is determined according to previously developed maps, i.e. recorded tables of the correlation between the ignition advance angle and engine speed and load. Load can be measured by engine crankshaft speed or intake manifold pressure. Due to the limited memory of the controller, the impact of other independent variables (such as cylinder head temperature or knock) on the ignition advance angle is given as a series of one-dimensional arrays known as corrective characteristics. The specified value of the ignition advance angle combines the value calculated from the primary characteristics with several correction factors calculated from the corrective characteristics. Individual cylinder control can proceed in line with certain indicators determined from the pressure registered in a combustion chamber. Control is assumed to be based on the following indicators: maximum pressure, maximum pressure angle, and indicated mean effective pressure. Additionally, a knocking combustion indicator was defined. Individual control can be applied to a single set of spark plugs only, which results from two fundamental ideas behind the design of the control system: the two ignition control systems operate independently of each other when running simultaneously, and the entire individual control is performed for the front spark plug only, the rear spark plug being controlled with a fixed (or specific) offset relative to the front one or from a reference map. The developed algorithms will be verified by simulation and engine test stand experiments. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
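
The map-plus-corrections scheme described above can be sketched as follows; the axis breakpoints, map values, and temperature correction are hypothetical stand-ins for illustration, not the ASz-62IR-16X calibration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical primary characteristic: ignition advance [deg BTDC] indexed by
# engine speed [rpm] and intake manifold pressure [kPa]; values are illustrative.
speed_axis = np.array([1000, 1500, 2000, 2500])
load_axis = np.array([40, 60, 80, 100])
primary_map = np.array([
    [18, 20, 22, 24],
    [20, 22, 24, 26],
    [22, 24, 26, 28],
    [24, 26, 28, 30],
], dtype=float)
primary = RegularGridInterpolator((speed_axis, load_axis), primary_map)

# One-dimensional corrective characteristic: cylinder head temperature [deg C]
# -> advance correction [deg], stored as a small array to fit controller memory.
cht_axis = np.array([60, 100, 140, 180])
cht_corr = np.array([2.0, 0.0, -1.0, -3.0])

def ignition_advance(speed_rpm, load_kpa, cht_c):
    """Primary map value plus the temperature correction factor."""
    base = float(primary([[speed_rpm, load_kpa]])[0])
    corr = float(np.interp(cht_c, cht_axis, cht_corr))
    return base + corr

print(ignition_advance(1800, 75, 150))  # interpolated base minus hot-head retard
```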

Keywords: algorithm, combustion process, radial engine, spark plug

Procedia PDF Downloads 291
132 Exploring the Psychosocial Brain: A Retrospective Analysis of Personality, Social Networks, and Dementia Outcomes

Authors: Felicia N. Obialo, Aliza Wingo, Thomas Wingo

Abstract:

Psychosocial factors such as personality traits and social networks influence cognitive aging and dementia outcomes both positively and negatively. The inherent complexity of these factors makes defining the underlying mechanisms of their influence difficult; however, exploring their interactions affords promise in the field of cognitive aging. The objective of this study was to elucidate some of these interactions by determining the relationship between social network size and dementia outcomes and by determining whether personality traits mediate this relationship. The longitudinal Alzheimer’s Disease (AD) database provided by Rush University’s Religious Orders Study/Memory and Aging Project was utilized to perform retrospective regression and mediation analyses on 3,591 participants. Participants who were cognitively impaired at baseline were excluded, and analyses were adjusted for age, sex, common chronic diseases, and vascular risk factors. Dementia outcome measures included cognitive trajectory, clinical dementia diagnosis, and postmortem beta-amyloid plaque (AB) and neurofibrillary tangle (NT) accumulation. Personality traits included agreeableness (A), conscientiousness (C), extraversion (E), neuroticism (N), and openness (O). The results show a positive correlation between social network size and cognitive trajectory (p = 0.004) and a negative relationship between social network size and the odds of dementia diagnosis (p = 0.024, odds ratio (OR) = 0.974). Only neuroticism mediates the positive relationship between social network size and cognitive trajectory (p < 2e-16). Agreeableness, extraversion, and neuroticism all mediate the negative relationship between social network size and dementia diagnosis (p = 0.098, p = 0.054, and p < 2e-16, respectively). All personality traits are independently associated with dementia diagnosis (A: p = 0.016, OR = 0.959; C: p = 0.000007, OR = 0.945; E: p = 0.028, OR = 0.961; N: p = 0.000019, OR = 1.036; O: p = 0.027, OR = 0.972). Only conscientiousness and neuroticism are associated with postmortem AD pathologies; specifically, conscientiousness is negatively associated (AB: p = 0.001, NT: p = 0.025) and neuroticism is positively associated with pathologies (AB: p = 0.002, NT: p = 0.002). These results support the study’s objectives, demonstrating that social network size and personality traits are strongly associated with dementia outcomes, particularly the odds of receiving a clinical diagnosis of dementia. Personality traits interact significantly and beneficially with social network size to influence cognitive trajectory and future dementia diagnosis. These results reinforce previous literature linking social network size to dementia risk and provide novel insight into the differential roles of individual personality traits in cognitive protection.

Keywords: Alzheimer’s disease, cognitive trajectory, personality traits, social network size

Procedia PDF Downloads 126
131 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) as well as better human social interactions with smart technologies. The FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies the human expression into various categories such as angry, disgust, fear, happy, sad, surprise, and neutral. Such a system requires intensive research to address issues with human diversity, various unique human expressions, and the variety of human facial features due to age differences. These issues generally affect the ability of the FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy due to their inefficiency in extracting significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational costs of a DL-based FER system are often disregarded in exchange for higher accuracy results. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to enable lower computational costs and reduce memory usage, respectively. To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques into the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient and real-time FER system.
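
As one concrete example of the network optimization methods named above, the sketch below applies post-training quantization to an Xception model via TensorFlow Lite; this is an assumed workflow to cut memory usage, not the authors' exact pruning and compiler pipeline.

```python
import tensorflow as tf

# Minimal sketch: an untrained Xception classifier for 7 emotion classes
# (assumed class count); in practice the trained FER weights would be loaded.
model = tf.keras.applications.Xception(weights=None, classes=7)

# Post-training quantization: shrinks the stored weights to 8-bit,
# reducing model size and memory footprint for embedded inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("fer_xception_quant.tflite", "wb") as f:
    f.write(tflite_model)
```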

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 118
130 Commodifying Things Past: Comparative Study of Heritage Tourism Practices in Montenegro and Serbia

Authors: Jovana Vukcevic, Sanja Pekovic, Djurdjica Perovic, Tatjana Stanovcic

Abstract:

This paper presents a critical inquiry into the role of uncomfortable heritage in nation branding, with a particular focus on the specificities of the politics of memory, forgetting and revisionism in post-communist post-Yugoslavia. It addresses legacies of unwanted, ambivalent or unacknowledged pasts and the different strategies employed by the former Yugoslav states and private actors in “rebranding” their heritage, ensuring its preservation while re-contextualizing the narrative of the past through contemporary tourism practices. It questions the interplay between nostalgia, heritage and the market, and the role of heritage in polishing the history of totalitarian and authoritarian regimes in the Balkans. It argues that in post-socialist Yugoslavia, the necessity to limit associations with the former ideology and the use of the commercial brush in shaping a marketable version of the past instigated the emergence of profit-oriented heritage practices. Building on that argument, the paper addresses these issues as “commodification” and “disneyfication” of the Balkans’ ambivalent heritage, contributing to the analysis of changing forms of memorialisation and heritagization practices in Europe. It questions the process of ‘coming to terms with the past’ through marketable forms of heritage tourism, blurring the boundary between market-driven nostalgia and state-imposed heritage policies. In order to analyse the plurality of ways of dealing with the controversial, ambivalent and unwanted heritage of dictatorships in the Balkans, the paper considers two prominent examples of heritage commodification in Serbia and Montenegro and the re-appropriation of those narratives for nation branding purposes. The first is the story of Tito’s Blue Train, a landmark of the socialist past and symbol of Yugoslavia which is nowadays used for birthday parties and marriage celebrations, while the second emphasises the unusual business arrangement turning the Mamula fortress, a concentration camp during the Second World War, into a luxurious Mediterranean resort. Questioning how the ‘uneasy’ past was acknowledged and embedded into official heritage institutions and tourism practices, the study examines the changing relation towards the legacies of dictatorships, inviting us to rethink the economic models of things past. Analysis of these processes should contribute to a better understanding of the new mnemonic strategies and (converging?) ways of ‘doing’ the past in Europe.

Keywords: commodification, heritage tourism, totalitarianism, Serbia, Montenegro

Procedia PDF Downloads 249
129 Building Exoskeletons for Seismic Retrofitting

Authors: Giuliana Scuderi, Patrick Teuffel

Abstract:

The proven vulnerability of the existing social housing building heritage to natural or induced earthquakes requires the development of new design concepts and conceptual methods to preserve materials and objects while at the same time providing new performances. An integrated intervention between civil engineering, building physics and architecture can convert social housing districts from a critical part of the city into a strategic resource for revitalization. Referring to biomimicry principles, the present research proposes an analogy with the exoskeleton of the insect: an external, light and resistant armour whose role is to protect the internal organs from potentially dangerous external inputs. In the same way, a “building exoskeleton”, acting from the outside of the building as an enclosing cage, can restore, protect and support the existing building, assuming a complex set of roles, from the structural to the thermal, from the aesthetic to the functional. This study evaluates the structural efficiency of shape memory alloy devices (SMADs) connecting the “building exoskeleton” with the existing structure to be rehabilitated, in order to prevent the out-of-plane collapse of walls and to passively dissipate the seismic energy, with a calibrated operability in relation to the intensity of the horizontal loads. The two case studies of a masonry structure and of a masonry structure with a concrete frame are considered, and for each case a theoretical social housing building is exposed to earthquake forces to evaluate its structural response with or without SMADs. The two typologies are modelled with the finite element program SAP2000, and they are respectively defined through a “frame model” and a “diagonal strut model”. In the same software two types of SMADs, called the 00-10 SMAD and the 05-10 SMAD, are defined, and non-linear static and dynamic analyses, namely pushover analysis and time history analysis, are performed to evaluate the seismic response of the building. The effectiveness of the devices in limiting the control joint displacements proved higher in one direction, leading to the consideration of a possible calibrated use of the devices in the different walls of the building. The results also show a higher efficiency of the 00-10 SMADs in controlling the interstorey drift, but at the same time the necessity to improve the hysteretic behaviour in order to maximise the passive dissipation of the seismic energy.

Keywords: adaptive structure, biomimetic design, building exoskeleton, social housing, structural envelope, structural retrofitting

Procedia PDF Downloads 419
128 Into the Dreamweaver’s World of the Mandaya and the Tboli: From Folklore to the Woven Fabric

Authors: Genevieve Jorolan Quintero

Abstract:

In Mindanao, the southern island of the Philippines, the provinces of Davao Oriental and South Cotabato are home to indigenous communities known for their dream weavers. Davao Oriental is home to the Mandaya, while Lake Sebu in South Cotabato is home to the Tboli. The dream weavers are mostly women who have continued the tradition of weaving, a spiritual practice of handicraft embodying the beliefs of the community. It is believed that a weaver is guided by the Tagamaling, the nature spirit in Mandaya mythology, or by Fu Dalu, the spirit of the abaca among the Tboli. In a dream, the Tagamaling or Fu Dalu reveals to the weaver the design or pattern of the abaca-woven cloth, called dagmay among the Mandaya and tnalak among the Tboli. The weaver then undertakes the production of this nature-spirit-inspired fabric based on her memory of the dream. This interaction between the spirit world and the human world inspired the theme of the short story Loom of Dreams, published in 2015 by Kritika Kultura, an international peer-reviewed journal of language and literary/cultural studies of the Ateneo de Manila University in the Philippines. In Lake Sebu, a collection of the legendary tnalak with various designs is preserved by the cultural advocate and tnalak collector Reden S. Ulo; about a hundred tnalak designs are housed in a mini museum. The paper discusses how the dagmay and the tnalak of these two Philippine indigenous communities embody their folklore and cultural heritage. The specific objectives are: 1. to describe the role of the dream weavers in the Mandaya and Tboli communities in the Philippines; 2. to analyse how folklore influences the designs on the woven fabric, the dagmay and the tnalak; and 3. to discuss how dream weaving helps preserve cultural legacy. Ethnography was used in the conduct of this research. Specifically, the following data collection methods were employed: 1. a series of visits to the Mandaya and Tboli communities; 2. face-to-face interviews with respondents from the communities; and 3. the recording of interviews with the knowledge-bearers and material culture keepers from both communities, the narratives of which served as the basis for the data analysis. The influence of folklore on the culture and arts of the indigenous communities is significantly evident in the designs of the dagmay and the tnalak. As the dream weavers continue to weave the dagmay and the tnalak, this cultural legacy will continue to prosper and be preserved for posterity.

Keywords: dream weavers, Mandaya, Mindanao, Philippine folklore, Tboli

Procedia PDF Downloads 97
127 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We study the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
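
A minimal sketch of the coded-computation idea described above: the row-blocks of X are linearly precoded with an MDS (Vandermonde) code so that the master recovers W = XY from any k of n workers; the privacy and PIR layers of the proposed PSGPD scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 5                       # k data blocks, n workers (tolerates n - k stragglers)
X = rng.standard_normal((6, 4))   # row count divisible by k
Y = rng.standard_normal((4, 2))

blocks = np.split(X, k)                      # X = [X1; X2; X3]
alphas = np.arange(1, n + 1)                 # distinct evaluation points
G = np.vander(alphas, k, increasing=True)    # n x k Vandermonde generator matrix

# Encode: worker i gets sum_j G[i, j] * X_j and multiplies it by Y.
coded_products = [sum(G[i, j] * blocks[j] for j in range(k)) @ Y
                  for i in range(n)]

# Decode from any k finished workers (say workers 0, 2, 4 returned first).
done = [0, 2, 4]
coeffs = np.linalg.inv(G[done, :])           # invertible for distinct alphas
decoded = [sum(coeffs[j, i] * coded_products[w] for i, w in enumerate(done))
           for j in range(k)]
assert np.allclose(np.vstack(decoded), X @ Y)  # recovered despite 2 stragglers
```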

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 121
126 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception

Authors: Păcurar Diana Istina

Abstract:

The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being looked at, are the basis of the practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures, some conceived from the beginning as art forms (i.e., the text), others represented by simple, common objects (e.g., scenographic elements), which, if brought together, can trigger a certain aesthetic perception. The audience is delivered, by the team involved in the performance, a series of auditory and visual signs with which they interact. It is necessary to explain some notions about the physiological support of the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center of external and internal stimuli, permanently processes the information received, but even if the information is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the “correctness” of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether we refer to visual emotions or auditory emotions, involves the integration of three areas of interest: valence, arousal and context control. In this context, superior human cognitive processes (memory, interpretation, learning, attribution of meanings, etc.) help trigger the mechanism of anticipation and, no less important, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental in the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, but also to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis centred on the percentage of signs used in a theater performance.

Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception

Procedia PDF Downloads 88
125 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing

Authors: Tolulope Aremu

Abstract:

This paper is based on the idea of using deep learning methodology for optimizing production yield by tuning a few key process parameters in a manufacturing environment. The study explicitly addressed how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNN). These models were implemented using Python-based frameworks: TensorFlow and Keras. The targets of the research are precision molding processes in which temperature ranges between 150°C and 220°C, pressure ranges between 5 and 15 bar, and material flow rate ranges between 10 and 50 kg/h; these are critical parameters that have a great effect on yield. A dataset of 1 million production cycles collected over five continuous years was considered, with detailed logs showing the exact parameter settings and yield output. The LSTM model captured time-dependent trends in the production data, while the CNN analyzed the spatial correlations between parameters. The models were trained in a supervised manner, using an MSE loss function optimized with the Adam optimizer. After a total of 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Results indicated an increase in production yield of 12% compared with the traditional RSM and DOE methods. In addition, the error margin was reduced by 8%, yielding consistently high-quality products from the deep learning models. The monetary value amounted to around $2.5 million annually, saved from material waste, energy consumption, and equipment wear as a result of implementing the optimized process parameters. The system was deployed in an industrial production environment on a hybrid cloud setup: Microsoft Azure for data storage, with model training and deployment performed on Google Cloud AI. Real-time monitoring of the process and automatic tuning of parameters depend on this cloud infrastructure. In summary, deep learning models, especially those employing LSTM and CNN, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
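
A minimal Keras sketch of the LSTM branch described above, trained with MSE loss and the Adam optimizer; the synthetic data, window length, and layer sizes are assumptions for illustration, not the paper's configuration or results.

```python
import numpy as np
import tensorflow as tf

# Synthetic process logs: temperature [deg C], pressure [bar], flow rate [kg/h],
# drawn from the parameter ranges named in the abstract; yield is a toy target.
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.uniform(150, 220, 1000),
    rng.uniform(5, 15, 1000),
    rng.uniform(10, 50, 1000),
]).astype("float32")
y = (0.4 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]).astype("float32")

# Windows of 10 consecutive cycles so the LSTM can model time-dependent trends.
seq = np.stack([X[i:i + 10] for i in range(len(X) - 10)])
target = y[10:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10, 3)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted yield for the next cycle
])
model.compile(optimizer="adam", loss="mse")  # MSE loss with Adam, as in the paper
model.fit(seq, target, epochs=5, verbose=0)
```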

Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving

Procedia PDF Downloads 25
124 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers

Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver

Abstract:

Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces, as well as cholesterol molecules, have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately reaching cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to microsecond or nanosecond scales. As such, the development of future models that attempt to explain faster timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast timescale energy transfer mechanisms occurring through interfacial water. The study's dataset includes six distinct phospholipids and cholesterol. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
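
A minimal sketch of the binary classification step, assuming ten geometric features per optimized structure; the feature values below are synthetic stand-ins for the DFT-derived descriptors, not the study's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Label 1 = cholesterol (charge acceptor), 0 = phospholipid (charge donor).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))             # 10 geometric features per molecule
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy separable labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```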

Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN

Procedia PDF Downloads 71
123 Microglia Activation in Animal Model of Schizophrenia

Authors: Esshili Awatef, Manitz Marie-Pierre, Eßlinger Manuela, Gerhardt Alexandra, Plümper Jennifer, Wachholz Simone, Friebe Astrid, Juckel Georg

Abstract:

Maternal immune activation (MIA) resulting from maternal viral infection during pregnancy is a known risk factor for schizophrenia. The neural mechanisms by which maternal infections increase the risk for schizophrenia remain unknown, although the prevailing hypothesis argues that an activation of the maternal immune system induces changes in the maternal-fetal environment that might interact with fetal brain development. This may lead to an activation of fetal microglia inducing long-lasting functional changes in these cells. Based on post-mortem analyses showing an increased number of activated microglial cells in patients with schizophrenia, it can be hypothesized that these cells contribute to disease pathogenesis and may be actively involved in the gray matter loss observed in such patients. In the present study, we hypothesize that prenatal treatment with the inflammatory agent Poly(I:C) during embryogenesis contributes to microglial activation in the offspring, which may therefore represent a contributing factor to the pathogenesis of schizophrenia and underlines the need for new pharmacological treatment options. Pregnant rats were treated with a single intraperitoneal injection of Poly(I:C) or saline on gestation day 17. Brains of control and Poly(I:C) offspring were removed and cut into 20-μm-thick coronal sections using a cryostat. Brain slices were fixed and immunostained with Iba1 antibody. Subsequently, Iba1 immunoreactivity was detected using a goat anti-rabbit secondary antibody. The sections were viewed and photographed under a microscope. The immunohistochemical analysis revealed increases in microglia cell number in the prefrontal cortex in offspring of Poly(I:C)-treated rats as compared to controls injected with NaCl. However, no significant differences in microglia activation were observed in the cerebellum among the groups. Prenatal immune challenge with Poly(I:C) was thus able to induce long-lasting changes in the offspring brains, leading to higher activation of microglia cells in the prefrontal cortex, a brain region critical for many higher brain functions, including working memory and cognitive flexibility, which might be implicated in possible changes in cortical neuropil architecture in schizophrenia. Further studies will be needed to clarify the association between microglial cell activation and schizophrenia-related behavioral alterations.

Keywords: Microglia, neuroinflammation, PolyI:C, schizophrenia

Procedia PDF Downloads 415
122 Envisioning The Future of Language Learning: Virtual Reality, Mobile Learning and Computer-Assisted Language Learning

Authors: Jasmin Cowin, Amany Alkhayat

Abstract:

This paper concentrates on a comparative analysis of the advantages and limitations of using digital learning resources (DLRs). The DLRs covered are Virtual Reality (VR), Mobile Learning (M-learning) and Computer-Assisted Language Learning (CALL), together with its subset, Mobile Assisted Language Learning (MALL), in language education. In addition, best practices for language teaching and the application of established language teaching methodologies such as Communicative Language Teaching (CLT), the audio-lingual method, or community language learning are explored. Education has changed dramatically since the onset of the pandemic. Traditional face-to-face education was disrupted on a global scale. The rise of distance learning brought new digital tools to the forefront, especially web conferencing tools, digital storytelling apps, test authoring tools, and VR platforms. Language educators raced to vet, learn, and implement multiple technology resources suited for language acquisition. Yet questions remain on how to harness new technologies, digital tools, and their ubiquitous availability while using established methods and methodologies in language learning paired with best teaching practices. In M-learning, language learners employ portable computing devices such as smartphones or tablets. CALL is a language teaching approach that uses computers and other technologies to present, reinforce, and assess language materials to be learned, or to create environments where teachers and learners can meaningfully interact. In VR, a computer-generated simulation enables learner interaction with a 3D environment via screen, smartphone, or a head-mounted display. Research supports that VR for language learning is effective in terms of exploration, communication, engagement, and motivation. Students are able to engage through role-play activities, interact with 3D objects, and take part in activities such as field trips. VR lends itself to group language exercises in the classroom with target language practice in an immersive, virtual environment. Students, teachers, schools, language institutes, and institutions benefit from specialized support to help them acquire second language proficiency and content knowledge that builds on their cultural and linguistic assets. Through the purposeful application of different language methodologies and teaching approaches, language learners can not only make cultural and linguistic connections in DLRs but also practice grammar drills, play memory games or flourish in authentic settings.

Keywords: language teaching methodologies, computer-assisted language learning, mobile learning, virtual reality

Procedia PDF Downloads 236
121 Development of Taiwanese Sign Language Receptive Skills Test for Deaf Children

Authors: Hsiu Tan Liu, Chun Jung Liu

Abstract:

Developing a sign language receptive skills test serves multiple purposes: for example, such a test can be an important tool for education and for understanding the sign language ability of deaf children. No test is available for these purposes in Taiwan. Through expert discussions and with reference to the standardized Taiwanese Sign Language Receptive Test for adults and adolescents, the framework of the Taiwanese Sign Language Receptive Skills Test (TSL-RST) for deaf children was developed, and the items were then designed. After multiple rounds of pre-trials, discussions and corrections, TSL-RST was finalized; it can be conducted and scored online. Thirty-three deaf children from all three deaf schools in Taiwan agreed to be tested. Through item analysis, items with a good discrimination index and a fair difficulty index were selected. Psychometric indices of reliability and validity were established, and a regression formula was derived that can predict the sign language receptive skills of deaf children. The main results of this study are as follows. (1) TSL-RST includes three sub-tests of vocabulary comprehension, syntax comprehension and paragraph comprehension, with 21, 20, and 9 items, respectively. (2) TSL-RST can be conducted individually online. The sign language ability of deaf students can be calculated quickly and objectively, so that they receive feedback and results immediately, which can contribute to both teaching and research. Most subjects can complete the test within 25 minutes, and during the test procedure they can answer the questions without relying on their reading ability or memory capacity. (3) The vocabulary comprehension sub-test is the easiest, syntax comprehension is harder, and paragraph comprehension is the hardest. Each of the three sub-tests, as well as the whole test, shows good item discrimination. (4) The psychometric indices are good, including internal consistency reliability (Cronbach’s α coefficient), test-retest reliability, split-half reliability, and content validity. Sign language ability is significantly related to non-verbal IQ, teachers’ ratings of the students’ sign language ability, and students’ self-ratings of their own sign language ability. The results showed that higher-grade students perform better than lower-grade students, and students with deaf parents perform better than those with hearing parents. These results give TSL-RST good discriminant validity. (5) The predictors of the sign language ability of primary school deaf students are age and the number of years since starting to learn sign language. The results of this study suggest that TSL-RST can effectively assess deaf students’ sign language ability. The study also proposes a model for developing sign language tests.
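
As an illustration of the internal-consistency index reported above, here is a short sketch computing Cronbach's α from an examinee-by-item score matrix; the scores shown are hypothetical, not TSL-RST data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 0/1 scoring of 5 examinees on 4 items.
scores = [[1, 1, 0, 1],
          [1, 0, 0, 0],
          [1, 1, 1, 1],
          [0, 0, 0, 1],
          [1, 1, 1, 0]]
print(round(cronbach_alpha(scores), 3))
```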

Keywords: comprehension test, elementary school, sign language, Taiwan sign language

Procedia PDF Downloads 187
120 Meiji Centennial as a Media Event: Ideas for Upcoming Turkish Republic Centennial

Authors: Hasan Topacoglu

Abstract:

The Meiji Restoration was a chain of events that restored imperial rule in Japan in 1868 and is considered by many scholars as the beginning of Japanese modernization. In 1968, to honor its modern incarnation, Japan celebrated the Meiji Centennial as one of the biggest media events in the country after World War II. It was celebrated all around the country throughout the year, with a central event in Tokyo. Meanwhile, Japanese scholars started an opposition movement and claimed that the Government was using the event to stoke nationalism, pointing to the Government’s statement on the meaning of Meiji. Unfortunately, most of the scholars were caught up in the ideological problem of the Government’s way of planning and evaluated the centennial as a failure. However, they missed an important point: apart from the central event in Tokyo, each city planned its own event and celebrated it on a different date, also with a different theme. For example, Kyoto showed a regional characteristic, focused on Kyoto’s own culture and traditions, and highlighted a past reaching further back than 100 years. This was mainly because some areas and cities had a different ‘memory’ of the Meiji Restoration than Tokyo, which was reflected in the way they celebrated the Meiji Centennial. On the other hand, 2023 will be the year of the Turkish Republic Centennial, a year which will be marked by national and perhaps even international events. Although an official committee has not been announced yet, the 2023 Vision, a list of goals, has been released by the Government to coincide with the centenary of the Republic of Turkey in 2023, and there are some ongoing projects that are planned to be completed by then. Looking at the content of these projects, it is possible to say that the Government is aiming to focus on modernization through the Centennial. However, some of the projects already show interesting characteristics, such as the Istanbul New Airport, whose design is inspired by the Islamic-Ottoman figure of the Selimiye Mosque. It is true that Turkey and Japan have different historical backgrounds and that the timelines of the Meiji Restoration and the foundation of the Turkish Republic are different; therefore, a direct comparison between these two events is not justified. However, they may have more in common than we might think, because each country marked the start of a new nation conceived on modern principles. For that reason, it is important to understand the similarities and differences between the Meiji Centennial and the Turkish Republic Centennial as media events. This study introduces the Meiji Centennial as a media event and analyses the opposition movement along with the meaning of the Meiji Centennial. Additionally, it explains regional differences in character, giving Kyoto as an example. Moreover, it introduces some of the ongoing centennial projects in Turkey and analyses the meaning of the Turkish Republic Centennial through these projects. Without comparing Japan and Turkey directly, it explains the case of Japan, but the discussion centers on deepening our understanding of the centennial as a media event and notes some important aspects for Turkey’s upcoming centennial events.

Keywords: media events, Meiji centennial, the 2023 vision, Turkish republic centennial

Procedia PDF Downloads 328
119 Electronic Structure Studies of Mn Doped La₀.₈Bi₀.₂FeO₃ Multiferroic Thin Film Using Near-Edge X-Ray Absorption Fine Structure

Authors: Ghazala Anjum, Farooq Hussain Bhat, Ravi Kumar

Abstract:

Multiferroic materials are vital for new applications and memory devices, not only because of the presence of multiple types of domains but also as a result of the cross correlation between coexisting forms of magnetic and electrical order. In spite of the wide studies done on multiferroic bulk ceramic materials, their realization in thin-film form is still limited due to some crucial problems. During the last few years, special attention has been devoted to the synthesis of thin films such as BiFeO₃, as they allow direct integration of the material into device technology. Therefore, as part of the exploration of new multiferroic thin films, a La₀.₈Bi₀.₂Fe₀.₇Mn₀.₃O₃ (LBFMO₃) thin film was prepared and characterized on a LaAlO₃ (LAO) substrate with LaNiO₃ (LNO) as the buffer layer. The fact that all the electrical and magnetic properties are closely related to the electronic structure makes it essential to study the electronic structure of the system under study; without this knowledge, one may never be sure about the mechanisms responsible for the different properties exhibited by the thin film. A literature review reveals that, with few exceptions, studies on changes in the atomic and hybridization states in multiferroic samples are still insufficient. The technique of x-ray absorption spectroscopy (XAS) has made great strides towards the goal of providing such information, as it yields a unique signature for a given material. In this context, an electronic structure study of the elements present in the LBFMO₃ multiferroic thin film on the LAO substrate with the LNO buffer layer, synthesized by the RF sputtering technique, is timely. We report electronic structure studies of this well-characterized film using near-edge X-ray absorption fine structure (NEXAFS). The present exploration was performed to find out the valence state and crystal field symmetry of the ions present in the system. NEXAFS data of the O K-edge spectra reveal a slight shift in peak position along with a growth in the intensities of the low-energy features. Studies of the Mn L₃,₂-edge spectra indicate the presence of a Mn³⁺/Mn⁴⁺ network, apart from a very small contribution from Mn²⁺ ions in the system, which substantiates the magnetic properties exhibited by the thin film. The Fe L₃,₂-edge spectra, along with the spectra of a reference compound, reveal that the Fe ions are present in the +3 state. The electronic structure and valence states are found to be in accordance with the magnetic properties exhibited by the LBFMO/LNO/LAO thin film.

Keywords: magnetic, multiferroic, NEXAFS, x-ray absorption fine structure, XMCD, x-ray magnetic circular dichroism

Procedia PDF Downloads 152
118 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations

Authors: Priyanka Bharti

Abstract:

Decades ago, designs were based on common sense and tradition, but with advances in visualization technology and research, we are now able to comprehend the cognitive abilities involved in the decoding of visual information. However, many areas of visual research still require intensive study to deliver efficient explanations of the relevant events. Visuals are a mode of information representation through images, symbols and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension, and analysis of a situation. They enhance problem-solving capabilities by enabling the processing of more data without overloading the decision maker. Research indicates that visuals offer an improved learning environment by a factor of 400 compared to textual information. Visual information engages learners at a cognitive level and triggers the imagination, which enables the user to process the information faster (visuals are processed 60,000 times faster in the brain than text). Appropriate information, its visualization, and its presentation are known to aid and intensify the decision-making process for users. However, most of the literature discusses the role of visual aids in comprehension and decision making during normal conditions alone. Unlike emergencies, in a normal situation (e.g. our day-to-day life) users are neither exposed to stringent time constraints nor face the anxiety of survival, and they have sufficient time to evaluate various alternatives before making any decision. An emergency is an unexpected, potentially fatal real-life situation which may inflict serious ramifications on both human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario in the absence or lack of any preparation, yet still take swift and appropriate decisions to save lives or possessions. But the resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in evaluating alternative options, and result in task shedding. Limited time, uncertainty, high stakes and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, the theory of naturalistic decision making by experts has been understood in far more depth than that of ordinary users. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension for appropriate decisions during an emergency situation.

Keywords: cognition, visual, decision making, graphics, recognition

Procedia PDF Downloads 268
117 The Degree Project-Course in Swedish Teacher Education – Deliberative and Transformative Perspectives on the Formative Assessment Practice

Authors: Per Blomqvist

Abstract:

The overall aim of this study is to highlight how the degree-project course in teacher education has developed over time at Swedish universities, above all regarding changes in formative assessment practices in relation to students' opportunities to take part in writing processes that can develop their independent critical thinking, subject knowledge, and academic writing skills. Theoretically, the study is based on deliberative and transformative perspectives on teaching academic writing in higher education. The deliberative perspective is motivated by the fact that it is the universities' and their departments' responsibility to give students opportunities to develop their academic writing skills, while there is little guidance on how this can be implemented. The transformative perspective is motivated by the fact that education needs to be adapted to students' prior knowledge and developed in relation to the student group; given the academisation of education and the new student groups, this is a necessity. The empirical data consist of video recordings of teacher groups' conversations at three Swedish universities. The conversations were conducted as so-called collective remembering interviews, a method that stimulates the participants' memory through social interaction, and focused on how the degree-project course in teacher education has changed over time. Topic analysis was used to analyze the conversations in order to identify common descriptions and expressions among the teachers. The results highlight great similarities in how the degree-project course has changed over time, from both a deliberative and a transformative perspective. The course is characterized by a 'strong framing', in which the teachers exert great control over the work through detailed instructions for the writing process and detailed templates for the text. This is justified by the education having been adapted to student teachers' lack of prior subject knowledge. The strong framing places high demands on continuous discussion between teachers about, for example, which tools the students bring with them and which linguistic and textual tools are offered in the education. The teachers describe that such governance often leads to conflicts between teachers from different departments, because reading and writing are always part of cultural contexts and are linked to different knowledge, traditions, and values. The problem made visible in this study raises questions about how students' opportunities to develop independence and make critical judgments in academic writing are affected if the writing becomes too controlled and if passing students becomes the main goal of the education.

Keywords: formative assessment, academic writing, degree project, higher education, deliberative perspective, transformative perspective

Procedia PDF Downloads 64
116 The Markers -mm and dämmo in Amharic: Developmental Approach

Authors: Hayat Omar

Abstract:

Languages provide speakers with a wide range of linguistic units to organize and deliver information. There are several ways to verbally express the mental representation of events; according to the linguistic tools they have acquired, speakers select the one that produces the greatest communicative effect. Our study focuses on two markers, -mm and dämmo, in Amharic (an Ethiopian Semitic language). Our aim is to examine, from a developmental perspective, how they are used by speakers, and to distinguish the communicative and pragmatic functions indicated by means of these markers. To do so, we created a corpus of sixty narrative productions by children aged 5-6, 7-8 and 10-12 years and by adult Amharic speakers. The experimental material used to collect the data is the textless picture book 'Frog, Where Are You?'. Although -mm and dämmo are each used in specific contexts, they are sometimes analyzed as interchangeable. The suffix -mm is complex and multifunctional: it marks the end of the negative verbal structure, appears in the relative structure of the imperfect, creates new words such as adverbials or pronouns, and also serves to coordinate words and sentences and to mark the link between macro-propositions within a larger textual unit. -mm has been analyzed as a marker of insistence, a topic-shift marker, an element of concatenation, a contrastive-focus marker, and a 'bisyndetic' coordinator. dämmo, on the other hand, has a more limited function and has attracted little attention; the only approach we could find analyzes it as a 'monosyndetic' coordinator. Setting these two elements side by side made it possible to understand their distinctive functions and refine their description. When it comes to marking a referent, the choice of -mm or dämmo is not neutral: it depends on whether the tagged argument is newly introduced, maintained, promoted or reintroduced. The presence of these morphemes signals the inter-phrastic link. The information is seized by anaphora or presupposition: -mm points upstream while dämmo points downstream, the latter requiring new information. The speaker uses -mm or dämmo according to what he assumes to be known to his interlocutors. The results show that although all speakers use both -mm and dämmo, their scope is not always the same and varies with age. dämmo is mainly used to mark a contrastive topic signalling the concomitance of events, and is more common in young children's narratives (F(3,56) = 3.82, p < .01). Some values of -mm (e.g. the additive) are acquired very early, while others appear rather late and increase with age (F(3,56) = 3.2, p < .03). The difficulty is due not only to its synthetic structure but primarily to the fact that it is multi-purpose and requires memory work; it highlights the constituent on which it operates to clarify how the message should be interpreted.
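
The group comparisons above rest on one-way ANOVAs with df = (3, 56), consistent with four groups of fifteen narratives each. A minimal sketch of such a test in Python follows; the per-narrative counts of dämmo are invented purely for illustration, since the abstract does not report raw frequencies.

from scipy.stats import f_oneway

# Hypothetical counts of dämmo per narrative for the four age groups
# (15 narratives per group, matching the reported df = (3, 56)).
age_5_6   = [5, 2, 6, 3, 4, 7, 1, 5, 3, 6, 2, 4, 5, 3, 4]
age_7_8   = [4, 3, 5, 2, 3, 6, 2, 4, 3, 5, 1, 3, 4, 2, 3]
age_10_12 = [3, 2, 4, 1, 3, 5, 2, 3, 2, 4, 1, 2, 3, 2, 3]
adults    = [2, 3, 1, 4, 2, 3, 1, 2, 3, 2, 4, 1, 2, 3, 2]

f_stat, p_value = f_oneway(age_5_6, age_7_8, age_10_12, adults)
print(f"F(3,56) = {f_stat:.2f}, p = {p_value:.4f}")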

Keywords: acquisition, cohesion, connection, contrastive topic, contrastive focus, discourse marker, pragmatics

Procedia PDF Downloads 133
115 A Review of Brain Implant Device: Current Developments and Applications

Authors: Ardiansyah I. Ryan, Ashsholih K. R., Fathurrohman G. R., Kurniadi M. R., Huda P. A.

Abstract:

The burden of brain-related disease is very high, and many brain-related diseases have limited treatment outcomes, which raises that burden further. Treatments for Parkinson's disease (PD), mental health problems, and paralysis of the extremities are of particular concern, as patients with these conditions usually have a low quality of life and little chance of full recovery. Many other brain and neural diseases are in a similar situation, mainly because treatments remain limited while our understanding of brain function is still insufficient. Brain implant technology has given hope for treating such conditions. In this paper, we examine the current state of brain implant technology. Neurotechnology is growing very rapidly worldwide. The United States Food and Drug Administration (FDA) has approved the use of Deep Brain Stimulation (DBS) as a brain implant in humans; among neural implants, the cochlear implant and the retinal implant are FDA-approved as well, and all of them have shown promising results. DBS works by electrically stimulating a specific region of the brain. The device is implanted surgically into a very specific region of the brain and consists of three main parts: the lead (a thin wire inserted into the brain), the neurostimulator (a pacemaker-like device implanted surgically in the chest) and an external controller (used by the patient or programmer to turn the device on and off). The FDA has approved DBS for the treatment of PD, pain management, epilepsy and obsessive-compulsive disorder (OCD). In PD, the treatment target of DBS is to reduce tremor and dystonia symptoms. DBS has shown promising results in animal studies and limited human trials for other conditions such as Alzheimer's disease and mental health problems (major depression, Tourette syndrome). Every surgery carries a risk of complications, although in DBS the chance is very low, and DBS itself gives very satisfying results as long as subjects are strictly selected for implantation according to the indication criteria. Besides DBS, several other brain implant devices are under development, including (but not limited to) implants to treat paralysis (in spinal cord injury or amyotrophic lateral sclerosis), enhance memory, reduce obesity, treat mental health problems and treat epilepsy. The potential of neurotechnology is vast: when brain function is fully understood and brain implants fully developed, this may be one of the major breakthroughs in human history, comparable to the discovery of fire. Support from every sector for further research is much needed to develop and unveil the true potential of this technology.

Keywords: brain implant, deep brain stimulation (DBS), Parkinson's disease

Procedia PDF Downloads 154
114 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

The manufacturing industry has been collecting vast amounts of data for monitoring product quality thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data, owing to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. At the same time, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts, which provide automation and transparency in the data exchange, coupled with anomaly detection algorithms that ensure reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense autoencoder models, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs that expose the functionality to end-users. The results of this work demonstrate that such a system can increase the quality of the end products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify quality records across the entire production chain and to take advantage of the multitude of monitoring records in their databases.
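
The ingestion flow described above can be sketched as follows. A simple median-absolute-deviation gate stands in for the paper's ARIMA/LSTM/autoencoder/GAN ensemble, and a plain Python list stands in for the Hyperledger Fabric ledger; this illustrates the pattern (anomaly gate, off-chain storage, on-chain hash pointer) rather than the authors' implementation.

import hashlib
import json
from statistics import median

def is_anomalous(value: float, history: list, k: float = 5.0) -> bool:
    # Median-absolute-deviation gate; a deliberately simple stand-in for
    # the ARIMA / LSTM / autoencoder / GAN ensemble named in the abstract.
    med = median(history)
    mad = median(abs(x - med) for x in history) or 1e-9
    return abs(value - med) / mad > k

off_chain_store = {}   # sensitive raw records stay off-chain
ledger = []            # stand-in for the Hyperledger Fabric ledger

def ingest(record: dict, history: list) -> bool:
    """Gate a sensor record, then put only an anonymized pointer on-chain."""
    if is_anomalous(record["value"], history):
        return False   # rejected before ever reaching the chain
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    off_chain_store[digest] = record       # actual data remains off-chain
    ledger.append({"pointer": digest})     # what the smart contract stores
    return True

history = [20.1, 19.8, 20.3, 20.0, 19.9]
print(ingest({"sensor": "line-3-temp", "value": 20.2}, history))   # True: accepted
print(ingest({"sensor": "line-3-temp", "value": 95.0}, history))   # False: anomaly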

Keywords: blockchain, data quality, Industry 4.0, product quality

Procedia PDF Downloads 188
113 The Negative Implications of Childhood Obesity and Malnutrition on Cognitive Development

Authors: Stephanie Remedios, Linda Veronica Rios

Abstract:

Background. Pediatric obesity is a serious health problem linked to multiple physical diseases and ailments, including diabetes, heart disease, and joint problems. While research has shown that pediatric obesity can bring about an array of physical illnesses, less is known about how the condition affects children's cognitive development. With childhood overweight and obesity prevalence rates on the rise, it is essential to understand the scope of their cognitive consequences. The present review of the literature tested the hypothesis that poor physical health, such as childhood obesity or malnutrition, negatively impacts a child's cognitive development. Methodology. A systematic review was conducted to determine the relationship between poor physical health and lower cognitive functioning in children aged 4-16. Electronic databases were searched for studies from the past ten years; the following databases were used: Science Direct, FIU Libraries, and Google Scholar. Inclusion criteria consisted of peer-reviewed academic articles written in English from 2012 to 2022 that analyzed the relationship of childhood malnutrition and obesity to cognitive development. A total of 17,000 articles were obtained, of which 16,987 were excluded for not addressing the cognitive implications exclusively; 13 articles were retained. Results. The research suggested a significant connection between diet and cognitive development. Both diet and physical activity are strongly correlated with higher cognitive functioning. Cognitive domains explored in this work included learning, memory, attention, inhibition, and impulsivity; IQ scores were also considered objective representations of overall cognitive performance. Studies showed that physical activity benefits cognitive development, primarily executive functioning and language development. Additionally, children suffering from pediatric obesity or malnutrition were found to score 3-10 points lower on IQ tests than healthy, same-aged children. Conclusion. This review provides evidence that physical activity and overall physical health, including an appropriate diet and nutritional intake, have beneficial effects on cognitive outcomes. The primary conclusion of this research is that childhood obesity and malnutrition have detrimental effects on children's cognitive development, primarily on learning outcomes. Assuming childhood obesity and malnutrition rates continue their current trend, it is essential to understand the complete physical and psychological implications of obesity and malnutrition in pediatric populations. Given the limitations encountered in our research, further studies are needed to evaluate the areas of cognition affected during childhood.

Keywords: childhood malnutrition, childhood obesity, cognitive development, cognitive functioning

Procedia PDF Downloads 117
112 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation (SMT) model and one from a neural machine translation (NMT) model. The vector representations consist of automatically extracted word embeddings and string-like language-independent features, and serve as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known baseline approaches, such as Random Forest (RF) and Support Vector Machine (SVM). Better accuracy results are obtained when LSTM layers are used in our schema, while in terms of balance between the classes, better results are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results for Italian than for Greek, taking into account that the linguistic features employed are language independent.
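
As one way to realize the pairwise scheme described above, the following sketch builds a three-encoder network in TensorFlow/Keras that ingests the SMT output, the NMT output, and the reference; the sequence length, embedding size, layer widths and optimizer settings are assumptions for illustration, not the configuration used in the paper.

import tensorflow as tf
from tensorflow.keras import Model, layers

# Assumed sizes; the abstract does not publish the exact configuration.
SEQ_LEN = 40    # maximum sentence length in tokens
EMB_DIM = 100   # word-embedding dimensionality

def sentence_encoder():
    """One LSTM encoder per input text (SMT output, NMT output, reference)."""
    inp = layers.Input(shape=(SEQ_LEN, EMB_DIM))
    vec = layers.LSTM(64)(inp)
    return inp, vec

smt_in, smt_vec = sentence_encoder()
nmt_in, nmt_vec = sentence_encoder()
ref_in, ref_vec = sentence_encoder()

# Model the similarity between each MT output and the reference, and
# between the two MT outputs, via a shared dense classification head.
merged = layers.concatenate([smt_vec, nmt_vec, ref_vec])
hidden = layers.Dense(32, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid")(hidden)  # 1 = NMT judged better

model = Model(inputs=[smt_in, nmt_in, ref_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

Swapping the LSTM encoder for a Conv1D stack or removing it in favor of dense layers over pooled embeddings yields the other two architectures compared in the paper.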

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 130
111 Transgenerational Impact of Intrauterine Hyperglycaemia to F2 Offspring without Pre-Diabetic Exposure on F1 Male Offspring

Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng

Abstract:

An adverse intrauterine stimulus during critical or sensitive periods in early life may lead to health risks not only in the later life span but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and the F1 gamete cells. However, there is scarce information on the phenotypic differences of metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia, and the direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as founders, to exclude postnatal diabetic influence on the gametes and thereby investigate the direct transmission effect of intrauterine hyperglycaemia exposure on F2 offspring; we further compared the metabolic differences between affected F1-GDM male offspring and F2 offspring. A GDM mouse model of intrauterine hyperglycaemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers, and all mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain F2 offspring. Body weight, glucose tolerance tests, insulin tolerance tests (ITT) and the homeostasis model assessment of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some of the F1-GDM male mice showed impaired glucose tolerance (p < 0.001), but none showed impaired insulin sensitivity, and their body weight did not differ significantly from that of control mice. Some of the F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all F2-GDM offspring exhibited a higher HOMA-IR index (p < 0.01 for normal-glucose-tolerance individuals vs. control; p < 0.05 for glucose-intolerant individuals vs. control). All F2-GDM offspring exhibited higher ITT curves than controls (p < 0.001 for normal-glucose-tolerance individuals and p < 0.05 for glucose-intolerant individuals, vs. control), and F2-GDM offspring had higher body weight than control mice (p < 0.001 for both normal-glucose-tolerance and glucose-intolerant individuals, vs. control). While glucose intolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male offspring of phenotypically healthy F1-GDM fathers showed insulin resistance, increased body weight and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently, so that the F1 and F2 offspring demonstrate distinct metabolic dysfunction phenotypes, and that intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure.
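
The HOMA-IR index reported above is conventionally computed with the homeostasis-model formula of Matthews et al.; a minimal sketch follows. The numeric values are illustrative only, as the abstract does not report raw glucose and insulin measurements, and the 22.5 normalization constant is calibrated for humans.

def homa_ir(fasting_glucose_mmol_per_l: float, fasting_insulin_uU_per_ml: float) -> float:
    # HOMA-IR = fasting glucose [mmol/L] x fasting insulin [uU/mL] / 22.5
    return fasting_glucose_mmol_per_l * fasting_insulin_uU_per_ml / 22.5

# Illustrative values only (not from the study):
print(homa_ir(5.5, 10.0))  # -> ~2.44; higher values indicate greater insulin resistance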

Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring

Procedia PDF Downloads 236
110 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, and oil extraction. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract the porosity, medial axis structure, and pore-throat size distribution of porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries; an iterative process then removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize execution time and computer memory use, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burn layers collide; the final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. A graphical user interface was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input, calculates porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats. Preliminary tests of the software achieved medial axis, pore-throat size distribution and porosity determination for 100³, 320³ and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
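As a minimal illustration of the burn procedure described above, the following Python sketch assigns burn numbers by repeated binary erosion of the void space and takes local maxima of the burn field as a crude medial-axis proxy; the array sizes and the medial-axis rule are simplified stand-ins for the authors' optimized implementation.

import numpy as np
from scipy import ndimage

def burn_numbers(void: np.ndarray) -> np.ndarray:
    """Layer-by-layer 'burn' of the void space of a binarized domain.

    void: 3-D boolean array, True = pore voxel, False = solid voxel.
    Returns an integer array holding, for each void voxel, the iteration
    at which it was removed; solid voxels stay 0.
    """
    burn = np.zeros(void.shape, dtype=np.int32)
    remaining = void.copy()
    layer = 0
    while remaining.any():
        layer += 1
        # Voxels surviving one erosion are interior; the difference is the
        # boundary layer burnt in this iteration (border_value=0 also burns
        # voxels touching the domain boundary).
        interior = ndimage.binary_erosion(remaining, border_value=0)
        burn[remaining & ~interior] = layer
        remaining = interior
    return burn

# Example on a small synthetic domain (True = void):
void = np.ones((20, 20, 20), dtype=bool)
void[5:8, 5:8, 5:8] = False          # a single solid grain
burn = burn_numbers(void)
porosity = void.mean()

# Crude medial-axis proxy: void voxels whose burn number is a local maximum,
# i.e. where opposing burn fronts collided.
medial = (burn == ndimage.maximum_filter(burn, size=3)) & void
print(porosity, medial.sum())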

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 114
109 Literature Review on the Controversies and Changes in the Insanity Defense since the Wild Beast Standard in 1723 until the Federal Insanity Defense Reform Act of 1984

Authors: Jane E. Hill

Abstract:

Many variables led to changes in the insanity defense between the Wild Beast Standard of 1723 and the Federal Insanity Defense Reform Act of 1984. The insanity defense is used in criminal trials to argue that the defendant is 'not guilty by reason of insanity' because the individual was unable to distinguish right from wrong at the time they were breaking the law. Whether the insanity defense can be used in a criminal court depends on the mental state of the defendant at the time the criminal act was committed, which leads to the question: did the defendant know right from wrong when they broke the law? In 1723, the Wild Beast Test stated that, to be exempted from punishment, the individual must be totally deprived of understanding and memory and 'doth not know what he is doing'. The Wild Beast Test remained the standard in England for over seventy-five years. In 1800, James Hadfield attempted to assassinate King George III, making the attempt only because of delusional beliefs, and the jury and judge gave a verdict of not guilty. However, to legally confine him, the Criminal Lunatics Act was enacted: individuals deemed 'criminal lunatics' who were given a verdict of not guilty would be taken into custody rather than freed into society. In 1843, the M'Naghten test required that the individual did not know the quality or the wrongfulness of the offense at the time they committed the criminal act(s); Daniel M'Naghten was acquitted on grounds of insanity, and the M'Naghten test is still a modern formulation of the insanity defense used in many courts today. The Irresistible Impulse Test, adopted in the United States in 1887, suggested that offenders who could not control their behavior while committing a criminal act were not deterrable by the criminal sanctions in place, so no purpose would be served by convicting them. Owing to criticisms of the latter two standards, the federal District of Columbia Court of Appeals ruled in 1954 to adopt the 'product test' of Isaac Ray for insanity. The Durham rule, also known as the 'product test', stated that an individual is not criminally responsible if the unlawful act was the product of mental disease or defect. Two questions therefore needed to be asked and answered: (1) did the individual have a mental disease or defect at the time they broke the law? and (2) was the criminal act the product of that disease or defect? The Durham courts failed to clearly define 'mental disease' or 'product', so trial courts had difficulty interpreting the terms, and the controversy continued until 1972, when the Durham rule was overturned in most jurisdictions. The American Law Institute then combined the M'Naghten test with the irresistible impulse test, and the United States Congress adopted an insanity test for the federal courts in 1984.

Keywords: insanity defense, psychology law, The Federal Insanity Defense Reform Act of 1984, The Wild Beast Standard in 1723

Procedia PDF Downloads 143
108 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches for short-term latency prediction. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of the short-term latency of a freeway segment 17 km in length covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment at different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, with implications for future system-wide latency predictions. (1) The prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers; this is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in mean square error over the 5-minute averaged latencies. (2) The median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM) networks. (3) By analyzing the feature importance scores in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked by importance: the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative; the inputs are also much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing the predicted latencies of each sub-segment is more accurate than one-step prediction of the whole segment, especially when the latency prediction of the downstream sub-segments is trained to anticipate latencies several minutes ahead; the duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
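
A minimal sketch of the baseline-versus-XGBoost comparison described in points (1)-(3) is given below; the file name, column names and hyperparameters are illustrative assumptions, not the study's actual data layout.

import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Hypothetical table: one row per 5-minute window, with past median
# latencies at several lags, per-segment traffic conditions, and the
# target median latency 5 minutes ahead. Column names are illustrative.
df = pd.read_csv("freeway_windows.csv")
X = df.drop(columns=["median_latency_t+5"])
y = df["median_latency_t+5"]

# Keep the time ordering when splitting.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

# Baseline: the median latency observed 15 minutes ago.
baseline_mse = mean_squared_error(y_te, X_te["median_latency_t-15"])

model = XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05)
model.fit(X_tr, y_tr)
model_mse = mean_squared_error(y_te, model.predict(X_te))
print(f"baseline MSE: {baseline_mse:.2f}, XGBoost MSE: {model_mse:.2f}")

# Rank inputs, mirroring the reported finding that past latencies are most
# informative, followed by the total accumulation.
ranking = sorted(zip(model.feature_importances_, X.columns), reverse=True)
print(ranking[:5])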

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 167