Search results for: deceptive features
3024 Investigating Informal Vending Practices and Social Encounters along Commercial Streets in Cairo, Egypt
Authors: Dalya M. Hassan
Abstract:
Marketplaces and commercial streets represent some of the most used and lively urban public spaces. Not only do they provide an outlet for commercial exchange, but they also facilitate social and recreational encounters. Such encounters can be influenced by both formal and informal vending activities. This paper explores and documents forms of informal vending practices and how they relate to social patterns that occur along the sidewalks of commercial streets in Cairo. A qualitative single case study approach of the ‘Midan El Gami’ marketplace in Heliopolis, Cairo, is adopted. The methodology applied includes direct and walk-by observations of two main commercial streets in the marketplace. Four zoomed-in activity maps are also produced for three sidewalk segments that displayed varying vending and social features. Main findings include a documentation and classification of types of informal vending practices as well as a documentation of vendors’ distribution patterns in the urban space. Informal vending activities mainly included informal street vendors and shop spillovers, either as product or seating spillovers. Results indicated that staying and lingering activities were more prevalent on sidewalks that had certain physical features, such as diversity of shops, shaded areas, open frontages, and product or seating spillovers. Moreover, differences in social activity patterns were noted between sidewalks with street vendors and sidewalks with spillovers. While the former displayed more buying, selling, and people-watching activities, the latter displayed more social relations and bonds amongst traders’ communities and café patrons. Ultimately, this paper provides documentation which suggests that informal vending can have a positive influence on creating a lively commercial street and on the resulting patterns of use of the sidewalk space. The results can provide a basis for further investigation and analysis of this topic.
This could aid in better accommodating informal vending activities within the design of future commercial streets.
Keywords: commercial streets, informal vending practices, sidewalks, social encounters
Procedia PDF Downloads 163
3023 Discourse Analysis and Semiotic Researches: Using Michael Halliday's Sociosemiotic Theory
Authors: Deyu Yuan
Abstract:
Discourse analysis as an interdisciplinary approach has a history of more than 60 years, since it was first named by Zellig Harris in 'Discourse Analysis', published in Language in 1952. Ferdinand de Saussure differentiated 'parole' from 'langue', which established the principle of focusing on language rather than speech. The rise of discourse analysis can thus be seen as a discursive turn for language research as a whole, closely related to speech act theory. Critical discourse analysis has become the mainstream of contemporary language research by drawing upon M. A. K. Halliday's socio-semiotic theory and Foucault's, Barthes's, and Bourdieu's views on the sign, discourse, and ideology. In contrast to general semiotics, social semiotics mainly focuses on parole and on applying semiotic theories to practical fields. The article attempts to discuss this applicable sociosemiotics and to show the features that distinguish it from Saussurean and Peircean semiotics in four aspects: 1) the sign system is a meaning-generation resource in the social context; 2) the sign system conforms to social and cultural changes in the form of metaphor and connotation; 3) sociosemiotics concerns five applicable principles, namely the personal authority principle, the non-personal authority principle, the consistency principle, the model demonstration principle, and the expertise principle, to deepen specific communication; 4) the study of symbolic functions targets the characteristics of the ideational, interpersonal, and interactional functions in the social communication process. The paper then describes six features that characterize this sociosemiotics as applicable semiotics: its social, systematic, usable, interdisciplinary, dynamic, and multi-modal character. Thirdly, the paper explores the multi-modal choices of sociosemiotics with respect to genre, discourse, and style.
Finally, the paper discusses the relationship between theory and practice in social semiotics and proposes a relatively comprehensive theoretical framework for social semiotics as applicable semiotics.
Keywords: discourse analysis, sociosemiotics, pragmatics, ideology
Procedia PDF Downloads 351
3022 6G: Emerging Architectures, Technologies and Challenges
Authors: Abdulrahman Yarali
Abstract:
The advancement of technology never stops because the demands for improved internet and communication connectivity keep increasing. Just as 5G networks are rolling out, the world has begun to talk about sixth-generation (6G) networks. The aims of 6G are broadly the same as those of 5G, in that both strive to boost speeds, machine-to-machine (M2M) communication, and latency reduction. However, some of the distinctive focuses of 6G include the optimization of networks of machines through super speeds and innovative features. This paper discusses many aspects of the technologies, architectures, challenges, and opportunities of 6G wireless communication systems.
Keywords: 6G, characteristics, infrastructures, technologies, AI, ML, IoT, applications
Procedia PDF Downloads 25
3021 A Pragmatic Approach of Memes Created in Relation to the COVID-19 Pandemic
Authors: Alexandra-Monica Toma
Abstract:
Internet memes are an element of computer-mediated communication and an important part of online culture that combines text and image in order to generate meaning. The term, coined by Richard Dawkins, refers to more than a mere way to briefly communicate ideas or emotions, naming instead a complex and intensely perpetuated phenomenon in the virtual environment. This paper approaches memes as a cultural artefact and a virtual trope that mirrors societal concerns and issues, and analyses the pragmatics of their use. Memes have to be analysed in series, usually relating to some image macro, which is proof of the interplay between imitation and creativity in the meme-writing process. We believe that their potential to become viral relates to three key elements: adaptation to context, reference to a successful meme series, and humour (jokes, irony, sarcasm), with various pragmatic functions. The study also uses the concept of multimodality and stresses how the memes’ text interacts with the image, discussing three types of relations: symmetry, amplification, and contradiction. Moreover, the paper shows that memes can be employed as speech acts with illocutionary force when the interaction between text and image is enriched through the connection to a specific situation. The features mentioned above are analysed in a corpus that consists of memes related to the COVID-19 pandemic. This corpus shows them to be highly adaptable to context, which helps build a feeling of connection and belonging in an otherwise tremendously fragmented world. Some of them are created based on well-known image macros, and their humour results from an intricate dialogue between texts and contexts. Memes created in relation to the COVID-19 pandemic can be considered speech acts and are often used as such, as proven in the paper.
Consequently, this paper tackles the key features of memes, makes a thorough analysis of the memes’ sociocultural, linguistic, and situational context, and emphasizes their intertextuality, with special accent on their illocutionary potential.
Keywords: context, memes, multimodality, speech acts
Procedia PDF Downloads 200
3020 The Use of a Miniature Bioreactor as Research Tool for Biotechnology Process Development
Authors: Muhammad Zainuddin Arriafdi, Hamudah Hakimah Abdullah, Mohd Helmi Sani, Wan Azlina Ahmad, Muhd Nazrul Hisham Zainal Alam
Abstract:
Biotechnology process development demands numerous experimental works. In a laboratory environment, this is typically carried out using a shake flask platform. This paper presents the design and fabrication of a miniature bioreactor system as an alternative research tool for bioprocessing. The working volume of the reactor is 100 ml, and it is made of plastic. The main features of the reactor include stirring control, temperature control via an electrical heater, an aeration strategy through a miniature air compressor, and online optical cell density (OD) sensing. All sensors and actuators integrated into the reactor were controlled using an Arduino microcontroller platform. In order to demonstrate the functionality of the miniature bioreactor concept, a series of batch Saccharomyces cerevisiae fermentation experiments was performed under various glucose concentrations. Results attained from the fermentation experiments were used to solve for the Monod equation constants, namely the saturation constant, Ks, and the cells' maximum growth rate, μmax, to further highlight the usefulness of the device. The mixing capacity of the reactor was also evaluated. It was found that the results attained from the miniature bioreactor prototype were comparable to results achieved using a shake flask. The unique feature of the device compared to a shake flask platform is that the reactor's mixing condition is much closer to a lab-scale bioreactor setup. The prototype is also integrated with an online OD sensor, and as such, no sampling is needed to monitor the progress of the reaction performed. Operating cost and medium consumption are also low, making it much more economical for biotechnology process development than lab-scale bioreactors.
Keywords: biotechnology, miniature bioreactor, research tools, Saccharomyces cerevisiae
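The Monod-constant estimation mentioned in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it fits μmax and Ks by least squares on the Lineweaver-Burk linearization 1/μ = (Ks/μmax)(1/S) + 1/μmax, and the glucose concentrations and growth rates below are synthetic placeholders.

```python
# Hypothetical sketch: estimating the Monod constants (mu_max, Ks) from
# specific growth rates measured at several glucose concentrations, via the
# linearization 1/mu = (Ks/mu_max)*(1/S) + 1/mu_max.

def fit_monod(substrate, growth_rate):
    """Least-squares fit of the linearized Monod equation.

    substrate   -- glucose concentrations S (e.g., g/L)
    growth_rate -- observed specific growth rates mu (e.g., 1/h)
    Returns (mu_max, Ks).
    """
    xs = [1.0 / s for s in substrate]      # 1/S
    ys = [1.0 / mu for mu in growth_rate]  # 1/mu
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
            / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    mu_max = 1.0 / intercept               # intercept = 1/mu_max
    ks = slope * mu_max                    # slope = Ks/mu_max
    return mu_max, ks

# Synthetic, noiseless data generated from mu_max = 0.5 1/h, Ks = 2.0 g/L
S = [0.5, 1.0, 2.0, 5.0, 10.0]
mu = [0.5 * s / (2.0 + s) for s in S]
print(fit_monod(S, mu))  # recovers the generating constants
```

With noisy experimental data a nonlinear fit of μ = μmax·S/(Ks+S) would normally be preferred over the linearization, which amplifies error at low substrate concentrations.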
Procedia PDF Downloads 117
3019 A Mathematical Framework for Expanding a Railway’s Theoretical Capacity
Authors: Robert L. Burdett, Bayan Bevrani
Abstract:
Analytical techniques for measuring and planning railway capacity expansion activities are considered in this article. A preliminary mathematical framework involving track duplication and section subdivisions is proposed for this task. In railways, these features have a great effect on network performance, and for this reason they have been considered. Additional motivation has also arisen from the limitations of prior models that have not included them.
Keywords: capacity analysis, capacity expansion, railways, track subdivision, track duplication
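The effect of the two expansion levers named above can be illustrated with an idealized sectional-capacity calculation. This is not the paper's framework: the model below simply treats theoretical capacity as the number of blocking times fitting into an operating window, and the window and blocking-time values are hypothetical.

```python
# Idealized sketch: theoretical capacity of one railway section, expressed as
# trains per operating window. Duplicating the track multiplies the number of
# parallel paths; subdividing the section shortens the blocking time per block,
# so a following train can enter sooner.

def section_capacity(window_min, blocking_min, tracks=1, subdivisions=1):
    """Trains per operating window for one section (idealized model)."""
    effective_blocking = blocking_min / subdivisions  # shorter blocks release sooner
    return tracks * int(window_min // effective_blocking)

base       = section_capacity(window_min=1080, blocking_min=12)  # 18 h window
duplicated = section_capacity(1080, 12, tracks=2)
subdivided = section_capacity(1080, 12, subdivisions=2)
print(base, duplicated, subdivided)  # both interventions double the idealized figure
```

Real capacity models add headway buffers, heterogeneous train speeds, and junction conflicts, which is why the paper develops a full mathematical framework rather than a per-section count.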
Procedia PDF Downloads 359
3018 A Trends Analysis of Yacht Simulator
Authors: Jae-Neung Lee, Keun-Chang Kwak
Abstract:
This paper presents an analysis of international trends in yacht simulators and also provides background on yachts. Techniques applied in yacht simulators include image processing for counting the total number of vehicles, edge/target detection, detection and evasion algorithms, image matching using SIFT (scale-invariant feature transform), and the application of median filtering and thresholding.
Keywords: yacht simulator, simulator, trends analysis, SIFT
Procedia PDF Downloads 432
3017 Designing an Operational Control System for the Continuous Cycle of Industrial Technological Processes Using Fuzzy Logic
Authors: Teimuraz Manjapharashvili, Ketevani Manjaparashvili
Abstract:
Fuzzy logic is a modeling method for complex or ill-defined systems and a relatively new mathematical approach. Its basis is to consider overlapping cases of parameter values and to define operations for manipulating these cases. Fuzzy logic can successfully underpin automatic operational management or appropriate advisory systems. Fuzzy logic techniques in various operational control technologies have grown rapidly in the last few years, and fuzzy logic is used in many areas of human technological activity. In recent years, fuzzy logic has proven its great potential, especially in the automation of industrial process control, where it allows a control design to be formed based on the experience of experts and the results of experiments. Chemical process engineering uses fuzzy logic in optimal management, and it is also used in process control, including the operational control of continuous-cycle chemical industrial technological processes, where special features appear due to the continuous cycle and correct management acquires special importance. This paper discusses how intelligent systems can be developed, in particular, how fuzzy logic can be used to build knowledge-based expert systems in chemical process engineering. The implemented projects reveal that the use of fuzzy logic in technological process control has already given better solutions than standard control techniques. Fuzzy logic makes it possible to develop an advisory system for decision-making based on the historical experience of the managing operator and of experienced experts. The present paper deals with operational control and management systems for continuous-cycle chemical technological processes, including advisory systems. Because of the continuous cycle, such systems have many features not found in the operational control of other chemical technological processes.
Among these features are a greater risk of transitioning to emergency mode, and the need to return from emergency mode to normal mode very quickly, since the technological process cannot be stopped and defective products are released (i.e., a loss is incurred) during this period; accordingly, a highly qualified operator is needed to manage the process. For these reasons, operational control systems for continuous-cycle chemical technological processes have been specifically discussed, as they are distinct systems. The special features of such systems in control and management were brought out, and these determine how control and management systems are constructed. To verify the findings, the development of an advisory decision-making information system for the operational control of a lime kiln using fuzzy logic, based on the creation of a relevant expert-targeted knowledge base, was discussed. The control system has been implemented in a real lime production plant with a lime-burning kiln, which has shown that suitable, intelligent automation improves operational management, reduces the risk of releasing defective products, and, therefore, reduces costs. The advisory system was successfully used in the said plant both to improve operational management and, when necessary, to train new operators, given the lack of an appropriate training institution.
Keywords: chemical process control systems, continuous cycle industrial technological processes, fuzzy logic, lime kiln
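The abstract does not give the system's rule base, so the following is only a minimal, hypothetical sketch of the Mamdani-style inference such an advisory system typically rests on: triangular membership functions, rule activation, and a weighted-average defuzzification. The kiln variable ranges and the two rules are illustrative, not the plant's actual knowledge base.

```python
# Hypothetical fuzzy-advisory sketch: map a kiln temperature reading to a
# suggested fuel-rate adjustment using two fuzzy rules.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def advise_fuel(temp_c):
    """Fuel-rate adjustment (%) from temperature via weighted-average defuzzification."""
    low = tri(temp_c, 900, 1000, 1100)    # degree to which temperature is LOW
    high = tri(temp_c, 1000, 1100, 1200)  # degree to which temperature is HIGH
    # Rule 1: IF temperature is LOW  THEN increase fuel (+10%)
    # Rule 2: IF temperature is HIGH THEN decrease fuel (-10%)
    num = low * 10.0 + high * (-10.0)
    den = low + high
    return num / den if den else 0.0

print(advise_fuel(1000), advise_fuel(1050), advise_fuel(1100))
```

Overlapping memberships are what give the smooth interpolation between expert rules: at 1050 °C the reading is equally LOW and HIGH, so the two rules cancel and no adjustment is advised.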
Procedia PDF Downloads 28
3016 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals
Authors: Christine F. Boos, Fernando M. Azevedo
Abstract:
Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma, and brain death; locating damaged areas of the brain after head injury, stroke, and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science, and its diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptogenic zone, assist in the planning of drug treatment, and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings, at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex, and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification.
One of the differences between these methodologies is the type of input stimulus presented to the networks, i.e., how the EEG signal is introduced into the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms, and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of them. The performance using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and the STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the morphological descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing
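One of the five input-stimulus types above, the FFT spectrum, can be sketched with no external dependencies as follows. The "EEG segment" is a synthetic sinusoid standing in for real data, and a naive O(N²) DFT is used for self-containment; a real pipeline would use an FFT routine.

```python
# Illustrative sketch of a spectral input stimulus for a classifier network:
# the magnitude spectrum of a short signal segment, computed with a naive DFT.
import cmath
import math

def dft_magnitude(signal):
    """Magnitude spectrum |X[k]| of a real-valued signal (naive O(N^2) DFT)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Synthetic 'EEG' segment: a 5-cycle sinusoid over 64 samples
x = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
spectrum = dft_magnitude(x)
peak_bin = max(range(len(spectrum) // 2), key=lambda k: spectrum[k])
print(peak_bin)  # the energy concentrates in bin 5
```

Feeding such a spectrum (or a sequence of them, as in an STFT spectrogram) to the network replaces raw time-domain samples with frequency-domain energy, which is the distinction the study evaluates.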
Procedia PDF Downloads 528
3015 Reduced Lung Volume: A Possible Cause of Stuttering
Authors: Shantanu Arya, Sachin Sakhuja, Gunjan Mehta, Sanjay Munjal
Abstract:
Stuttering may be defined as a speech disorder affecting the fluency domain of speech, characterized by covert features, such as word substitution, omission, and circumlocution, and overt features, such as prolongation of sounds, syllables, and blocks. Many etiologies have been postulated to explain stuttering based on various experiments and research. Moreover, breathlessness has been reported by many individuals with stuttering, for which breathing exercises are generally advised. However, no studies reporting an objective evaluation of pulmonary capacity, with further objective assessment of the efficacy of breathing exercises, have been conducted. The Pulmonary Function Test (PFT), which evaluates parameters such as Forced Vital Capacity, Peak Expiratory Flow Rate, and Forced Expiratory Flow Rate, can be used to study the pulmonary behavior of individuals with stuttering. The study aimed: a) to identify speech motor and physiologic behaviors associated with stuttering by administering the PFT; b) to recognize possible reasons for an association between speech motor behavior and stuttering severity. In this regard, PFT tests were administered to individuals who reported signs and symptoms of stuttering and showed abnormal scores on the Stuttering Severity Index. Parameters including Forced Vital Capacity, Forced Expiratory Volume, Peak Expiratory Flow Rate (L/min), and Forced Expiratory Flow Rate (L/min) were evaluated and correlated with scores on the Stuttering Severity Index. Results showed a significant decrease in these parameters (lower than normal scores) in individuals with established stuttering. A strong correlation was also found between the degree of stuttering and the degree of decrease in pulmonary volumes. Thus, it is evident that fluent speech requires the strong support of lung pressure and the requisite volumes.
Further research demonstrating the efficacy of abdominal breathing exercises in this regard is needed.
Keywords: forced expiratory flow rate, forced expiratory volume, forced vital capacity, peak expiratory flow rate, stuttering
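The correlation analysis described in this abstract can be sketched as a Pearson correlation between severity scores and one pulmonary parameter. The paired values below are hypothetical placeholders, not the study's data, and simply illustrate the expected direction of the relationship (higher severity, lower lung volume).

```python
# Minimal sketch: Pearson's r between stuttering-severity scores and forced
# vital capacity. All numbers are hypothetical illustrations.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

severity = [12, 18, 25, 31, 38]       # hypothetical SSI scores
fvc      = [4.8, 4.3, 3.9, 3.4, 2.9]  # hypothetical forced vital capacity (L)
r = pearson_r(severity, fvc)
print(round(r, 2))  # strongly negative in this illustration
```

A strongly negative r here would mirror the reported finding that pulmonary volumes decrease as stuttering severity increases.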
Procedia PDF Downloads 275
3014 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk pupils. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for Higher Education Institutions. Data was gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. Hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
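The hard majority-vote combination that performed best in this study can be sketched in a few lines: each base classifier emits one label per student, and the most common label wins. The per-model predictions below are hypothetical placeholders for the outputs of the trained Decision Tree, SVM, and ANN models.

```python
# Minimal sketch of hard majority voting over per-model label predictions.
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-model label lists into one ensemble prediction per sample."""
    ensemble = []
    for labels in zip(*predictions_per_model):       # one tuple per sample
        ensemble.append(Counter(labels).most_common(1)[0][0])
    return ensemble

# Hypothetical outputs of three trained base classifiers on four students
tree_pred = ["pass", "fail", "pass", "fail"]
svm_pred  = ["pass", "pass", "pass", "fail"]
ann_pred  = ["fail", "fail", "pass", "fail"]
print(majority_vote([tree_pred, svm_pred, ann_pred]))  # ['pass', 'fail', 'pass', 'fail']
```

With an odd number of base models over two classes there is always a strict winner; frameworks such as scikit-learn expose the same idea as a hard-voting ensemble classifier.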
Procedia PDF Downloads 108
3013 Improving Security by Using Secure Servers Communicating via Internet with Standalone Secure Software
Authors: Carlos Gonzalez
Abstract:
This paper describes the use of the Internet as a feature to enhance the security of software that is to be distributed or sold to users potentially all over the world. By placing some of the features of the secure software in a secure server, we increase the security of that software. The communication between the protected software and the secure server is done by a double-lock algorithm. This paper also includes an analysis of intruders and describes possible responses to detected threats.
Keywords: internet, secure software, threats, cryptography process
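The abstract does not specify its double-lock algorithm, so the sketch below shows one classical construction in that spirit: a Shamir-style three-pass protocol, where the server and client each apply their own independent "lock" (commutative modular exponentiation) and the message is never sent unlocked. The prime and exponents are toy values chosen only for illustration.

```python
# Hypothetical illustration of a two-lock exchange: Shamir's three-pass
# protocol with commutative exponentiation modulo a prime P. Each party's
# lock is an exponent e with inverse d modulo P - 1, so locks commute and
# can be removed in either order. Toy parameters; not production crypto.

P = 2**127 - 1  # a Mersenne prime; exponents must be coprime to P - 1

def lock(message, exponent):
    return pow(message, exponent, P)

def make_key(e):
    """Return (e, d) with e*d = 1 mod P-1, so lock(lock(m, e), d) == m."""
    return e, pow(e, -1, P - 1)  # raises ValueError if e is not coprime to P-1

e_server, d_server = make_key(65537)
e_client, d_client = make_key(257)

secret = 123456789
step1 = lock(secret, e_server)    # server applies its lock and sends
step2 = lock(step1, e_client)     # client adds its own lock and returns
step3 = lock(step2, d_server)     # server removes its lock (locks commute)
recovered = lock(step3, d_client) # client removes its lock
print(recovered == secret)
```

The commutativity is what makes the double lock work: the combined exponent e_server·e_client·d_server·d_client is congruent to 1 modulo P-1, so the original message reappears only after both locks are removed.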
Procedia PDF Downloads 333
3012 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data.
The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacity are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content, and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
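The distinctiveness measure described above, nearest-neighbor Euclidean distance in latent space, can be sketched as follows. The 3-D latent vectors are toy stand-ins for the autoencoder's much higher-dimensional representations; a real implementation would use a vectorized nearest-neighbor search.

```python
# Minimal sketch of latent-space distinctiveness: for each latent vector, the
# Euclidean distance to its nearest neighbor among all other vectors.
import math

def distinctiveness(latents):
    """Nearest-neighbor Euclidean distance for every latent vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(dist(v, other) for j, other in enumerate(latents) if j != i)
            for i, v in enumerate(latents)]

latents = [
    (0.0, 0.0, 0.0),   # inside a tight cluster of typical images
    (0.1, 0.0, 0.0),   # inside the same cluster
    (5.0, 5.0, 5.0),   # an outlier: a highly 'distinct' image
]
scores = distinctiveness(latents)
print([round(s, 2) for s in scores])  # the outlier gets the largest score
```

Under the study's hypothesis, the third vector's large score is the signature of a memorable image: its representation sits far from everything the autoencoder finds typical.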
Procedia PDF Downloads 91
3011 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Also, because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to develop the best method for detecting normal signals versus abnormalities. The data come from both genders, and the recording time varies between several seconds and several minutes. All data are also labeled normal or abnormal. Due to the low positional accuracy and time limit of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify the normal signals from abnormal ones.
To evaluate the efficiency of the classifiers proposed in this paper, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals gave better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the extent of these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but its accuracy is limited by time and some of its information is hidden from the viewpoint of physicians; the intelligent system proposed in this paper can therefore help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
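The HRV feature pipeline described in this abstract can be sketched end to end: derive R-R intervals from R-peak times, then compute a simple linear feature (SDNN, the standard deviation of intervals) and a successive-difference feature (RMSSD, which relates to the spread of points around the diagonal of the return map). The R-peak times below are hypothetical values in seconds, not data from the study.

```python
# Minimal sketch of HRV feature extraction from detected R-peak times.
import math

def rr_intervals(r_peaks):
    """R-R intervals (s) from a sorted list of R-peak times (s)."""
    return [b - a for a, b in zip(r_peaks, r_peaks[1:])]

def sdnn(rr):
    """Standard deviation of R-R intervals: a linear HRV feature."""
    m = sum(rr) / len(rr)
    return math.sqrt(sum((x - m) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive R-R differences: reflects the
    short-term scatter visible in the return (Poincare) map."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

peaks = [0.00, 0.80, 1.62, 2.41, 3.25, 4.02]  # hypothetical R-peak times (s)
rr = rr_intervals(peaks)
print([round(x, 2) for x in rr], round(sdnn(rr), 4), round(rmssd(rr), 4))
```

The return map itself plots each interval rr[i] against the next interval rr[i+1]; nonlinear descriptors of that scatter are the kind of feature the paper adds on top of the statistical ones.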
Procedia PDF Downloads 262
3010 Lithuanian Sign Language Literature: Metaphors at the Phonological Level
Authors: Anželika Teresė
Abstract:
In order to address issues in sign language linguistics, matters pertaining to maintaining a high quality of sign language (SL) translation, misconceptions about SL and deaf people, and awareness and understanding of the deaf community's heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and its inherent metaphors, which are created by using the phonological parameters: handshape, location, movement, palm orientation, and nonmanual features. The study covered in this presentation is twofold, involving both a micro-level analysis of metaphors in terms of phonological parameters as a sub-lexical feature and a macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in sign language literature across a range of SLs, and this study follows that practice. The presentation covers a qualitative analysis of 34 pieces of LSL literature. The analysis employs the ELAN software widely used in SL research. The aim is to examine how specific types of each phonological parameter are used to create metaphors in LSL literature and what metaphors are created. The results of the study show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. As the study shows, LSL poets metaphorically encode status by encoding another meaning in the same sign, which results in double metaphors. A metaphor of identity has also been determined. Notably, the poetic context has revealed that the latter metaphor can also be identified as a metaphor for life. The study goes on to note that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject.
Notably, the study has also detected locations, nonmanual features, and other parameter types never mentioned in previous SL research as being used for the creation of metaphors.
Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics
Procedia PDF Downloads 136
3009 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures
Authors: Mariem Saied, Jens Gustedt, Gilles Muller
Abstract:
We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures, with Ordered Read-Write Locks (ORWL) as the execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments
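Dido itself emits C code with ORWL as the back-end; as a language-neutral illustration of the computational pattern such a framework targets, the following is a minimal NumPy sketch of a Jacobi stencil sweep (the grid size, boundary values, and iteration count are arbitrary choices for this example, not anything prescribed by Dido):

```python
import numpy as np

def jacobi_step(grid):
    """One Jacobi sweep on a 2D grid: each interior point becomes the
    average of its four neighbors, using only the previous iteration's
    values (the traversal Dido's Jacobi mode generates code for)."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

# Fixed boundary of 1.0, interior initialized to 0; iterate toward the
# steady state (here, the constant harmonic function 1.0).
g = np.zeros((6, 6))
g[0, :] = g[-1, :] = g[:, 0] = g[:, -1] = 1.0
for _ in range(200):
    g = jacobi_step(g)
```

In a distributed setting, each ORWL task would own a block of this grid and the framework would insert the halo exchanges and synchronization that this single-node sketch omits.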
Procedia PDF Downloads 127
3008 A Theoretical Study on Pain Assessment through Human Facial Expression
Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee
Abstract:
Facial expressions are an undeniable part of human behavior. They are a significant channel for human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expression that are readily observable to others. A core set of facial actions is likely to occur, or to increase in intensity, when people are in pain. To describe these changes in facial appearance, a system known as the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a small set of such actions carries the bulk of information about pain; on this basis, the Prkachin and Solomon pain intensity (PSPI) metric is defined. It is thus important to note that facial expressions, being a behavioral source in communication media, provide an important opening into the issues of non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, i.e., facial expression. This paper presents a comprehensive study on pain assessment that can model and estimate the intensity of the pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological viewpoint and a pain intensity score using the PSPI metric in pain estimation. The paper provides an in-depth analysis of the different approaches used in pain estimation and presents the observations found for each technique. It also offers a brief study of the distinguishing features of real and fake pain.
The necessity of the study therefore lies in the emerging field of facial pain assessment in clinical settings.
Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)
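The PSPI metric mentioned in the abstract combines FACS action-unit (AU) intensities into a single score; a minimal sketch, assuming the commonly cited formulation PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43 (readers should verify the exact coding against Prkachin and Solomon's paper):

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Prkachin and Solomon Pain Intensity from FACS action-unit
    intensities: brow lowering (AU4), orbit tightening (AU6/AU7),
    levator contraction (AU9/AU10), each coded 0-5, plus eye closure
    (AU43, coded 0/1).  Range: 0 (no pain) to 16 (maximal display)."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

pspi(0, 0, 0, 0, 0, 0)  # neutral face -> 0
pspi(5, 5, 4, 5, 3, 1)  # strong pain display -> 16
```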
Procedia PDF Downloads 346
3007 Sociolinguistic Aspects and Language Contact, Lexical Consequences in Francoprovençal Settings
Authors: Carmela Perta
Abstract:
In Italy, the coexistence of the standard language, its varieties, and different minority languages, both historical and migration languages, has made it possible to study language contact in different directions; the focus of most studies is either the relations among the languages of the social repertoire or the contact phenomena occurring at a particular structural level. However, studies of contact facts in relation to the sociolinguistic situation of a given speech community are still absent from the literature. As regards the language level to investigate from the perspective of contact, it is commonly claimed that the lexicon is the most volatile part of language and the most likely to undergo change due to superstrate influence; indeed, lexical features are borrowed first, and then, under long-term cultural pressure, structural features may also be borrowed. The aim of this paper is to analyse language contact in two historical minority communities where Francoprovençal is spoken, in relation to their sociolinguistic situations. In this perspective, firstly, the lexical borrowings present in the speakers' speech production will be examined, in an attempt to find a possible correlation between this part of the lexicon and the informants' sociolinguistic variables; secondly, a possible correlation between a particular community's sociolinguistic situation and lexical borrowing will be sought. The data were collected from 24 speakers in each of the two villages; the speaker group in each community consisted of 3 males and 3 females in each of four age groups, ranging in age from 9 to 85, and was further divided into five groups according to occupation. Speakers were asked to describe a sequence of pictures naming common objects and then to describe scenes in which they used these objects: these are common, frequently named objects belonging to semantic areas that are usually resistant to borrowing and thought to survive.
A subset of this task, involving 19 items with an Italian source, is examined here: in order to determine the significance of the independent variables (social factors) on the dependent variable (lexical variation), the statistical package SPSS, particularly its linear regression procedure, was used.
Keywords: borrowing, Francoprovençal, language change, lexicon
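The regression step described above can be sketched outside SPSS as well; the following is a minimal ordinary-least-squares illustration in NumPy, with entirely invented data (one row per speaker, numeric codes for age and occupation group, borrowing counts as the response — none of these values come from the study):

```python
import numpy as np

# Hypothetical data: predictors are age group (1-4) and occupation
# group (1-5); the response is the count of Italian borrowings a
# speaker produced in the 19-item picture task.
X = np.array([[1, 1], [1, 2], [2, 2], [2, 3],
              [3, 4], [3, 5], [4, 4], [4, 5]], dtype=float)
y = np.array([3., 4., 6., 7., 10., 12., 12., 14.])

# Ordinary least squares with an intercept column, as a linear
# regression would fit: y = b0 + b1*age + b2*occupation.
A = np.column_stack([np.ones(len(X)), X])
coef, residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b_age, b_occ = coef
```

Positive fitted coefficients would indicate that borrowing counts rise with the coded social factors; significance testing (as SPSS reports) would additionally require standard errors, which this sketch omits.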
Procedia PDF Downloads 372
3006 Role of Symbolism in the Journey towards Spirituality: A Case Study of Mosque Architecture in Bahrain
Authors: Ayesha Agha Shah
Abstract:
The purpose of a mosque, or any place of worship, is to build a spiritual relation with God. If the sense of spirituality is not achieved, the sacred architecture appears to lack depth. Form and space play a significant role in enhancing the architectural quality that imparts a divine feel to a place. To achieve this divine feeling, form and space, and the unity of opposites, whether abstract or symbolic, can be employed. It is challenging to imbue the emptiness of a space with qualitative experience. Mosque architecture mostly entails traditional forms and design typology. This approach to Muslim worship produces distinct landmarks in the urban neighborhoods of Muslim societies while creating a great sense of spirituality. The universal symbolic characters of mosque architecture followed prototypical geometrical forms for a long time in history. Modern mosques, however, have deviated from this approach, employing different built elements and symbolism that are often hard to identify as related to mosques, or even as Islamic. This research aims to explore the sense of spirituality in modern mosques and questions whether the modification of geometrical features produces spirituality in the same manner. The research also seeks to investigate the role of geometry in modern mosque architecture. It employs an analytical study of some modern mosque examples in the Kingdom of Bahrain, reflecting on the geometry and symbolism adopted in the new mosque architecture, and buttresses the analysis with people's perceptions derived from an opinion survey. The research expects to confirm the significance of geometrical architectural elements in mosque design. It will seek answers to questions such as: what are the roles of the form of the mosque and its interior spaces, and what is the effect of the modified symbolic features in modern mosque design?
How can the symbolic geometry, forms, and spaces of a mosque invite a believer to leave the worldly environment behind and move towards spirituality?
Keywords: geometry, mosque architecture, spirituality, symbolism
Procedia PDF Downloads 115
3005 Sociolinguistic and Classroom Functions of Using Code-Switching in CLIL Context
Authors: Khatuna Buskivadze
Abstract:
The aim of the present study is to investigate the sociolinguistic and classroom functions and the frequency of teacher code-switching (CS) in Content and Language Integrated Learning (CLIL) lessons. Nowadays, Georgian society strives to become part of the European world, and the English language itself plays a role in forming new generations with European values. According to our research conducted in 2019, out of all 114 private schools in Tbilisi, full CLIL programs are taught in 7 schools, while only some subjects are taught using CLIL in 3 schools. The goal of that earlier research was to define the features of CLIL methodology in the process of teaching English, using the example of Georgian private high schools. Taking the Georgian reality and cultural features into account, a modified version of the questionnaire based on the classification of CS use in the ESL classroom proposed by Ferguson (2009) was used. The qualitative research revealed students' and the teacher's attitudes towards the teacher's code-switching in the CLIL lesson. Both qualitative and quantitative research were conducted: observations of the teacher's lessons (recordings of the teacher's online lessons), an interview, and a questionnaire among the Math teacher's 20 high school students. We came to several conclusions, some of which are given here: the Math teacher's CS behavior mostly serves (1) the conversational function of interjection and (2) the classroom functions of introducing unfamiliar materials and topics, explaining difficult concepts, and maintaining classroom discipline and the structure of the lesson. The teacher and 13 students have negative attitudes towards using only Georgian in teaching Math. The higher the level of English, the more negative the attitude towards using Georgian in the classroom. Although all the students were Georgian, their competence in English is higher than in Georgian; therefore, they consider English an inseparable part of their identities.
The overall results of the case study of teaching Math (educational discourse) in one of the private schools in Tbilisi will be presented at the conference.
Keywords: attitudes, bilingualism, code-switching, CLIL, conversation analysis, interactional sociolinguistics
Procedia PDF Downloads 161
3004 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and more precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to enormous cost and time. To overcome these limitations, the idea of predicting OCRs from whole genome sequencing (WGS) data is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, which predicts the most probable open chromatin regions from whole genome sequencing data. Our method applies signal processing to the sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph-cut optimization by linear programming, and clustering.
To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples in the ATAC-DB database. The percentage of overlap between the predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As is well established, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, which showed an accordance of around 52.04% with all genes and ~78% with housekeeping genes, respectively. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector, with some restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the consideration of multiple features. In contrast, we implemented a graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
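The normalization and Fourier-filtering steps of such a depth-signal pipeline can be illustrated in a greatly simplified form. This sketch is not the authors' algorithm: the graph construction, graph-cut optimization, and correlation clustering are replaced here by a plain threshold, and all numbers (bin depths, dip position, filter cutoff) are invented for the example:

```python
import numpy as np

# Hypothetical per-bin cfDNA sequencing depth along a genomic window,
# with a simulated local coverage dip standing in for a candidate
# open chromatin region.
rng = np.random.default_rng(0)
depth = 100 + rng.normal(0, 3, 512)
depth[200:260] -= 30          # simulated dip at bins 200-260

# Count normalization followed by a Discrete Fourier Transform
# low-pass filter, echoing the normalization + DFT steps above.
norm = depth / depth.mean()
spec = np.fft.rfft(norm)
spec[20:] = 0                  # keep only low-frequency components
smooth = np.fft.irfft(spec, n=len(norm))

# Two-way labeling (region+ / region-) by thresholding the smoothed
# signal; the paper instead solves a graph cut by linear programming
# and clusters the result.
labels = smooth < smooth.mean() - smooth.std()
```

Bins inside the simulated dip end up labeled region+, the flat baseline region-; the real method replaces the threshold with an optimization that is robust to the technical and biological variation noted above.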
Procedia PDF Downloads 150
3003 Enhancing Health Information Management with Smart Rings
Authors: Bhavishya Ramchandani
Abstract:
A smart ring is a small electronic device worn on the finger. It incorporates mobile technology and has features that make it simple to use. These gadgets, which resemble conventional rings and are usually made to fit on the finger, are outfitted with features including access management, gesture control, mobile payment processing, and activity tracking. A poor sleep pattern, an irregular schedule, and bad eating habits are among the health problems that many people face today. Diets lacking fruits, vegetables, legumes, nuts, and whole grains are common, and individuals in India also experience metabolic issues. In the medical field, smart rings can help patients with stomach illnesses and with the inability to consume meals tailored to their bodies' needs. The smart ring tracks bodily functions, including blood sugar and glucose levels, and presents the information instantly. Based on this data, the ring generates insights and a workable diet plan suited to the body. In addition, we conducted focus groups and individual interviews as part of our core approach and discussed the difficulties participants have maintaining the right diet, as well as whether the smart ring would be beneficial to them. Everyone was enthusiastic about and supportive of the concept of using smart rings in healthcare, believing that these rings may assist them in maintaining their health and keeping a well-balanced diet plan. This response came from the primary data; working on the Emerging Technology Canvas Analysis of smart rings in healthcare has also led to a significant improvement in our understanding of the technology's application in the medical field. It is believed that there will be a growing demand for smart health care as people become more conscious of their health.
Most individuals are expected to adopt this ring within three to four years, as demand increases, and it will significantly impact their daily lives.
Keywords: smart ring, healthcare, electronic wearable, emerging technology
Procedia PDF Downloads 64
3002 An Efficient Emitting Supramolecular Material Derived from Calixarene: Synthesis, Optical and Electrochemical Features
Authors: Serkan Sayin, Songul F. Varol
Abstract:
High attention has been paid to organic light-emitting diodes since their efficient properties in flat panel displays and solid-state lighting were realized. Because of their highly efficient electroluminescence and brightness, and their eminence across the emission range, organic light-emitting diodes have been preferred over materials based on liquid crystals. Calixarenes, obtained from the reaction of p-tert-butylphenol and formaldehyde in a suitable base, have been used in various research areas such as catalysis, enzyme immobilization and applications, ion carriers, sensors, nanoscience, etc. In addition, their tremendous frameworks, as well as their easy functionalization, make them effective candidates in applied chemistry. Herein, a calix[4]arene derivative has been synthesized, and its structure has been fully characterized using Fourier transform infrared spectroscopy (FTIR), proton nuclear magnetic resonance (¹H-NMR), carbon-13 nuclear magnetic resonance (¹³C-NMR), liquid chromatography-mass spectrometry (LC-MS), and elemental analysis. The calixarene derivative has been employed as the emitting layer in the fabrication of organic light-emitting diodes. The optical and electrochemical features of the calixarene-containing organic light-emitting diode (Clx-OLED) have also been examined. The results showed that the Clx-OLED exhibited blue emission and high external quantum efficiency. In conclusion, the results indicate that the synthesized calixarene derivative is a promising chromophore with an efficient fluorescence quantum yield, making it an attractive candidate for fabricating effective materials for fluorescent probes and labeling studies. This study was financially supported by the Scientific and Technological Research Council of Turkey (TUBITAK Grant no. 117Z402).
Keywords: calixarene, OLED, supramolecular chemistry, synthesis
Procedia PDF Downloads 253
3001 System Identification of Building Structures with Continuous Modeling
Authors: Ruichong Zhang, Fadi Sawaged, Lotfi Gargab
Abstract:
This paper introduces a wave-based approach for the system identification of high-rise building structures from a pair of seismic recordings, which can be used to evaluate structural integrity and detect damage in post-earthquake structural condition assessment. The approach is founded on the wave features of generalized impulse and frequency response functions (GIRF and GFRF), i.e., the wave responses at one structural location to an impulsive motion at another, reference location, in the time and frequency domains respectively. With a pair of seismic recordings at the two locations, the GFRF is obtainable as the Fourier spectral ratio of the two recordings, and the GIRF is then found by the inverse Fourier transformation of the GFRF. With an appropriate continuous model for the structure, a closed-form solution for the GFRF, and subsequently the GIRF, can also be found in terms of wave transmission and reflection coefficients, which are related to the structural physical properties above the impulse location. Matching the two sets of GFRF and/or GIRF, from the recordings and from the model, helps identify structural parameters such as wave velocity or shear modulus. For illustration, this study examines the ten-story Millikan Library in Pasadena, California, with recordings of the Yorba Linda earthquake of September 3, 2002. The building is modelled as piecewise continuous layers, with which the GFRF is derived as a function of building parameters such as impedance, cross-sectional area, and damping. The GIRF can then be found in closed form for some special cases and numerically in general. Not only does this study reveal the influence of building parameters on the wave features of the GIRF and GFRF, it also shows some system-identification results, which are consistent with other vibration- and wave-based results. Finally, the paper discusses the effectiveness of the proposed model in system identification.
Keywords: wave-based approach, seismic responses of buildings, wave propagation in structures, construction
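The recording-side computation described above — GFRF as the Fourier spectral ratio of two recordings, GIRF as its inverse transform — can be sketched directly. The small regularization term and the toy delayed-signal check below are illustrative additions, not part of the paper's method:

```python
import numpy as np

def gfrf_girf(response, reference, eps=1e-12):
    """Generalized frequency response function as the Fourier spectral
    ratio of a recording at one structural location to the reference
    recording, and the corresponding generalized impulse response via
    the inverse transform (eps guards against division by zero)."""
    num = np.fft.rfft(response)
    den = np.fft.rfft(reference)
    gfrf = num / (den + eps)
    girf = np.fft.irfft(gfrf, n=len(response))
    return gfrf, girf

# Toy check: if the 'roof' record is the base record delayed by k
# samples, the GIRF peaks at lag k -- the wave travel time that the
# matching procedure would convert into a wave velocity.
k = 7
base = np.random.default_rng(1).normal(size=256)
roof = np.roll(base, k)
_, girf = gfrf_girf(roof, base)
```

For real seismic records one would additionally window, detrend, and smooth the spectra before taking the ratio; the sketch keeps only the core spectral-ratio step.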
Procedia PDF Downloads 233
3000 The Representation of Migrants in the UK and Saudi Arabia Press: A Cross-Linguistic Discourse Analysis Study
Authors: Eman Alatawi
Abstract:
The world is currently experiencing an upsurge in the number of international migrants, which has reached 281 million worldwide; in particular, both the UK and Saudi Arabia have recently been faced with unprecedented numbers of immigrants. As a result, the media in these two countries constantly posts news about the issue, and newspapers, in particular, play a vital role in shaping the public's view of immigration. Because the media is an influential tool in society, it has the ability to construct a specific image of migrants and influence public opinion concerning immigrant groups. However, most existing studies have addressed the plight of migrants in the UK, Europe, and the US, and few have considered the Middle East; specifically, there is a pressing need for studies that focus on the press in Saudi Arabia, one of the main countries experiencing immigration at a tremendous rate. This paper employs critical discourse analysis (CDA) to examine the depiction of migrants in the British and Saudi Arabian media in order to explore the involvement of three linguistic features in the media's representation of migrant-related topics. These linguistic features are the names, metaphors, and collocations that the press in the UK and in Saudi Arabia uses to describe migrants; the impact of these depictions is also considered. This comparative study could create a better understanding of how the Saudi Arabian press presents the topic of migrants and immigration, which will assist in extending the understanding of migration discourses beyond an Anglo-centric viewpoint.
The main finding of this study was that both British and Saudi Arabian newspapers tended to paint migrants in a negative light through the use of negative references or names, metaphors, and collocations; furthermore, the media's negative stereotyping of migrants was found to be consistent, which could influence public opinion of these minority groups. Such observations show that the issue is not reducible to individual journalists, press systems, or political affiliations.
Keywords: representation, migrants, the UK press, Saudi Arabia press, cross-linguistic, discourse analysis
Procedia PDF Downloads 80
2999 Syntax and Words as Evolutionary Characters in Comparative Linguistics
Authors: Nancy Retzlaff, Sarah J. Berkemer, Trudie Strauss
Abstract:
In the last couple of decades, the digitalization of all kinds of data was probably one of the major advances in every field of study. This paves the way for analysing such data even when they come from disciplines with no initial computational necessity to do so. Linguistics, in particular, has a rather manual tradition. Still, when considering studies that involve the history of language families, it is hard to overlook the striking similarities to bioinformatic (phylogenetic) approaches. Alignments of words are a fairly well studied example of the application of bioinformatics methods to historical linguistics. In this paper we consider not only alignments of strings, i.e., words in this case, but also alignments of the syntax trees of selected Indo-European languages. Based on initial, crude alignments, a sophisticated scoring model is trained on both letters and syntactic features. The aim is to gain a better understanding of which features in two languages are related, i.e., most likely to share the same root. Initially, all words in the two languages are pre-aligned with a basic scoring model that primarily matches consonants and adjusts them before fitting in the vowels. Mixture models are subsequently used to filter 'good' alignments depending on the alignment length and the number of inserted gaps. Using these selected word alignments, it is possible to perform tree alignments of the given syntax trees and consequently find sentences that correspond well to each other across languages. The syntax alignments are then filtered for meaningful scores: 'good' scores contain evolutionary information and are therefore used to train the sophisticated scoring model. Further iterations of alignment and training steps are performed until the scoring model saturates, i.e., barely changes anymore.
An evaluation of the trained scoring model and its capacity to capture evolutionarily meaningful information will be given, along with an assessment of the sentence alignments compared to possible phrase structures. The method described here may have its flaws because of limited prior information. It may, however, offer a good starting point for studying languages where only little prior knowledge is available and a detailed, unbiased study is needed.
Keywords: alignments, bioinformatics, comparative linguistics, historical linguistics, statistical methods
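The consonant-first pre-alignment step can be illustrated with a toy global aligner in the Needleman-Wunsch style. The score values here (match, consonant bonus, gap, mismatch) are invented for the example; the paper instead trains its scoring model iteratively from the data:

```python
def align(a, b, match=2, cons_bonus=1, gap=-1, mismatch=-1):
    """Global (Needleman-Wunsch) alignment score of two words, with an
    extra bonus for matching consonants -- a toy version of the
    'consonants first' idea in the basic scoring model."""
    vowels = set("aeiou")
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = S[i - 1][0] + gap
    for j in range(1, m + 1):
        S[0][j] = S[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                s = match + (cons_bonus if a[i - 1] not in vowels else 0)
            else:
                s = mismatch
            S[i][j] = max(S[i - 1][j - 1] + s,
                          S[i - 1][j] + gap,
                          S[i][j - 1] + gap)
    return S[n][m]

# Cognates share consonant skeletons, so they outscore unrelated pairs:
align("night", "nacht")  # shared n, h, t
align("night", "sole")   # no shared segments
```

Mixture-model filtering, as described above, would then keep only high-scoring pairs like the first one for the subsequent tree-alignment and retraining rounds.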
Procedia PDF Downloads 154
2998 Use of Smartphones in 6th and 7th Grade (Elementary Schools) in Istria: Pilot Study
Authors: Maja Ruzic-Baf, Vedrana Keteles, Andrea Debeljuh
Abstract:
Younger and younger children now use smartphones, devices which have become 'a must have' and without which the life of children would be almost 'unthinkable'. Devices are becoming ever lighter while offering an array of options and applications, as well as unavoidable access to the Internet, without which they would be almost unusable. Numerous features, such as taking photographs, listening to music, searching for information on the Internet, accessing social networks, and using chat and messaging services, are only some of the many features offered by 'smart' devices. They have replaced the alarm clock, home phone, camera, tablet, and other devices, and their use and possession have become part of the everyday image of young people. Apart from the positive aspects, the use of smartphones also has some downsides. For instance, free time used to be spent in nature, playing, doing sports, or in other activities enabling children adequate psychophysiological growth and development; greater usage of smartphones during classes to check statuses on social networks, message friends, or play online games is one of the possible negative aspects of their use. Considering that the age of the population using smartphones is decreasing and that smartphones are no longer 'foreign' to children of pre-school age (smartphones are used at home, or in coffee shops or shopping centers while waiting for parents, often to play video games inappropriate to their age), particular attention must be paid to a very sensitive group: teenagers, who almost never separate from their 'pets'. This paper is divided into two sections, a theoretical and an empirical one.
The theoretical section gives an overview of the pros and cons of smartphone usage, while the empirical section presents the results of research conducted in three elementary schools regarding the usage of smartphones in general and, specifically, their usage during classes and breaks, to search for information on the Internet, and to check status updates and 'likes' on the Facebook social network.
Keywords: education, smartphone, social networks, teenagers
Procedia PDF Downloads 453
2997 Disaster Response Training Simulator Based on Augmented Reality, Virtual Reality, and MPEG-DASH
Authors: Sunho Seo, Younghwan Shin, Jong-Hong Park, Sooeun Song, Junsung Kim, Jusik Yun, Yongkyun Kim, Jong-Moon Chung
Abstract:
In order to cope effectively with large and complex disasters, disaster response training is needed. Recently, disaster response training led by the ROK (Republic of Korea) government has been implemented through a 4-year R&D project, which has several functions similar to the HSEEP (Homeland Security Exercise and Evaluation Program) of the United States, but also several distinct features. Due to the unpredictability and diversity of disasters, existing training methods have many limitations in providing experience in the efficient use of disaster incident response and recovery resources. The challenge is always to be as efficient and effective as possible using the limited human and material/physical resources available, given the time and environmental circumstances. To enable repeated training under diverse scenarios, an AR (Augmented Reality) and VR (Virtual Reality) combined simulator is under development. Unlike existing disaster response training, simulator-based training (which allows simultaneous multi-user training via remote login) is free from time and space constraints and can be repeated with different combinations of functions and disaster situations. There are related systems, such as ADMS (Advanced Disaster Management Simulator) developed by ETC Simulation and HLS2 (Homeland Security Simulation System) developed by Elbit Systems. However, the ROK government needs a simulator custom made for the country's environment and disaster types that also combines the latest information and communication technologies, including AR, VR, and MPEG-DASH (Moving Picture Experts Group - Dynamic Adaptive Streaming over HTTP).
In this paper, a new disaster response training simulator is proposed to overcome the limitations of existing training systems and to adapt to actual disaster situations in the ROK; several of its technical features are described.
Keywords: augmented reality, emergency response training simulator, MPEG-DASH, virtual reality
Procedia PDF Downloads 301
2996 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses
Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh
Abstract:
Real time non-invasive Brain Computer Interfaces (BCIs) have a significant and growing role in restoring or maintaining quality of life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities, such as facial expressions, speech, text, gestures, and human physiological responses, are also discussed. Particular focus is given to the ability of EEG (Electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol is proposed for designing an EEG-based real time BCI system for the analysis and classification of human emotions elicited by external audio/visual stimuli. The front-end hardware includes a cost effective and portable Emotiv EEG neuroheadset, a personal computer, and a set of external stimulators. Primary analysis and processing of the EEG acquired in real time shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a self-defined MATLAB algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools such as Artificial Neural Networks. The final system would provide an inexpensive, portable, and more intuitive real time Brain Computer Interface for controlling prosthetic devices by translating different brain states into operative control signals.
Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, emotive, emotions, interval features, spectral features, artificial neural network, control applications
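The spectral-feature extraction step described above can be sketched as follows. This is an illustrative Python sketch, not the authors' MATLAB/EEGLab pipeline; the band edges, sampling rate, and naive DFT are assumptions chosen for clarity.

```python
# Sketch (assumption, not the paper's implementation): extract spectral
# band-power features from an EEG-like epoch via a naive DFT. A real
# pipeline would use EEGLab/BCILab or an FFT with windowing.
import math

def band_power(signal, fs, f_lo, f_hi):
    """Mean squared DFT magnitude within [f_lo, f_hi] Hz (naive DFT)."""
    n = len(signal)
    power, count = 0.0, 0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
            count += 1
    return power / max(count, 1)

def spectral_features(signal, fs):
    # Conventional EEG bands (Hz): theta, alpha, beta
    return [band_power(signal, fs, lo, hi) for lo, hi in [(4, 8), (8, 13), (13, 30)]]

fs = 128  # Hz; typical consumer-headset sampling rate (assumption)
# Synthetic 1-second epoch dominated by a 10 Hz (alpha) oscillation
epoch = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
feats = spectral_features(epoch, fs)
assert feats[1] == max(feats)  # the alpha band carries the most power
```

In a full system, feature vectors like `feats` (one per channel and epoch) would be fed to a trained Artificial Neural Network to label the emotional state.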
Procedia PDF Downloads 317
2995 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulation is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies, stemming from diverse research hypotheses, have been proposed to safeguard DNNs against such attacks. Building upon prior work, our approach utilizes autoencoder models. Autoencoders are neural networks trained to learn representations of the training data and to reconstruct inputs from those representations, typically by minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibits high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation: we considered various image sizes, constructing separate models for 256x256 and 512x512 images. Moreover, the choice of computer vision model is crucial, as most adversarial attacks are designed with specific AI architectures in mind. To mitigate this, we propose a method that replaces image-specific dimensions with a structure independent of both image dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments on diverse datasets and subjected them to adversarial attacks generated against models such as ResNet50 and ViT_L_16 from the torchvision library.
The features extracted by the autoencoder were used in a classification model, yielding an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
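The core reconstruction-error idea in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's multi-modal transformer autoencoder: the clip-to-range "autoencoder" stub and the fixed threshold are stand-ins for a trained model and a threshold calibrated on benign reconstruction errors.

```python
# Sketch (assumption): flag an input as adversarial when the autoencoder's
# reconstruction MSE exceeds a threshold calibrated on benign data.

def mse(x, x_hat):
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

class ReconstructionDetector:
    def __init__(self, reconstruct, threshold):
        self.reconstruct = reconstruct  # the trained autoencoder's forward pass
        self.threshold = threshold      # e.g. a high percentile of benign MSEs

    def is_adversarial(self, x):
        return mse(x, self.reconstruct(x)) > self.threshold

# Hypothetical stand-in for a trained autoencoder: it "reconstructs" by
# clipping values back into the benign pixel range [0, 1].
reconstruct = lambda x: [min(max(v, 0.0), 1.0) for v in x]
detector = ReconstructionDetector(reconstruct, threshold=0.01)

benign = [0.2, 0.5, 0.8, 0.4]          # reconstructed exactly -> MSE 0
perturbed = [v + 0.6 for v in benign]  # pushed out of range -> high MSE
assert not detector.is_adversarial(benign)
assert detector.is_adversarial(perturbed)
```

The design point the sketch preserves is model-agnosticism: the detector only needs a reconstruction function and a scalar threshold, not the internals of the classifier under attack.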
Procedia PDF Downloads 113