Search results for: features comparison
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8718

7278 Alterations of Molecular Characteristics of Polyethylene under the Influence of External Effects

Authors: Vigen Barkhudaryan

Abstract:

The influence of external effects (γ- and UV-radiation, high temperature) in the presence of atmospheric oxygen on the structural transformations of low-density polyethylene (LDPE) was investigated as a function of the polymer thickness and the intensity and dose of the external actions. The methods of viscosimetry, light scattering, turbidimetry and gelation measurement were used for this purpose. A comparison of the influence of the external effects on LDPE shows that destruction and cross-linking of the macromolecules proceed simultaneously under all kinds of external effects. A remarkable growth of the average molecular mass of LDPE with increasing irradiation dose and heat-treatment exposure was established. The growth was linear for the mass-average molecular mass and, at the initial doses, is mainly the result of increased macromolecular branching. As a result, the macromolecular hydrodynamic volumes changed, and therefore the dependence of the viscosity-average molecular mass on dose passes through a minimum at the initial doses. A significant change in the molecular mass, size and shape of LDPE macromolecules occurs under the influence of external effects. The influence is limited only by the diffusion of oxygen during γ-irradiation and heat treatment. Under UV-irradiation the influence is limited both by the diffusion of oxygen and by the penetration of the radiation. Consequently, the molecular transformations are deeper and more evident in the case of γ-irradiation, since the polymer is transformed throughout its whole volume. It was also established that the mechanism of molecular transformations in the surface layer of the sample differs distinctly from that in the deeper layers. A comparison of the results of these investigations allows us to conclude that the mechanisms of influence of the investigated external effects on polyethylene are similar.

Keywords: cross-linking, destruction, high temperature, LDPE, γ-radiations, UV-radiations

Procedia PDF Downloads 318
7277 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between both the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. Memorability scores also correlate negatively with the reduction in reconstruction error achieved relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
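As an illustration of the analysis pipeline described above, the sketch below computes per-image reconstruction error, latent-space distinctiveness, and their correlations with memorability. It is a minimal sketch, not the authors' code: the arrays stand in for the VGG-based autoencoder's outputs and the MemCat scores, and all names are placeholders.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Synthetic stand-ins: flattened images, their reconstructions, latent codes,
# and MemCat-style memorability scores for N images.
N, D, K = 500, 784, 64
originals = rng.random((N, D))
reconstructions = originals + rng.normal(0, 0.1, (N, D))
latents = rng.normal(size=(N, K))
memorability = rng.random(N)

# Per-image reconstruction error (MSE between input and reconstruction).
errors = ((originals - reconstructions) ** 2).mean(axis=1)

# Distinctiveness: Euclidean distance to the nearest neighbor in latent space.
nn = NearestNeighbors(n_neighbors=2).fit(latents)
dist, _ = nn.kneighbors(latents)
distinctiveness = dist[:, 1]  # column 0 is each point's distance to itself

print("error vs. memorability:", pearsonr(errors, memorability))
print("distinctiveness vs. memorability:", pearsonr(distinctiveness, memorability))
```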

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 92
7276 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Moreover, because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers seeking the best method for detecting normal signals from abnormal ones. The data come from both genders, recording times vary from several seconds to several minutes, and every record is labeled normal or abnormal. Because of the limited recording time, the low positional accuracy of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used instead. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of great interest to experts. In the preprocessing stage, after noise cancellation with an adaptive Kalman filter and R-wave extraction with the Pan–Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, given the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, were applied to the distinctive features to classify normal signals from abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM was 0.893 and 0.947, respectively. The results also indicated that greater use of nonlinear characteristics in classifying normal versus patient signals yields better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the extent of these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some of this information remains hidden from the physician's viewpoint within the limited available time, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
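A minimal sketch of the HRV feature extraction outlined above is given below, assuming R-peak detection (Pan–Tompkins) and Kalman denoising have already been done; the synthetic R-peak indices and sampling rate are illustrative placeholders, not the PhysioNet data.

```python
import numpy as np

fs = 360.0  # sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)
r_peaks = np.cumsum(rng.normal(0.8, 0.05, 300)) * fs  # synthetic R-peak sample indices

rr = np.diff(r_peaks) / fs  # R-R intervals in seconds -> the HRV signal

# Linear (statistical) features.
sdnn = rr.std(ddof=1)                       # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # short-term variability

# Nonlinear features from the return (Poincare) map of RR[n] vs. RR[n+1].
diff_rr = np.diff(rr)
sd1 = np.sqrt(0.5 * diff_rr.var(ddof=1))      # width of the Poincare cloud
sd2 = np.sqrt(2 * rr.var(ddof=1) - sd1 ** 2)  # length of the Poincare cloud

features = [sdnn, rmssd, sd1, sd2, sd1 / sd2]  # input vector for an MLP or SVM
print(features)
```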

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 264
7275 Cost-Benefit Analysis for the Optimization of Noise Abatement Treatments at the Workplace

Authors: Paolo Lenzuni

Abstract:

The cost-effectiveness of noise abatement treatments at the workplace has not yet received adequate consideration. Furthermore, most of the published work focuses on productivity, despite the poor correlation of this quantity with noise levels. There is currently no tool to estimate the social benefit associated with a specific noise abatement treatment, and accordingly no comparison among different options is possible. In this paper, we present an algorithm developed to predict the cost-effectiveness of any planned noise control treatment in a workplace. The algorithm is based on the estimates of hearing threshold shifts included in ISO 1999 and on the compensations that workers are entitled to once their work-related hearing impairments have been certified. The benefits of a noise abatement treatment are estimated by means of the lower compensation costs paid to impaired workers. Although such benefits have no real meaning in strictly monetary terms, they allow a reliable comparison between different treatments, since actual social costs can be assumed to be proportional to compensation costs. The existing European legislation on occupational exposure to noise mandates that the noise exposure level be reduced below the upper action limit (85 dBA). There is accordingly little or no motivation for employers to sustain the extra costs required to lower the noise exposure below the lower action limit (80 dBA). In order to make this goal more appealing to employers, the algorithm proposed in this work also includes an ad-hoc element that promotes actions which bring the noise exposure below 80 dBA. The algorithm has a twofold potential: 1) it can be used as a quality index to promote cost-effective practices; 2) it can be added to the existing criteria used by workers' compensation authorities to evaluate the cost-effectiveness of technical actions, and so support dedicated employers.
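The scoring logic can be sketched as follows. This is a hedged illustration of the cost-benefit idea described above, not the author's algorithm: the ISO 1999-based impairment model is replaced by a hypothetical placeholder dose-response, and all monetary figures are invented for the example.

```python
def expected_compensation(exposure_dba, n_workers, cost_per_case=50_000.0):
    # Placeholder for the ISO 1999-based estimate of certified hearing
    # impairments and the resulting compensation payments (hypothetical).
    risk = max(0.0, (exposure_dba - 80.0) * 0.01)  # invented dose-response
    return risk * n_workers * cost_per_case

def treatment_score(before_dba, after_dba, n_workers, treatment_cost,
                    bonus_below_lower_limit=1.2):
    # Benefit = avoided compensation; an ad-hoc bonus promotes treatments
    # that bring exposure below the lower action limit (80 dBA).
    benefit = (expected_compensation(before_dba, n_workers)
               - expected_compensation(after_dba, n_workers))
    if after_dba < 80.0:
        benefit *= bonus_below_lower_limit
    return benefit / treatment_cost  # cost-effectiveness index

print(treatment_score(88.0, 79.0, n_workers=40, treatment_cost=60_000.0))
```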

Keywords: cost-effectiveness, noise, occupational exposure, treatment

Procedia PDF Downloads 324
7274 Lithuanian Sign Language Literature: Metaphors at the Phonological Level

Authors: Anželika Teresė

Abstract:

In order to address issues in sign language linguistics, help maintain a high quality of sign language (SL) translation, contribute to dispelling misconceptions about SL and deaf people, and raise awareness and understanding of the deaf community's heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and its inherent metaphors, which are created using the phonological parameters: handshape, location, movement, palm orientation and nonmanual features. The study covered in this presentation is twofold, involving both micro-level analysis of metaphors in terms of phonological parameters as sub-lexical features and macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in the literature of a range of SLs, and this study follows that practice. The presentation covers a qualitative analysis of 34 pieces of LSL literature. The analysis employs the ELAN software widely used in SL research. The aim is to examine how specific types of each phonological parameter are used to create metaphors in LSL literature and what metaphors are created. The results show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. As the study shows, LSL poets metaphorically encode status by encoding another meaning in the same sign, which results in double metaphors. A metaphor of identity has also been identified; notably, the poetic context reveals that the latter metaphor can also be read as a metaphor for life. The study goes on to note that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject. Notably, the study detected locations, nonmanual features and other parameter types never mentioned in previous SL research as being used for the creation of metaphors.

Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics

Procedia PDF Downloads 137
7273 Contribution to the Dimensioning of the Energy Dissipation Basin

Authors: M. Aouimeur

Abstract:

The environmental risks of a dam, and particularly safety in the valley downstream of it, constitute a very complex problem. Integrated management and risk-sharing are becoming more and more indispensable. The definition of the 'vulnerability' concept can help in assessing the efficiency of protective measures and in characterizing each valley with respect to flood risk. Safety can be enhanced through integrated land management. The social sciences may be associated with the operational systems of civil protection, in particular warning networks. The passage of extreme floods at the dam site can cause the rupture of the structure and important damage downstream of the dam. The river bed can be damaged by erosion if it is not well protected, and scouring and flooding problems may also be encountered in the area downstream of the dam. Therefore, the protection of the dam is crucial: it must have an energy dissipator in a specific place. The dissipation basin plays a very important role for the safety of the dam and the protection of the environment against floods downstream of it. It makes it possible to dissipate the potential energy created by the dam during the passage of the extreme flood over the weir, to regulate, in a natural manner and with greater safety, the discharge or the elevation of the water surface over the crest of the weir, and to reduce the flow velocity downstream of the dam to a velocity compatible with the river bed. The problem of dimensioning a classic dissipation basin lies in determining the parameters necessary for the dimensioning of this structure. This communication presents a simple, fast and complete graphical method, and a methodology that determines the main features of the hydraulic jump, the parameters necessary for sizing the classic dissipation basin. This graphical method takes into account the constraints imposed by the terrain or by practice, such as those related to the topography of the site, the preservation of environmental equilibrium, and technical and economic considerations. The methodology consists of imposing the head loss ΔH dissipated by the hydraulic jump as a design hypothesis (free design) in order to determine all the other parameters of the classic dissipation basin. The imposed head loss ΔH can be set equal to a selected value or to a certain percentage of the upstream total head created by the dam. With the dimensionless parameter ΔH⁺ = ΔH/k (k: critical depth), the elaborated graphical representation allows the other parameters to be found; multiplying these parameters by k gives the main characteristics of the hydraulic jump, the parameters necessary for the dimensioning of the classic dissipation basin. This solution is often preferred for sizing the dissipation basins of small concrete dams. Verification of the results and their comparison with practical data confirm the validity and reliability of the elaborated graphical method.
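The hydraulic-jump relations underlying the sizing method can be sketched numerically. The formulas below are the standard rectangular-channel relations (sequent depth from the upstream Froude number, head loss across the jump, critical depth), not the author's charts; the operating point is illustrative.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def jump_characteristics(q, h1):
    """Characteristics of a hydraulic jump in a rectangular basin.

    q  : discharge per unit width (m^2/s)
    h1 : supercritical depth upstream of the jump (m)
    """
    v1 = q / h1
    fr1 = v1 / math.sqrt(g * h1)                     # upstream Froude number
    h2 = 0.5 * h1 * (math.sqrt(1 + 8 * fr1**2) - 1)  # sequent (conjugate) depth
    dH = (h2 - h1) ** 3 / (4 * h1 * h2)              # head loss across the jump
    k = (q**2 / g) ** (1 / 3)                        # critical depth
    return fr1, h2, dH, dH / k                       # dH/k is the text's ΔH⁺

fr1, h2, dH, dH_plus = jump_characteristics(q=5.0, h1=0.6)
print(f"Fr1={fr1:.2f}, h2={h2:.2f} m, dH={dH:.2f} m, dH+={dH_plus:.2f}")
```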

Keywords: dimensioning, energy dissipation basin, hydraulic jump, protection of the environment

Procedia PDF Downloads 584
7272 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures

Authors: Mariem Saied, Jens Gustedt, Gilles Muller

Abstract:

We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
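For illustration, the two grid traversals mentioned above can be sketched as follows. This is a plain NumPy illustration of the stencil semantics only, not Dido's generated code, which is C with ORWL as the communication back-end.

```python
import numpy as np

def jacobi_sweep(grid):
    # Jacobi traversal: every update reads only the previous iteration,
    # so the whole sweep can run in parallel from a copy of the grid.
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

def gauss_seidel_sweep(grid):
    # Gauss-Seidel traversal: updates are applied in place, so each point
    # already sees the new values of its upper/left neighbors.
    for i in range(1, grid.shape[0] - 1):
        for j in range(1, grid.shape[1] - 1):
            grid[i, j] = 0.25 * (grid[i - 1, j] + grid[i + 1, j] +
                                 grid[i, j - 1] + grid[i, j + 1])
    return grid

grid = np.zeros((64, 64))
grid[0, :] = 1.0  # boundary condition
for _ in range(100):
    grid = jacobi_sweep(grid)
```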

Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments

Procedia PDF Downloads 129
7271 A Theoretical Study on Pain Assessment through Human Facial Expression

Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee

Abstract:

Facial expressions are an undeniable part of human behavior. They are a significant channel for human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expression that are readily observable to others. A core set of facial actions is likely to occur, or to increase in intensity, when people are in pain. To describe these changes in facial appearance, the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a subset of such actions carries the bulk of the information about pain; on this basis, the Prkachin and Solomon pain intensity (PSPI) metric is defined. It is therefore important to note that facial expressions, as a behavioral channel of communication, provide an important opening into the issues of non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, namely facial expression. This paper presents a comprehensive study of pain assessment approaches that can model and estimate the intensity of the pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological viewpoint and a pain intensity score based on the PSPI metric in pain estimation. The paper provides an in-depth analysis of the different approaches used in pain estimation and presents the observations made for each technique. It also offers a brief study of features distinguishing real from faked pain. The necessity of the study lies in the emerging field of painful-face assessment in clinical settings.
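For concreteness, the PSPI computation can be sketched as follows; the formula reflects Prkachin and Solomon's published definition as we understand it (brow lowering plus orbital tightening plus levator contraction plus eye closure), with AU intensities assumed to come from a FACS coder or an automatic detector.

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Pain intensity from FACS action unit (AU) intensities (0-5 scale,
    except AU43, which is binary eye closure)."""
    return (au4                 # brow lowering
            + max(au6, au7)     # orbital tightening
            + max(au9, au10)    # levator contraction
            + au43)             # eye closure

# Example frame: moderate brow lowering and cheek raise, eyes open.
print(pspi(au4=2, au6=3, au7=1, au9=0, au10=1, au43=0))  # -> 6
```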

Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)

Procedia PDF Downloads 348
7270 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications

Authors: H. Hruschka

Abstract:

This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by the log likelihood of the holdout data. The performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model – which is better than latent Dirichlet allocation – is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000. Overall, the deep belief net performs best. We also interpret the hidden variables discovered by binary factor analysis, the restricted Boltzmann machine and the deep belief net. The hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket-size increases due to a promotion. Of course, recommendations based on better-performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research with appropriate extensions. Including predictors, especially marketing variables such as price, seems an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
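A minimal sketch of a restricted Boltzmann machine trained with one-step contrastive divergence (CD-1) on binary basket data is shown below; it illustrates the model described above rather than reproducing the author's estimation procedure, and synthetic baskets stand in for the grocery data set. Stacking a second RBM on the hidden activations of the first yields the deep belief net discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
baskets = (rng.random((1000, 169)) < 0.05).astype(float)  # stand-in data

n_visible, n_hidden, lr = baskets.shape[1], 10, 0.05
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(5):  # epochs
    for v0 in baskets:
        # Positive phase: hidden activations given the observed purchases.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Negative phase: one Gibbs step to reconstruct the basket.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # CD-1 gradient update.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)
```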

Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models

Procedia PDF Downloads 202
7269 Sociolinguistic Aspects and Language Contact, Lexical Consequences in Francoprovençal Settings

Authors: Carmela Perta

Abstract:

In Italy, the coexistence of the standard language, its varieties, and different minority languages - both historical and migration languages - has provided a way to study language contact from different directions; the focus of most studies is either the relations among the languages of the social repertoire or the study of contact phenomena occurring at a particular structural level. However, studies relating contact facts to the sociolinguistic situation of a given speech community are still absent from the literature. As regards the language level to investigate from the perspective of contact, it is commonly claimed that the lexicon is the most volatile part of language and the most likely to undergo change due to superstrate influence: first lexical features are borrowed, and then, under long-term cultural pressure, structural features may also be borrowed. The aim of this paper is to analyse language contact in two historical minority communities where Francoprovençal is spoken, in relation to their sociolinguistic situation. From this perspective, lexical borrowings present in speakers' speech production are first examined, looking for a possible correlation between this part of the lexicon and the informants' sociolinguistic variables; second, a possible correlation between a community's particular sociolinguistic situation and lexical borrowing is sought. Data were collected from 24 speakers in each of the two villages; the speaker group in each community consisted of 3 males and 3 females in each of four age groups, ranging in age from 9 to 85, further divided into five groups according to occupation. Speakers were asked to describe a sequence of pictures, naming common objects and then describing scenes in which they used these objects: common, frequently pronounced objects belonging to semantic areas that are usually resistant to change and thought to survive. A subset of this task, involving 19 items of Italian origin, is examined here: to determine the significance of the independent variables (social factors) on the dependent variable (lexical variation), the statistical package SPSS, particularly its linear regression module, was used.
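The study used SPSS; the sketch below reproduces the same design in Python for illustration, regressing a per-speaker borrowing rate on the social factors. The variable names and synthetic data are placeholders, not the fieldwork data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 24  # speakers in one village
data = pd.DataFrame({
    # Social factors (independent variables).
    "age": rng.integers(9, 86, n),
    "gender": rng.choice(["m", "f"], n),
    "occupation": rng.choice(["student", "farmer", "clerk", "retired", "other"], n),
    # Dependent variable: share of the 19 Italian-source items realized
    # as borrowings by each speaker.
    "borrowing_rate": rng.random(n),
})

model = smf.ols("borrowing_rate ~ age + C(gender) + C(occupation)", data=data).fit()
print(model.summary())  # significance of each social factor
```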

Keywords: borrowing, Francoprovençal, language change, lexicon

Procedia PDF Downloads 373
7268 Role of Symbolism in the Journey towards Spirituality: A Case Study of Mosque Architecture in Bahrain

Authors: Ayesha Agha Shah

Abstract:

The purpose of a mosque, as a place of worship, is to build a spiritual relation with God. If a sense of spirituality is not achieved, the sacred architecture appears to lack depth. Form and space play a significant role in enhancing the architectural quality that imparts a divine feel to a place. To achieve this divine feeling, form, space, and the unity of opposites, whether abstract or symbolic, can be employed. It is challenging to imbue the emptiness of a space with qualitative experience. Mosque architecture mostly follows traditional forms and design typologies. This approach to Muslim worship produces distinct landmarks in the urban neighborhoods of Muslim societies while creating a great sense of spirituality. For a long time in history, the universal symbolic characters of mosque architecture rested on prototypical geometrical forms. Modern mosques, however, have deviated from this approach and employ different built elements and symbolism, which are often hard to identify as related to mosques or even as Islamic. This research aims to explore the sense of spirituality in modern mosques and questions whether the modification of geometrical features produces spirituality in the same manner. It also seeks to investigate the role of geometry in modern mosque architecture. The research is based on an analytical study of some modern mosque examples in the Kingdom of Bahrain, reflecting on the geometry and symbolism adopted in new mosque design, and supports the analysis with people's perceptions elicited through an opinion survey. The research expects to establish the significance of geometrical architectural elements in mosque design. It will seek answers to questions such as: what is the role of the form of the mosque and its interior spaces, and what is the effect of the modified symbolic features in modern mosque design? How can the symbolic geometry, forms and spaces of a mosque invite a believer to leave the worldly environment behind and move towards spirituality?

Keywords: geometry, mosque architecture, spirituality, symbolism

Procedia PDF Downloads 115
7267 Sociolinguistic and Classroom Functions of Using Code-Switching in CLIL Context

Authors: Khatuna Buskivadze

Abstract:

The aim of the present study is to investigate the sociolinguistic and classroom functions and the frequency of teacher code-switching (CS) in the Content and Language Integrated Learning (CLIL) lesson. Georgian society is currently striving to become part of the European world, and the English language itself plays a role in forming new generations with European values. Based on our research conducted in 2019, out of all 114 private schools in Tbilisi, full CLIL programs are taught in 7 schools, while only some subjects are taught through CLIL in 3 schools. The goal of that earlier research was to define the features of CLIL methodology in the process of teaching English, using Georgian private high schools as an example. Taking Georgian realities and cultural features into account, a modified version of the questionnaire based on the classification of CS functions in the ESL classroom proposed by Ferguson (2009) was used. The qualitative research revealed students' and the teacher's attitudes towards teacher code-switching in the CLIL lesson. Both qualitative and quantitative research were conducted: observations of the teacher's lessons (recordings of the teacher's online lessons), an interview, and a questionnaire administered to the Math teacher's 20 high school students. We came to several conclusions, some of which are given here. The Math teacher's CS behavior mostly serves (1) the conversational function of interjection, and (2) the classroom functions of introducing unfamiliar materials and topics, explaining difficult concepts, and maintaining classroom discipline and the structure of the lesson. The teacher and 13 students have negative attitudes towards using only Georgian in teaching Math, and the higher the students' level of English, the more negative their attitude towards using Georgian in the classroom. Although all the students are Georgian, their competence in English is higher than in Georgian, and they therefore consider English an inseparable part of their identities. The overall results of the case study of teaching Math (educational discourse) in one of the private schools in Tbilisi will be presented at the conference.

Keywords: attitudes, bilingualism, code-switching, CLIL, conversation analysis, interactional sociolinguistics

Procedia PDF Downloads 162
7266 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the last decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue of origin of these DNA fragments from plasma can enable more accurate and faster disease diagnosis and more precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect the cell type of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. Several studies in the area of cancer liquid biopsy integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which entails substantial cost and time. To overcome these limitations, the idea of predicting OCRs from WGS data is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequencing data. To this end, local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method applies signal processing to the sequencing depth data and comprises count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions of human blood samples from the ATAC-DB database. The overlap between the predicted open chromatin regions and the experimentally validated ATAC-seq regions in ATAC-DB is greater than 67%, which indicates a meaningful prediction. As OCRs are mostly located at the transcription start sites (TSS) of genes, we also compared the predicted OCRs with human TSS regions obtained from refTSS, finding a concordance of around 52.04% with all genes and roughly 78% with housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, resulting in a tool named OCRDetector, which has some restrictions, such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the use of multiple features. In contrast, we implemented a graph signal clustering based on a single depth feature in an unsupervised learning manner, which results in faster performance and decent accuracy. Overall, we investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing data set for efficient liquid biopsy-related analysis.
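A minimal sketch of the depth-based pipeline is given below, with scikit-learn spectral clustering standing in for the paper's linear-programming graph cut; the bin size, window length, and synthetic depth track are illustrative placeholders.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
depth = rng.poisson(30, 4096).astype(float)  # stand-in cfDNA sequencing depth

# 1) Count normalization of local sequencing depth.
depth = (depth - depth.mean()) / depth.std()

# 2) DFT features over sliding windows (nucleosome-scale periodicity).
win = 256
windows = depth.reshape(-1, win)
spectra = np.abs(np.fft.rfft(windows, axis=1))[:, 1:16]  # low-frequency magnitudes

# 3) Correlation graph between windows.
corr = np.corrcoef(spectra)
affinity = np.clip(corr, 0, None)  # keep positive correlations as edge weights

# 4) Two-way clustering: open chromatin region (+) vs. (-).
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels[:20])
```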

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 152
7265 Enhancing Health Information Management with Smart Rings

Authors: Bhavishya Ramchandani

Abstract:

A smart ring is a small electronic device worn on the finger. It incorporates mobile technology and features that make it simple to use. These gadgets, which resemble conventional rings and are usually made to fit on the finger, are outfitted with features including access management, gesture control, mobile payment processing, and activity tracking. Poor sleep patterns, irregular schedules, and bad eating habits are among the health problems that many people face today. Diets lacking fruits, vegetables, legumes, nuts, and whole grains are common, and individuals in India also experience metabolic issues. In the medical field, smart rings can help patients with problems relating to stomach illnesses and the inability to consume meals tailored to their bodies' needs. The smart ring tracks bodily functions, including blood sugar and glucose levels, and presents the information instantly; based on these data, the ring generates insights and a practical diet plan suited to the body's needs. As part of our core approach, we conducted focus groups and individual interviews, discussing the difficulties participants have in maintaining the right diet and whether a smart ring would be beneficial to them. Participants were enthusiastic about and supportive of the concept of using smart rings in healthcare, believing that such rings may help them maintain their health and a well-balanced diet plan. This response came from the primary data; in addition, working on an Emerging Technology Canvas analysis of smart rings in healthcare significantly improved our understanding of the technology's application in the medical field. Demand for smart health care is expected to grow as people become more conscious of their health; within three to four years, as demand increases, the majority of individuals may adopt such a ring, and it will significantly affect their daily lives.

Keywords: smart ring, healthcare, electronic wearable, emerging technology

Procedia PDF Downloads 64
7264 An Efficient Emitting Supramolecular Material Derived from Calixarene: Synthesis, Optical and Electrochemical Features

Authors: Serkan Sayin, Songul F. Varol

Abstract:

Organic light-emitting diodes (OLEDs) have received high attention since their efficient properties in flat panel displays and solid-state lighting were realized. Because of their highly efficient electroluminescence, brightness, and wide emission range, organic light-emitting diodes have been preferred over materials based on liquid crystals. Calixarenes, obtained from the reaction of p-tert-butylphenol and formaldehyde in a suitable base, have been used in various research areas such as catalysis, enzyme immobilization and applications, ion carriers, sensors, and nanoscience. In addition, their tremendous frameworks, as well as their easy functionalization, make them effective candidates in applied chemistry. Herein, a calix[4]arene derivative has been synthesized and its structure fully characterized using Fourier-transform infrared spectroscopy (FTIR), proton nuclear magnetic resonance (¹H-NMR), carbon-13 nuclear magnetic resonance (¹³C-NMR), liquid chromatography-mass spectrometry (LC-MS), and elemental analysis. The calixarene derivative has been employed as an emitting layer in the fabrication of organic light-emitting diodes, and the optical and electrochemical features of the calixarene-containing organic light-emitting diode (Clx-OLED) have been investigated. The results showed that the Clx-OLED exhibited blue emission and high external quantum efficiency. In conclusion, the results indicate that the synthesized calixarene derivative is a promising chromophore with an efficient fluorescence quantum yield, which makes it an attractive candidate for fabricating effective materials for fluorescent probes and labeling studies. This study was financially supported by the Scientific and Technological Research Council of Turkey (TUBITAK Grant no. 117Z402).

Keywords: calixarene, OLED, supramolecular chemistry, synthesis

Procedia PDF Downloads 254
7263 WWSE School Development in German Christian Schools Revisited: Organizational Development Taken to a Test

Authors: Marco Sewald

Abstract:

WWSE School Development (Wahrnehmungs- und wertorientierte Schulentwicklung) comprises surveys of pupils, teachers and parents and enables schools to align their development to the requirements voiced by these three stakeholders. WWSE includes a derivative set of questions for Christian schools, meeting their specific needs. The research conducted here reflects contemporary questions on school development, examining how well the results of past surveys delivered by WWSE School Development have been implemented in Christian schools in Germany. The research focuses on questions of organizational development, including leadership and change management, in contrast to the two other areas of WWSE: human resources development and the development of teaching methods. The chosen research methods are: (1) a quantitative triangulation of three sets of data: data from a past evaluation taken in 2011, data from a second evaluation of the same school conducted in 2014, and a structured survey of teachers, headmasters and school board members carried out within this research; (2) interviews with teachers and headmasters, conducted as a second stage to corroborate the results of the quantitative first stage. Results: WWSE supports modern school development. While organizational development, leadership, and change management are proven to be important for modern school development, these areas are widely underestimated by teachers and headmasters, especially in comparison with human resource development and, to an even greater extent, the development of teaching methods. The research concludes that additional efforts in the area of organizational development are necessary to meet modern demands, and it also shows which areas are the most important ones.

Keywords: school as a social organization, school development, school leadership, WWSE, Wahrnehmungs- und wertorientierte Schulentwicklung

Procedia PDF Downloads 227
7262 System Identification of Building Structures with Continuous Modeling

Authors: Ruichong Zhang, Fadi Sawaged, Lotfi Gargab

Abstract:

This paper introduces a wave-based approach for system identification of high-rise building structures with a pair of seismic recordings, which can be used to evaluate structural integrity and detect damage in post-earthquake structural condition assessment. The approach is founded on wave features of generalized impulse and frequency response functions (GIRF and GFRF), i.e., the wave responses at one structural location to an impulsive motion at another reference location, in the time and frequency domains respectively. With a pair of seismic recordings at the two locations, GFRF is obtainable as the Fourier spectral ratio of the two recordings, and GIRF is then found by the inverse Fourier transformation of GFRF. With an appropriate continuous model for the structure, a closed-form solution of GFRF, and subsequently GIRF, can also be found in terms of wave transmission and reflection coefficients, which are related to the structural physical properties above the impulse location. Matching the two sets of GFRF and/or GIRF, from recordings and from the model, helps identify structural parameters such as wave velocity or shear modulus. For illustration, this study examines the ten-story Millikan Library in Pasadena, California, with recordings of the Yorba Linda earthquake of September 3, 2002. The building is modelled as piecewise continuous layers, with which GFRF is derived as a function of such building parameters as impedance, cross-sectional area, and damping. GIRF can then be found in closed form for some special cases and numerically in general. Not only does this study reveal how building parameters influence the wave features of GIRF and GFRF, it also presents system-identification results that are consistent with other vibration- and wave-based results. Finally, the paper discusses the effectiveness of the proposed model in system identification.
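The core GFRF/GIRF computation can be sketched as follows: the generalized frequency response function as a regularized Fourier spectral ratio of two recordings, and the generalized impulse response as its inverse transform. Synthetic signals stand in for the actual seismic recordings, and the regularization constant is an assumption of this sketch.

```python
import numpy as np

fs = 100.0                      # sampling rate, Hz
rng = np.random.default_rng(0)
x_ref = rng.normal(size=4096)   # recording at the reference location
# Stand-in "structural response": delayed, attenuated copy plus noise.
x_out = 0.6 * np.roll(x_ref, 25) + 0.05 * rng.normal(size=4096)

X_ref = np.fft.rfft(x_ref)
X_out = np.fft.rfft(x_out)

eps = 1e-3 * np.abs(X_ref).max()            # regularize near-zero denominators
gfrf = X_out * np.conj(X_ref) / (np.abs(X_ref) ** 2 + eps)
girf = np.fft.irfft(gfrf)                   # generalized impulse response

lag = np.argmax(np.abs(girf)) / fs          # wave travel time between locations
print(f"apparent travel time: {lag:.2f} s") # ~0.25 s for this synthetic pair
```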

Keywords: wave-based approach, seismic responses of buildings, wave propagation in structures, construction

Procedia PDF Downloads 234
7261 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested for various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, and the comparison showed that MNIST was classified more accurately than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, quantum deep learning classification models such as the quantum convolutional neural network (QCNN) have not previously been developed for colored images in a way that allows a systematic comparison with classical models; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to grayscale and rescaled to 28 × 28 pixels; 50,000 training and 10,000 test images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations; the measurement results were then processed on the classical computer. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy further. This study demonstrates that quantum machine and deep learning models can be superior to classical machine learning approaches in processing speed and accuracy when used to perform classification on colored classes.
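The quantum feature-extraction step can be sketched with PennyLane as follows: each 2 × 2 grayscale patch is angle-encoded with RY rotations, lightly entangled, and read out as Pauli-Z expectations, in the spirit of quanvolutional approaches. The circuit layout is illustrative, not the exact QCNN used in the study.

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def quanv_patch(patch):
    # Angle-encode the four pixel values (assumed scaled to [0, 1]).
    for i in range(4):
        qml.RY(np.pi * patch[i], wires=i)
    # A small entangling layer standing in for the trainable convolution.
    for i in range(3):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(4)]

image = np.random.default_rng(0).random((28, 28))  # stand-in grayscale image
features = np.array([
    quanv_patch(image[r:r + 2, c:c + 2].ravel())
    for r in range(0, 28, 2) for c in range(0, 28, 2)
]).reshape(14, 14, 4)  # 4 quantum feature maps fed to a classical classifier
print(features.shape)
```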

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 130
7260 Analysis of Pressure Drop in a Concentrated Solar Collector with Direct Steam Production

Authors: Sara Sallam, Mohamed Taqi, Naoual Belouaggadia

Abstract:

Solar thermal power plants using parabolic trough collectors (PTC) are currently a powerful technology for generating electricity. Most of these solar power plants use thermal oils as the heat transfer fluid. The oil is heated in the solar field and transfers the absorbed heat in an oil-water heat exchanger to produce the steam that drives the turbines of the power plant. Currently, efforts are underway to develop PTCs with direct steam generation (DSG). This process consists of circulating pressurized water in the receiver tube to generate steam directly in the solar loop. This makes it possible to reduce the investment and maintenance costs of the PTCs (the oil-water exchangers are removed) and to avoid the environmental risks associated with the use of thermal oils. The pressure drop in these systems is an important parameter for ensuring their proper operation. The determination of these losses is complex because of the presence of two phases, and most often they are described by models using empirical correlations. A comparison of these models with experimental data was performed. Our calculations focus on the evolution of the pressure of the liquid-vapor mixture along the receiver tube of a PTC-DSG, for inlet pressures and flow rates ranging from 3 to 10 MPa and from 0.4 to 0.6 kg/s respectively. The comparison of the numerical results with experimental data demonstrates the validity of some models depending on the pressures and flow rates at the inlet of the PTC-DSG receiver tube. The analysis of the effects of these two parameters on the evolution of the pressure along the receiver tube shows that increasing the inlet pressure and decreasing the flow rate lead to minimal pressure losses.
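One family of empirical models mentioned above is the homogeneous two-phase flow model; a minimal sketch with a Blasius friction factor is given below. The rounded saturation properties near 10 MPa and the operating point are illustrative, not the paper's data.

```python
import math

def homogeneous_dp_dz(m_dot, D, x, rho_l, rho_g, mu_l, mu_g):
    """Frictional pressure gradient (Pa/m) at steam quality x."""
    A = math.pi * D**2 / 4
    G = m_dot / A                          # mass flux, kg/(m^2 s)
    # Homogeneous mixture density and viscosity (McAdams-type mixing rule).
    rho_h = 1.0 / (x / rho_g + (1 - x) / rho_l)
    mu_h = 1.0 / (x / mu_g + (1 - x) / mu_l)
    Re = G * D / mu_h
    f = 0.079 * Re ** -0.25                # Blasius (Fanning) friction factor
    return 4 * f * G**2 / (2 * D * rho_h)  # dp/dz = 2 f G^2 / (D rho_h)

# Saturated water/steam around 10 MPa (rounded property values).
dp = homogeneous_dp_dz(m_dot=0.5, D=0.05, x=0.3,
                       rho_l=688.0, rho_g=55.5, mu_l=8.1e-5, mu_g=2.0e-5)
print(f"dp/dz ~ {dp:.0f} Pa/m")
```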

Keywords: direct steam generation, parabolic trough collectors, pressure drop, empirical models

Procedia PDF Downloads 143
7259 The Representation of Migrants in the UK and Saudi Arabia Press: A Cross-Linguistic Discourse Analysis Study

Authors: Eman Alatawi

Abstract:

The world is currently experiencing an upsurge in the number of international migrants, which has reached 281 million worldwide; in particular, both the UK and Saudi Arabia have recently been faced with an unprecedented number of immigrants. As a result, the media in these two countries is constantly posting news about the issue, and newspapers, in particular, play a vital role in shaping the public’s view of immigration issues. Because the media is an influential tool in society, it has the ability to construct a specific image of migrants and influence public opinion concerning immigrant groups. However, most of the existing studies have addressed the plight of migrants in the UK, Europe, and the US, and few have considered the Middle East; specifically, there is a pressing need for studies that focus on the press in Saudi Arabia, which is one of the main countries that is experiencing immigration at a tremendous rate. This paper employs critical discourse analysis (CDA) to examine the depiction of migrants in the British and Saudi Arabian media in order to explore the involvement of three linguistic features in the media’s representation of migrant-related topics. These linguistic features are the names, metaphors, and collocations that the press in the UK and in Saudi Arabia uses to describe migrants; the impact of these depictions is also considered. This comparative study could create a better understanding of how the Saudi Arabian press presents the topic of migrants and immigration, which will assist in extending the understanding of migration discourses beyond an Anglo-centric viewpoint. The main finding of this study was that both British and Saudi Arabian newspapers tended to represent migrants’ issues by painting migrants in a negative light through the use of negative references or names, metaphors, and collocations; furthermore, the media’s negative stereotyping of migrants was found to be consistent, which could have an influence on the public’s opinion of these minority groups. Such observations show that the issue is not as simple as individuals, press systems, or political affiliations.

Keywords: representation, migrants, the UK press, Saudi Arabia press, cross-linguistic, discourse analysis

Procedia PDF Downloads 81
7258 CAGE Questionnaire as a Screening Tool for Hazardous Drinking in an Acute Admissions Ward: Frequency of Application and Comparison with AUDIT-C Questionnaire

Authors: Ammar Ayad Issa Al-Rifaie, Zuhreya Muazu, Maysam Ali Abdulwahid, Dermot Gleeson

Abstract:

The aim of this audit was to examine the efficiency of alcohol history documentation and screening for hazardous drinkers at the Medical Admission Unit (MAU) of Northern General Hospital (NGH), Sheffield, in order to identify potential enhancements to clinical practice. Data were collected from medical clerking sheets, the ICE system, and directly from 82 newly admitted patients by three junior medical doctors using both the CAGE questionnaire and the AUDIT-C tool, in the period between January and March 2015. Alcohol consumption was documented in around two-thirds of the patient sample, and this was documented fairly accurately by health care professionals; however, some used subjective words such as 'social drinking' in the alcohol units section of the history. The CAGE questionnaire was applied to only four patients, and none of the patients had documented advice, education, or referral to an alcohol liaison team. The AUDIT-C tool identified 30.4% of the patients admitted to the NGH MAU as hazardous drinkers, while CAGE identified 10.9%. The amount of alcohol a patient consumes correlated positively with the AUDIT-C score (Pearson correlation 0.83). A re-audit is planned after integrating the AUDIT-C tool as labels in the notes and presenting a brief teaching session to junior doctors. In summary, alcohol misuse screening is not adequately undertaken, and no appropriate action is being offered to hazardous drinkers. The CAGE questionnaire is poorly applied to patients and, even when adequately used, has low sensitivity for detecting hazardous drinkers in comparison with the AUDIT-C tool. A re-audit of alcohol screening practice after introducing the AUDIT-C tool in clerking sheets (as labels) is required to compare the findings and conclude the audit cycle.

Keywords: alcohol screening, AUDIT-C, CAGE, hazardous drinking

Procedia PDF Downloads 411
7257 Risk Assessment and Haloacetic Acids Exposure in Drinking Water in Tunja, Colombia

Authors: Bibiana Matilde Bernal Gómez, Manuel Salvador Rodríguez Susa, Mildred Fernanda Lemus Perez

Abstract:

Haloacetic acids have been identified in chlorinated drinking water and are classified as disinfection byproducts, formed by the reaction of chlorine with natural organic matter and/or bromide ions in source waters. These byproducts can also be generated through a variety of chemical and pharmaceutical processes. The term 'Total Haloacetic Acids' (THAAs) describes the cumulative concentration of dichloroacetic acid, trichloroacetic acid, monochloroacetic acid, monobromoacetic acid, and dibromoacetic acid in water samples, which is usually measured to evaluate water quality. Chronic exposure to these acids in drinking water poses a cancer risk to humans. The detection of THAAs in 15 municipalities of Boyacá was accomplished for the first time in 2023. The aim is to describe the correlation between THAA levels and digestive cancer in Tunja, a Colombian city with comparatively high rates of digestive cancer, and to compare the risk across the 15 towns, taking into account factors such as water quality. A research project was conducted with the aim of comparing water sources based on the geographical features of each town, describing the disinfection process in the 15 municipalities, and exploring physical properties such as water temperature and pH. The project also involved a study of contact time based on habits documented through a survey, and a comparison of socioeconomic factors and lifestyle, in order to assess the personal risk of exposure. Data on THAA levels were obtained after characterizing the water quality in urban sectors over eight months of 2022, following the protocol described in the Stage 2 DBP Rule of the United States Environmental Protection Agency (USEPA, 2006), which takes into account the size of the population being supplied. A cancer risk assessment was conducted to evaluate the likelihood of an individual developing cancer due to exposure to THAAs. The assessment considered exposure routes such as oral ingestion, dermal absorption, and inhalation. The chronic daily intake (CDI) for these exposure routes was calculated using specific equations, and the lifetime cancer risk (LCR) was then determined by adding the cancer risks from the three exposure routes for each HAA. The risk assessment process involved four phases: exposure assessment, toxicity evaluation, data gathering and analysis, and risk definition and management. The results indicate a cumulatively higher risk of digestive cancer due to THAA exposure in drinking water.
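The ingestion arm of the assessment can be sketched with the standard USEPA-style equations, CDI = (C × IR × EF × ED)/(BW × AT) and LCR = CDI × SF; all parameter values below (intake rate, body weight, slope factor) are illustrative defaults rather than the study's measured data, and the total risk sums the LCRs over the three exposure routes.

```python
def chronic_daily_intake(c_mg_per_l, ir_l_per_day=2.0, ef_days_per_year=350,
                         ed_years=30, bw_kg=70.0, at_days=70 * 365):
    """CDI (mg/kg/day) = (C * IR * EF * ED) / (BW * AT)."""
    return (c_mg_per_l * ir_l_per_day * ef_days_per_year * ed_years) / (bw_kg * at_days)

def lifetime_cancer_risk(cdi, slope_factor):
    """LCR = CDI * SF (SF in risk per mg/kg/day)."""
    return cdi * slope_factor

# Hypothetical example: a haloacetic acid at 0.03 mg/L with an assumed SF of 0.05.
cdi = chronic_daily_intake(0.03)
print(f"CDI = {cdi:.2e} mg/kg/day, LCR = {lifetime_cancer_risk(cdi, 0.05):.1e}")
```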

Keywords: haloacetic acids, drinking water, water quality, cancer risk assessment

Procedia PDF Downloads 60
7256 Syntax and Words as Evolutionary Characters in Comparative Linguistics

Authors: Nancy Retzlaff, Sarah J. Berkemer, Trudie Strauss

Abstract:

In the last couple of decades, the digitization of data of every kind has probably been one of the major advances across all fields of study. It paves the way for computational analysis even in disciplines with no initial computational tradition. Linguistics, in particular, has a largely manual tradition; still, studies of the history of language families bear striking similarities to bioinformatic (phylogenetic) approaches. Word alignment is a fairly well-studied example of applying bioinformatics methods to historical linguistics. In this paper, we consider not only alignments of strings, i.e., words in this case, but also alignments of syntax trees of selected Indo-European languages. Based on initial, crude alignments, a sophisticated scoring model is trained on both letters and syntactic features. The aim is to gain a better understanding of which features in two languages are related, i.e., most likely to share the same root. Initially, all words in two languages are pre-aligned with a basic scoring model that primarily matches consonants and adjusts them before fitting in the vowels (a sketch of this pre-alignment step is given below). Mixture models are subsequently used to filter 'good' alignments depending on the alignment length and the number of inserted gaps. Using these selected word alignments, it is possible to perform tree alignments of the given syntax trees and consequently find sentences that correspond well to each other across languages. The syntax alignments are then filtered for meaningful scores: 'good' scores contain evolutionary information and are therefore used to train the sophisticated scoring model. Further iterations of alignment and training are performed until the scoring model saturates, i.e., barely changes anymore. An evaluation of the trained scoring model and of the evolutionary information it captures will be given, along with an assessment of sentence alignment compared to possible phrase structure. The method described here may have flaws because of the limited prior information; it may, however, offer a good starting point for studying languages where only little prior knowledge is available and a detailed, unbiased study is needed.
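
A minimal sketch of the consonant-first pairwise pre-alignment described above, assuming a simple Needleman-Wunsch global alignment; the scoring values are illustrative assumptions, not the paper's trained model.

```python
# Consonant-weighted global word alignment (Needleman-Wunsch), illustrating
# the basic pre-alignment step. Scores are illustrative assumptions.

VOWELS = set("aeiou")

def sub_score(a, b):
    if a == b:
        return 3 if a not in VOWELS else 1   # consonant matches weigh more
    if (a in VOWELS) == (b in VOWELS):
        return -1                            # mismatch within the same class
    return -2                                # consonant vs. vowel mismatch

def align(w1, w2, gap=-2):
    """Return the best global alignment score of two words."""
    n, m = len(w1), len(w2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]   # DP table of partial scores
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j - 1] + sub_score(w1[i - 1], w2[j - 1]),
                           dp[i - 1][j] + gap,
                           dp[i][j - 1] + gap)
    return dp[n][m]

# Cognate candidates score higher than unrelated pairs:
print(align("pater", "vater"))   # Latin vs. German 'father'
print(align("pater", "ruka"))    # unrelated pair
```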

Keywords: alignments, bioinformatics, comparative linguistics, historical linguistics, statistical methods

Procedia PDF Downloads 154
7255 Comparison of the Results of a Parkinson’s Holter Monitor with Patient Diaries, in Real Conditions of Use: A Sub-Analysis of the MoMoPa-EC Clinical Trial

Authors: Alejandro Rodríguez-Molinero, Carlos Pérez-López, Jorge Hernández-Vara, Àngels Bayes-Rusiñol, Juan Carlos Martínez-Castrillo, David A. Pérez-Martínez

Abstract:

Background: Monitoring motor symptoms in Parkinson's patients is often a complex and time-consuming task for clinicians, as Hauser diaries tend to be poorly completed by patients. Recently, a new automatic device (the STAT-ON® Parkinson's holter) has been developed that is capable of monitoring patients' motor fluctuations. The MoMoPa-EC clinical trial (NCT04176302) investigates which of the two methods produces better clinical results. In this sub-analysis, the concordance between the two methods is analyzed. Methods: The MoMoPa-EC clinical trial will include 164 patients with moderate-to-severe Parkinson's disease and at least two hours per day in the Off state. At recruitment, all patients completed a seven-day motor fluctuation diary (Hauser diary) at home while wearing the Parkinson's holter. This sub-analysis included 71 patients with complete data for the purposes of this comparison. The intraclass correlation coefficient (ICC) between the patient diary entries and the Parkinson's holter data was calculated for time On, time Off, and time with dyskinesias. Results: The ICC between the two methods was 0.57 (95% CI: 0.3-0.74) for daily time Off (%), 0.48 (95% CI: 0.14-0.68) for daily time On (%), and 0.37 (95% CI: -0.04-0.62) for daily time with dyskinesias (%). Conclusions: The two methods show moderate agreement with each other. The full results of the MoMoPa-EC project will be needed to determine which of them offers the greater clinical benefit. Acknowledgment: This work is supported by AbbVie S.L.U., the Instituto de Salud Carlos III [DTS17/00195], and the European Fund for Regional Development, 'A way to make Europe'.
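
For reference, a minimal sketch of how such an agreement statistic can be computed, assuming the ICC(2,1) form (two-way random effects, absolute agreement, single measurement); the input data below are synthetic, not trial data.

```python
# ICC(2,1) between two measurement methods, per Shrout & Fleiss.
# The diary/holter values below are simulated for illustration only.

import numpy as np

def icc_2_1(ratings):
    """ratings: (n_subjects, k_methods) array, e.g. diary vs. holter %Off."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # methods
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
diary = rng.uniform(10, 50, 71)               # simulated daily %Off from diaries
holter = diary + rng.normal(0, 8, 71)         # holter readings tracking the diary with noise
print(round(icc_2_1(np.column_stack([diary, holter])), 2))
```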

Keywords: Parkinson, sensor, motor fluctuations, dyskinesia

Procedia PDF Downloads 233
7254 Use of Smartphones in 6th and 7th Grade (Elementary Schools) in Istria: Pilot Study

Authors: Maja Ruzic-Baf, Vedrana Keteles, Andrea Debeljuh

Abstract:

Younger and younger children now use a smartphone, a device that has become a 'must have' and without which the life of children would be almost 'unthinkable'. Devices keep getting lighter while offering an ever-wider array of options and applications, as well as the unavoidable access to the Internet, without which a smartphone would be almost unusable. Taking photographs, listening to music, searching for information on the Internet, accessing social networks, and using chat and messaging services are only some of the numerous features offered by 'smart' devices, which have replaced the alarm clock, home phone, camera, tablet and other devices. Their use and possession have become part of the everyday image of young people. Apart from the positive aspects, the use of smartphones also has downsides. Free time that used to be spent in nature, playing, doing sports or in other activities enabling adequate psychophysiological growth and development is increasingly spent on the device, and greater smartphone use during classes, to check statuses on social networks, message friends or play online games, is another possible negative aspect. Considering that the age of the smartphone-using population is decreasing and that smartphones are no longer 'foreign' even to children of pre-school age (who use them at home, or in coffee shops and shopping centers while waiting for their parents, often playing video games inappropriate to their age), particular attention must be paid to a very sensitive group: teenagers, who almost never part from their 'pets'. This paper is divided into a theoretical and an empirical section. The theoretical section gives an overview of the pros and cons of smartphone usage, while the empirical section presents the results of a study conducted in three elementary schools on smartphone usage, specifically during classes and breaks, and on using smartphones to search for information on the Internet and to check status updates and 'likes' on the Facebook social network.

Keywords: education, smartphone, social networks, teenagers

Procedia PDF Downloads 454
7253 Disaster Response Training Simulator Based on Augmented Reality, Virtual Reality, and MPEG-DASH

Authors: Sunho Seo, Younghwan Shin, Jong-Hong Park, Sooeun Song, Junsung Kim, Jusik Yun, Yongkyun Kim, Jong-Moon Chung

Abstract:

In order to cope effectively with large and complex disasters, disaster response training is needed. Recently, disaster response training led by the ROK (Republic of Korea) government has been implemented through a four-year R&D project, which shares several functions with the HSEEP (Homeland Security Exercise and Evaluation Program) of the United States but also has several distinct features. Due to the unpredictability and diversity of disasters, existing training methods have many limitations in providing experience in the efficient use of disaster incident response and recovery resources. The challenge is always to be as efficient and effective as possible with the limited human and material resources available under the given time and environmental circumstances. To enable repeated training under diverse scenarios, a combined AR (Augmented Reality) and VR (Virtual Reality) simulator is under development. Unlike existing disaster response training, simulator-based training (which allows simultaneous multi-user training via remote login) is free from time and space constraints and can be repeated with different combinations of functions and disaster situations. Related systems exist, such as ADMS (Advanced Disaster Management Simulator) developed by ETC Simulation and HLS2 (Homeland Security Simulation System) developed by Elbit Systems. However, the ROK government needs a simulator custom-made for the country's environment and disaster types that also combines the latest information and communication technologies, including AR, VR, and MPEG-DASH (Moving Picture Experts Group - Dynamic Adaptive Streaming over HTTP). In this paper, a new disaster response training simulator is proposed to overcome the limitations of existing training systems and adapted to actual disaster situations in the ROK, and several of its technical features are described.
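
As a rough illustration of the MPEG-DASH component, the throughput-based bitrate selection a DASH client in such a simulator could use might look as follows; the representation bitrates and safety margin are illustrative assumptions, not details of the proposed system.

```python
# Minimal sketch of throughput-based adaptive bitrate logic for an MPEG-DASH
# client streaming training video. All numeric values are assumptions.

BITRATES_KBPS = [500, 1500, 3000, 6000]  # available DASH representations

def pick_representation(recent_throughputs_kbps, margin=0.8):
    """Choose the highest bitrate sustainable at ~80% of measured throughput."""
    if not recent_throughputs_kbps:
        return BITRATES_KBPS[0]            # no measurements yet: start conservatively
    est = sum(recent_throughputs_kbps) / len(recent_throughputs_kbps)
    budget = est * margin
    feasible = [b for b in BITRATES_KBPS if b <= budget]
    return max(feasible) if feasible else BITRATES_KBPS[0]

# Throughput samples (kbps) from the last few downloaded segments:
print(pick_representation([4200, 3900, 4500]))  # -> 3000
print(pick_representation([900]))               # -> 500
```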

Keywords: augmented reality, emergency response training simulator, MPEG-DASH, virtual reality

Procedia PDF Downloads 303
7252 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses

Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh

Abstract:

Real-time non-invasive Brain Computer Interfaces play a significant and growing role in restoring or maintaining quality of life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities, such as facial expressions, speech, text, gestures, and human physiological responses, are also discussed, with a focus on the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol is proposed for designing an EEG-based real-time Brain Computer Interface system for the analysis and classification of human emotions elicited by external audio/visual stimuli. The front-end hardware includes a cost-effective and portable Emotiv EEG neuroheadset unit, a personal computer, and a set of external stimulators. Primary signal analysis and processing of the EEG acquired in real time will be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This will be followed by the development of a self-defined MATLAB algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set will be used to classify emotional states using artificial intelligence tools such as artificial neural networks. The final system would provide an inexpensive, portable, and more intuitive real-time Brain Computer Interface for controlling prosthetic devices by translating different brain states into operative control signals.
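
A minimal sketch of the spectral feature extraction step described above, shown in Python for illustration (the abstract proposes a MATLAB/EEGLab workflow); the band definitions are standard, and the signal below is synthetic.

```python
# Band-power features from one EEG channel via Welch's method. The 128 Hz
# rate matches, e.g., an Emotiv EPOC headset; the signal here is synthetic.

import numpy as np
from scipy.signal import welch

FS = 128  # sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg_1d, fs=FS):
    """Return mean spectral power in each standard band for one channel."""
    freqs, psd = welch(eeg_1d, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10 s channel: a 10 Hz (alpha) oscillation plus noise
t = np.arange(0, 10, 1 / FS)
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(band_powers(sig))  # alpha power should dominate
```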

Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, Emotiv, emotions, interval features, spectral features, artificial neural network, control applications

Procedia PDF Downloads 318
7251 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulation is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies, stemming from diverse research hypotheses, have been proposed to safeguard DNNs against such attacks. Building upon prior work, our approach utilizes autoencoder models. Autoencoders are neural networks trained to learn representations of the training data and to reconstruct inputs from these representations, typically by minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them; consequently, when presented with significantly perturbed adversarial examples, it exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation: we considered various image sizes, constructing different models for 256x256 and 512x512 images. Moreover, the choice of computer vision model is crucial, as most adversarial attacks are designed with specific AI architectures in mind. To mitigate this, we propose a method that replaces image-specific dimensions with a structure independent of both image dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments on diverse datasets subjected to adversarial attacks crafted against models such as ResNet50 and ViT_L_16 from the torchvision library. The features extracted by the autoencoder were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
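
The core mechanism, flagging inputs whose reconstruction error exceeds a threshold calibrated on benign data, can be sketched as follows; the tiny architecture and the threshold value are illustrative assumptions, not the authors' model.

```python
# Reconstruction-error adversarial detector: an autoencoder trained only on
# benign images reconstructs adversarial inputs poorly, so high per-image MSE
# flags them. Architecture and threshold below are illustrative assumptions.

import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def is_adversarial(model, x, threshold):
    """Per-image MSE vs. a threshold picked, e.g., as a high percentile of
    the MSE distribution on held-out benign images."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=(1, 2, 3))
    return err > threshold

ae = TinyAE()  # assume this has been trained on benign images only
batch = torch.rand(4, 3, 256, 256)
print(is_adversarial(ae, batch, threshold=0.014))  # threshold from benign calibration
```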

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 114
7250 The Significance of Islamic Concept of Good Faith to Cure Flaws in Public International Law

Authors: M. A. H. Barry

Abstract:

The concepts of good faith (husn al-niyyah) and fair dealing (nadl) are fundamental guiding elements in all contracts and other agreements under Islamic law. The teachings of the Qur'an and of Prophet Muhammad (Peace Be Upon Him) firmly command people to act in good faith in all dealings, and several Quranic verses and sayings of the Prophet stress the significance of dealing honestly and fairly in all transactions. Under English law, good faith is not considered a fundamental requirement for the formation of a legal contract. The concept of good faith in private contracts is, however, recognized by the civil law system and in Article 7(1) of the Convention on Contracts for the International Sale of Goods (CISG, Vienna Convention, 1980). It took several centuries for the international trading community to recognize the significance of the concept of good faith for international sale of goods transactions; nevertheless, the recognition of good faith in civil law is confined to commercial contracts. Subsequent to the CISG, the concept has made inroads into private international law, and there are submissions in favour of applying it to public international law based on its tacit recognition in international conventions and by international tribunals. Under public international law, however, good faith is not recognized as a source of rights or obligations. This weakens the spirit of the concept, particularly in the determination of international disputes, and creates a fundamental flaw: in the absence of its application, breaches tainted by bad faith are tolerated. The objective of this research is to evaluate, examine and analyze the application of the concept of good faith in modern laws and to identify its limitations in comparison with the Islamic concept of good faith. This paper also identifies the problems and issues arising from the non-application of this concept in public international law. The research consists of three key components: (1) a preliminary inquiry, (2) subject analysis and discovery of research results, and (3) examination of the challenging problems, concluding with proposals. The preliminary inquiry and the subject analysis are both based on primary and secondary sources, and the research has both inductive and deductive features. The Islamic concept of good faith covers all situations and circumstances in which bad faith causes unfairness to the affected parties, especially the weaker ones. Under Islamic law, the concept of good faith is a source of rights and obligations, as Islam prohibits any person from committing wrongful or delinquent acts in any dealing, whether in private or public life. This rule applies not only to individuals but also to institutions, states, and international organizations. This paper explains how unfairness is caused by the non-recognition of the good faith concept as a source of rights or obligations under public international law, and provides legal and non-legal reasons why the Islamic formulation is important.

Keywords: good faith, the civil law system, the Islamic concept, public international law

Procedia PDF Downloads 149
7249 Creation of S-Box in Blowfish Using AES

Authors: C. Rekha, G. N. Krishnamurthy

Abstract:

This paper develops a different approach to the key scheduling algorithm that uses both the Blowfish and AES algorithms. The main drawback of the Blowfish algorithm is the time it takes to create the S-box entries. To overcome this, we replace the S-box creation process in Blowfish with key-dependent S-box creation derived from AES, without affecting the basic operation of Blowfish. The proposed method combines good features of Blowfish and AES, and the paper demonstrates the performance of Blowfish and the new algorithm by considering different aspects of security, namely encryption quality, key sensitivity, and the correlation of horizontally adjacent pixels in an encrypted image.
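
To illustrate the idea of deriving Blowfish's four 256-entry 32-bit S-boxes from AES output rather than from Blowfish's slow self-encryption loop, here is a minimal sketch; the exact derivation (AES-ECB over a counter) is an illustrative assumption, not the authors' construction.

```python
# Fill Blowfish-style S-boxes (4 tables of 256 32-bit words) with
# key-dependent AES output. Uses pycryptodome; the derivation is illustrative.

import struct
from Crypto.Cipher import AES  # pip install pycryptodome

def aes_keydependent_sboxes(key: bytes):
    """Derive 4 x 256 32-bit S-box entries from AES-ECB encryptions of a counter."""
    cipher = AES.new(key.ljust(16, b'\0')[:16], AES.MODE_ECB)  # pad/truncate to AES-128 key
    stream = b''
    counter = 0
    while len(stream) < 4 * 256 * 4:                 # 4096 bytes of key-dependent data
        stream += cipher.encrypt(counter.to_bytes(16, 'big'))
        counter += 1
    words = struct.unpack('>1024I', stream[:4096])   # 1024 big-endian 32-bit words
    return [list(words[i * 256:(i + 1) * 256]) for i in range(4)]

sboxes = aes_keydependent_sboxes(b'secret key')
print(len(sboxes), len(sboxes[0]), hex(sboxes[0][0]))  # 4 tables of 256 entries
```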

Keywords: AES, blowfish, correlation coefficient, encryption quality, key sensitivity, s-box

Procedia PDF Downloads 226