Search results for: EEG derived features
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6261

4761 A DFT-Based QSARs Study of Kovats Retention Indices of Adamantane Derivatives

Authors: Z. Bayat

Abstract:

A quantitative structure–property relationship (QSPR) study was performed to develop models that relate the structures of 65 adamantane derivatives to their Kovats retention indices (RI). Molecular descriptors were derived solely from the 3D structures of the molecular compounds. The usefulness of quantum chemical descriptors, calculated at the DFT level using the 6-311+G** basis set, for the QSAR study of adamantane derivatives was examined. The use of descriptors calculated only from molecular structure eliminates the need for experimental determination of properties for use in the correlation and allows the RI to be estimated for molecules not yet synthesized. The prediction results are in good agreement with the experimental values. A multi-parametric equation containing a maximum of four descriptors at the B3LYP/6-31+G** level, with good statistical qualities (R2train=0.913, Ftrain=97.67, R2test=0.770, Ftest=3.21, Q2LOO=0.895, R2adj=0.904, Q2LGO=0.844), was obtained by multiple linear regression using the stepwise method.

Keywords: DFT, adamantane, QSAR, Kovats retention index

Procedia PDF Downloads 363
4760 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for the detection of facial expressions and emotions by automatically extracting features. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We extend this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model toward over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static input tensor shape in the SoftMax layer, with a specified soft margin. The margin acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting the same class labels and separating different class labels in the normalized log domain. We penalize predictions with high divergence from the ground-truth labels: we shorten correct feature vectors and enlarge false prediction tensors, i.e., we assign more weight to classes that lie close to each other (namely, "hard labels to learn").
In doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for a non-convex problem. Our optimizer works by an alternating gradient-update procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight decay method that drastically reduces the learning rate near optima to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013; 90.73% on RAF-DB, a 16% improvement over the first rank after 10 years; and 100% k-fold average accuracy on the CK+ dataset, providing top performance relative to networks that require much larger training datasets.
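The authors' dynamic soft-margin formulation is not fully specified in the abstract; as an illustrative sketch of the underlying idea only, a fixed additive margin m can be subtracted from the target-class logit before the SoftMax, forcing the network to beat the other classes by at least that margin (the margin value and the logits below are hypothetical):

```python
import numpy as np

def soft_margin_softmax_loss(logits, target, margin=0.35):
    """Cross-entropy where the target logit is penalized by a margin,
    so the prediction must beat the other classes by at least `margin`."""
    z = logits.astype(float).copy()
    z[target] -= margin                      # make the target harder to win
    z -= z.max()                             # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

logits = np.array([2.0, 1.0, 0.1])           # hypothetical class scores
loss_plain = soft_margin_softmax_loss(logits, target=0, margin=0.0)
loss_margin = soft_margin_softmax_loss(logits, target=0, margin=0.35)
```

With a positive margin the loss on a correctly classified sample is strictly larger, which keeps gradients alive and delays the premature saturation the abstract attributes to the conventional SoftMax.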

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 72
4759 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach

Authors: Jiaxin Chen

Abstract:

Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language, and simplification is one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradicting findings have also been presented. To address this issue, this study intends to adopt Shannon’s entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and pos-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and their evenness of distribution, which are unavailable when using traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. In terms of the data for the present study, one established (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. 
More specifically, word-form entropy and pos-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy compared with non-constrained written English. The similarities and divergences between the two constrained varieties may indicate the constraints shared by, and peculiar to, each variety.
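As an illustrative sketch (not the study's actual pipeline), word-form entropy can be computed from token frequencies as Shannon entropy; the token lists below are hypothetical:

```python
from collections import Counter
from math import log2

def word_form_entropy(tokens):
    """Shannon entropy (bits) over the distribution of word forms.

    Captures both how many distinct choices occur and how evenly they
    are distributed -- higher entropy = more diversified lexical use.
    """
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A more repetitive (simplified) text yields lower entropy:
varied = "the cat sat on a mat near the old door".split()
repetitive = "the cat sat on the mat the cat sat".split()
```

Pos-form entropy follows the same formula applied to part-of-speech tags instead of word forms, capturing the diversity of syntactic choices.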

Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification

Procedia PDF Downloads 86
4758 Photocatalytic Eco-Active Ceramic Slabs to Abate Air Pollution under LED Light

Authors: Claudia L. Bianchi, Giuseppina Cerrato, Federico Galli, Federica Minozzi, Valentino Capucci

Abstract:

In the early days of industrial production, porcelain gres tiles were considered a merely technical material, aesthetically not very attractive. Today, thanks to new industrial production methods, both the properties and the beauty of these materials fully meet market demands. In particular, the possibility of preparing slabs of large sizes is the new frontier of building materials. Besides these noteworthy architectural features, new surface properties have been introduced in the latest generation of these materials. In particular, deposition of TiO₂ transforms the traditional ceramic into a photocatalytic eco-active material able to reduce polluting molecules present in air and water, to eliminate bacteria, and to reduce surface dirt thanks to its self-cleaning property. The problem with photocatalytic materials is that a UV light source is necessary to activate the oxidation processes on the surface of the material; these processes are inexorably switched off when the material is illuminated by LED lights and, even more so, in darkness. First, a thorough study was needed to modify the existing plants to deposit the photocatalyst very evenly, which was achieved thanks to the advent of digital printing and the development of a custom-made ink that stabilizes powdered TiO₂ in its formulation. In addition, the commercial TiO₂ used for the traditional photocatalytic coating was doped with metals in order to activate it in the visible region as well, and thus in the presence of sunlight or LED light. Thanks to this active coating, the ceramic slabs are able to purify air, eliminating odors and VOCs, and can be cleaned with very mild detergents owing to the self-cleaning properties conferred by the TiO₂ present at the ceramic surface.
Moreover, the presence of dopant metals (patent WO2016157155) also allows the material to act as an antibacterial in the dark, eliminating one of the negative features of photocatalytic building materials that has so far limited their use on a large scale. This matters because we are constantly in contact with bacteria, some of which are dangerous to health. The active tiles are 99.99% effective against all tested bacteria, from the most common, such as Escherichia coli, to the most dangerous, such as methicillin-resistant Staphylococcus aureus (MRSA). DIGITALIFE project LIFE13 ENV/IT/000140 – award for best project of October 2017.

Keywords: Ag-doped microsized TiO₂, eco-active ceramic, photocatalysis, digital coating

Procedia PDF Downloads 217
4757 Recognition of Tifinagh Characters with Missing Parts Using Neural Network

Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui

Abstract:

In this paper, we present an algorithm for reconstructing Tifinagh characters from incomplete 2D scans. The algorithm is based on the correlation between a lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction, and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.

Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN

Procedia PDF Downloads 325
4756 Assessment of an ICA-Based Method for Detecting the Effect of Attention in the Auditory Late Response

Authors: Siavash Mirahmadizoghi, Steven Bell, David Simpson

Abstract:

In this work, a new independent component analysis (ICA) based method for noise reduction in evoked potentials is evaluated on auditory late responses (ALR) captured with a 63-channel electroencephalogram (EEG) from 10 normal-hearing subjects. The performance of the new method is compared with a single-channel alternative in terms of signal-to-noise ratio (SNR), the number of channels with an SNR above an empirically derived statistical critical value, and an estimate of the effect of attention on the major components of the ALR waveform. The results show that the multichannel signal processing method can significantly enhance the quality of the ALR signal and also detect the effect of attention on the ALR better than the single-channel alternative.

Keywords: auditory late response (ALR), attention, EEG, independent component analysis (ICA), multichannel signal processing

Procedia PDF Downloads 501
4755 Optimum Design of Grillage Systems Using Firefly Algorithm Optimization Method

Authors: F. Erdal, E. Dogan, F. E. Uz

Abstract:

In this study, a firefly-optimization-based optimum design algorithm is presented for grillage systems. The algorithm is named after fireflies, whose movement behavior is taken as a model in its development. The facts that fireflies are unisex and are attracted to one another constitute the basis of the algorithm. The design algorithm considers the displacement and strength constraints implemented from LRFD-AISC (Load and Resistance Factor Design – American Institute of Steel Construction). It selects the appropriate W (wide flange) sections for the transverse and longitudinal beams of the grillage system among the 272 discrete W-section designations given in LRFD-AISC, so that the design limitations described in LRFD are satisfied and the weight of the system is minimized. A number of design examples are presented to demonstrate the efficiency of the algorithm.
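The core firefly mechanics described above (unisex agents, mutual attraction toward brighter, i.e. lower-cost, individuals) can be sketched generically; this is not the paper's grillage design code, and the 2D sphere objective below is a hypothetical stand-in for the structural weight function:

```python
import math
import random

def firefly_minimize(f, dim, bounds, n_fireflies=25, n_gens=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Generic firefly algorithm: dimmer fireflies move toward brighter
    (lower-cost) ones with attractiveness decaying as exp(-gamma * r^2)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fireflies)]
    F = [f(x) for x in X]
    for _ in range(n_gens):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if F[j] < F[i]:  # firefly j is brighter: i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    F[i] = f(X[i])
        alpha *= 0.97  # shrink the random walk as the swarm converges
    best = min(range(n_fireflies), key=lambda i: F[i])
    return X[best], F[best]

# Hypothetical stand-in for the grillage weight objective: a 2D sphere.
x_best, f_best = firefly_minimize(lambda x: sum(v * v for v in x), 2, (-5.0, 5.0))
```

In the paper's setting the continuous positions would instead index the 272 discrete W-sections, with constraint violations penalized in the objective.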

Keywords: firefly algorithm, steel grillage systems, optimum design, stochastic search techniques

Procedia PDF Downloads 422
4754 Filter for the Measurement of Supraharmonics in Distribution Networks

Authors: Sivaraman Karthikeyan

Abstract:

Due to rapidly developing power electronics devices and technologies, such as power line communication and self-commutating converters, voltage and current distortion, as well as interference, have increased in the frequency range of 2 kHz to 150 kHz; there is an urgent need for electromagnetic compatibility (EMC) standards regulating this frequency range. Measuring or testing compliance with emission and immunity limits necessitates precise, repeatable measurement methods. Appropriate filters that minimize the fundamental component and its harmonics below 2 kHz in the measured signal would improve the measurement accuracy in this frequency range, leading to better analysis. This paper discusses the filter suggestions in the current measurement standard and proposes an infinite impulse response (IIR) filter design that is optimized for a low number of poles, strong damping of the fundamental, and high accuracy above 2 kHz. The new filter's transfer function is delivered as a result. An analog implementation is derived from the overall design.
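The paper's optimized multi-pole design is not given in the abstract; as a minimal sketch of the idea, a single second-order high-pass IIR section (a standard RBJ-cookbook biquad, with a hypothetical 500 kHz sampling rate, both assumptions) already damps the 50 Hz fundamental strongly while passing the supraharmonic band above 2 kHz:

```python
import cmath
import math

def highpass_biquad(f_cut, fs, q=0.7071):
    """2nd-order high-pass coefficients (RBJ audio-EQ-cookbook form)."""
    w0 = 2 * math.pi * f_cut / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    b = [(1 + cw) / 2, -(1 + cw), (1 + cw) / 2]
    a = [1 + alpha, -2 * cw, 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]  # a[0] -> 1

def gain(b, a, f, fs):
    """Magnitude response |H(e^{jw})| at frequency f."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

fs = 500_000.0          # hypothetical sampling rate
b, a = highpass_biquad(2_000.0, fs)
g_fund = gain(b, a, 50.0, fs)       # 50 Hz fundamental: strongly damped
g_supra = gain(b, a, 20_000.0, fs)  # well inside the supraharmonic band
```

A real standard-compliant filter needs more poles than this two-pole sketch to also suppress the harmonics just below 2 kHz, which is exactly the trade-off (pole count versus fundamental damping) the paper optimizes.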

Keywords: supraharmonics, 2 kHz, 150 kHz, filter, analog filter

Procedia PDF Downloads 137
4753 The Complete Modal Derivatives

Authors: Sebastian Andersen, Peter N. Poulsen

Abstract:

The use of basis projection in structural dynamic analysis is widespread. The purpose of the method is to improve computational efficiency, while maintaining high solution accuracy, by projecting the governing equations onto a small set of carefully selected basis vectors. The present work considers basis projection in kinematically nonlinear systems with a focus on two widely used types of basis vectors: the system mode shapes and their modal derivatives. The latter basis vectors are given special attention, since only approximate modal derivatives have been used until now. In the present work, the complete modal derivatives, derived from perturbation methods, are presented and compared with the previously applied approximate modal derivatives. The correctness of the complete modal derivatives is illustrated with an example of a harmonically loaded kinematically nonlinear structure modeled by beam elements.

Keywords: basis projection, finite element method, kinematic nonlinearities, modal derivatives

Procedia PDF Downloads 229
4752 About the Number of Fundamental Physical Interactions

Authors: Andrey Angorsky

Abstract:

The article studies the question of the possible number of fundamental physical interactions. The theory of similarity applied to a dimensionless quantity, the damping ratio, serves as the instrument of analysis. A structure with the features of the Higgs field emerges from a non-commutative expression for this ratio. An experimentally testable supposition about the nature of dark energy is put forward.

Keywords: damping ratio, dark energy, dimensionless quantity, fundamental physical interactions, Higgs field, non-commutative expression

Procedia PDF Downloads 132
4751 An Adaptive Cooperative Scheme for Reliability of Transmission Using STBC and CDD in Wireless Communications

Authors: Hyun-Jun Shin, Jae-Jeong Kim, Hyoung-Kyu Song

Abstract:

In broadcasting and cellular systems, a cooperative scheme is proposed for improving the bit error rate performance. To date, the coverage of broadcasting systems coexists with the coverage of cellular systems, so each user within cellular coverage is frequently also within broadcasting coverage. The proposed cooperative scheme is derived from these shared areas: the users receive signals from both the broadcasting base station and the cellular base station. The proposed scheme selects the cellular base station with the worse channel so as to achieve better bit error rate performance in cooperation. The performance of the proposed scheme is evaluated in a fading channel.

Keywords: cooperative communication, diversity, STBC, CDD, channel condition, broadcasting system, cellular system

Procedia PDF Downloads 503
4750 Exploiting Kinetic and Kinematic Data to Plot Cyclograms for Managing the Rehabilitation Process of BKAs by Applying Neural Networks

Authors: L. Parisi

Abstract:

Kinematic data correlate vector quantities in space with scalar parameters in time to assess the degree of symmetry between the intact limb and the amputated limb with respect to a normal model derived from the gait of control group participants. Furthermore, these data allow a doctor to preliminarily evaluate the usefulness of a certain rehabilitation therapy. Kinetic curves allow the analysis of ground reaction forces (GRFs) to assess the appropriateness of human motion. Electromyography (EMG) allows the fundamental lower-limb force contributions to be analyzed in order to quantify the level of gait asymmetry. However, the use of this technological tool is expensive and requires the patient's hospitalization. This research work suggests overcoming the above limitations by applying artificial neural networks.

Keywords: kinetics, kinematics, cyclograms, neural networks, transtibial amputation

Procedia PDF Downloads 438
4749 Trusted Neural Network: Reversibility in Neural Networks for Network Integrity Verification

Authors: Malgorzata Schwab, Ashis Kumer Biswas

Abstract:

In this concept paper, we explore the topic of reversibility in neural networks leveraged for network integrity verification and coin the term ''Trusted Neural Network'' (TNN), paired with an API abstraction around it, to formalize the idea. This newly proposed, high-level, generalizable TNN model builds upon the invertible neural network architecture, trained simultaneously in both the forward and reverse directions. This allows the original system inputs to be compared with the ones reconstructed from the outputs in the reversed flow, in order to assess the integrity of the end-to-end inference flow. The outcome of that assessment is captured as an integrity score. Concrete implementations reflecting the needs of specific problem domains can be derived from this general approach, as demonstrated in the experiments. The model aspires to become a useful practice in drafting high-level system architectures that incorporate AI capabilities.
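A minimal sketch of the integrity-score idea (not the paper's TNN implementation): an exactly invertible linear map stands in for a trained invertible network, the inputs reconstructed via the reverse flow are compared against the originals, and a tampered output inflates the score. All names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained invertible network: an orthogonal map,
# whose inverse is simply its transpose.
W, _ = np.linalg.qr(rng.normal(size=(8, 8)))

def forward(x):
    return W @ x

def inverse(y):
    return W.T @ y

def integrity_score(x, y):
    """Reconstruction error between the original input and the input
    recovered from the output via the reverse flow (0 = intact)."""
    return float(np.linalg.norm(x - inverse(y)))

x = rng.normal(size=8)
y = forward(x)
score_intact = integrity_score(x, y)           # ~0: flow is intact
score_tampered = integrity_score(x, y + 0.5)   # output perturbed en route
```

In a real TNN the forward/inverse pair is a trained invertible network, so the "intact" score is small but nonzero and a threshold on it defines acceptance.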

Keywords: trusted, neural, invertible, API

Procedia PDF Downloads 140
4748 Google Translate: AI Application

Authors: Shaima Almalhan, Lubna Shukri, Miriam Talal, Safaa Teskieh

Abstract:

Since artificial intelligence is a rapidly evolving topic that has had a significant impact on technical growth and innovation, this paper examines people's awareness, use of, and engagement with the Google Translate application. Quantitative and qualitative research was conducted to see how familiar users are with the app and its features. The findings revealed that users have a high level of confidence in the application, benefit considerably from this sort of innovation, and find that it makes communication far more convenient.

Keywords: artificial intelligence, google translate, speech recognition, language translation, camera translation, speech to text, text to speech

Procedia PDF Downloads 147
4747 Design of Broadband Power Divider for 3G and 4G Applications

Authors: A. M. El-Akhdar, A. M. El-Tager, H. M. El-Hennawy

Abstract:

This paper presents a broadband power divider with an equal power division ratio. Two sections of transmission line transformers based on coupled microstrip lines are applied to obtain broadband performance. In addition, a design methodology is proposed for the novel structure. A prototype is designed and simulated to operate in the band from 2.1 to 3.8 GHz to fulfill the requirements of 3G and 4G applications. The proposed structure features reduced size and fewer resistors than other conventional techniques. Simulation verifies the proposed idea and design methodology.

Keywords: power dividers, coupled lines, microstrip, 4G applications

Procedia PDF Downloads 469
4746 Efficiency Improvement of REV-Method for Calibration of Phased Array Antennas

Authors: Daniel Hristov

Abstract:

The paper describes the principle of operation, simulation, and physical validation of a method for the simultaneous acquisition of the gain and phase states of multiple antenna elements and the corresponding feed lines across a phased array antenna (PAA). The derived values of gain and phase are used for PAA calibration. The method utilizes the Rotating-Element Electric-Field Vector (REV) principle, currently used for gain and phase state estimation of a single antenna element across an active antenna aperture. A significant reduction in procedure execution time is achieved by simultaneously setting different phase delays on multiple phase shifters, followed by a single power measurement. The initial gain and phase states are calculated using spectral and correlation analysis of the measured power series.

Keywords: antenna, antenna arrays, calibration, phase measurement, power measurement

Procedia PDF Downloads 131
4745 A Semantic and Concise Structure to Represent Human Actions

Authors: Tobias Strübing, Fatemeh Ziaeetabar

Abstract:

Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need a semantic framework. For this purpose, the Semantic Event Chain (SEC) method has already been presented, which considers the touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information on static (e.g., top, bottom) and dynamic spatial relations (e.g., moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a large matrix with thirty rows and a massive set of spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used in the category of manipulation actions, which involve at most two hands. Here, we would like to extend this approach to a whole-body action descriptor and create a conjoint activity representation structure. For this purpose, we perform a statistical analysis to modify the current eSEC by summarizing it while preserving its features, and introduce a new version called Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve these, we computed the importance of each matrix row in a statistical way, to see whether a particular row can be removed while all manipulations remain distinguishable from each other. On the other hand, we examined which semantic spatial relations can be merged without compromising the distinctness of the predefined manipulation actions.
By performing the above analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a considerable impact on the recognition and prediction of complex actions, as well as on interactions between humans and robots. It also creates a comprehensive platform for integration with body-limb descriptors and dramatically increases system performance, especially in complex real-time applications such as human-robot interaction prediction.

Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis

Procedia PDF Downloads 121
4744 Formulation of Corrector Methods from 3-Step Hybrid Adams Type Methods for the Solution of First Order Ordinary Differential Equations

Authors: Y. A. Yahaya, Ahmad Tijjani Asabe

Abstract:

This paper focuses on the formulation of a 3-step hybrid Adams-type method for the solution of first-order ordinary differential equations (ODEs). The methods were derived on both grid and off-grid points using multistep collocation schemes and evaluated at some points to produce a block Adams-type method and an Adams-Moulton method, respectively. The method with the highest order was selected to serve as the corrector. The methods were shown to be convergent and efficient. Numerical experiments were carried out and reveal that the hybrid Adams-type methods perform better than the conventional Adams-Moulton method.
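Not the paper's hybrid scheme, but a sketch of the classical predictor-corrector pattern it builds on: a 3-step Adams-Bashforth predictor paired with an Adams-Moulton corrector, applied here to the test problem y' = -y (startup values are taken from the exact solution for brevity, an assumption; in practice a Runge-Kutta starter would be used).

```python
import math

def f(t, y):          # test problem y' = -y, exact solution exp(-t)
    return -y

def adams_pc(t0, y_exact, h, n_steps):
    """3-step Adams-Bashforth predictor + Adams-Moulton corrector."""
    t = [t0 + i * h for i in range(n_steps + 1)]
    # Startup: first three values taken from the exact solution.
    y = [y_exact(t[i]) for i in range(3)]
    for n in range(2, n_steps):
        fn, fn1, fn2 = f(t[n], y[n]), f(t[n-1], y[n-1]), f(t[n-2], y[n-2])
        # Predictor (explicit Adams-Bashforth, order 3)
        yp = y[n] + h / 12 * (23 * fn - 16 * fn1 + 5 * fn2)
        # Corrector (implicit Adams-Moulton, order 4), evaluated at yp
        yc = y[n] + h / 24 * (9 * f(t[n + 1], yp) + 19 * fn - 5 * fn1 + fn2)
        y.append(yc)
    return t, y

t, y = adams_pc(0.0, lambda t: math.exp(-t), h=0.01, n_steps=100)
```

The hybrid methods in the paper add off-grid collocation points to this pattern, raising the attainable order of the corrector.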

Keywords: adams-moulton type (AMT), corrector method, off-grid, block method, convergence analysis

Procedia PDF Downloads 619
4743 Artificial Intelligence and Development: The Missing Link

Authors: Driss Kettani

Abstract:

ICT4D actors are naturally tempted to include AI in the range of enabling technologies and tools that could support and boost the development process, and to refer to this as AI4D. But doing so assumes that AI complies with the very specific features of the ICT4D context, including, among others, affordability, relevance, openness, and ownership. Clearly, none of these is fulfilled, and the enthusiastic posture that AI4D is a natural part of ICT4D is not grounded and, to a certain extent, does not serve the purpose of technology for development at all. In the context of development, it is important to emphasize and prioritize ICT4D in national digital transformation strategies, instead of borrowing "trendy" waves of the IT industry that are motivated by business considerations, with no specific care or consideration for development.

Keywords: AI, ICT4D, technology for development, position paper

Procedia PDF Downloads 67
4742 Use of Low-Cost Hydrated Hydrogen Sulphate-Based Protic Ionic Liquids for Extraction of Cellulose-Rich Materials from Common Wheat (Triticum Aestivum) Straw

Authors: Chris Miskelly, Eoin Cunningham, Beatrice Smyth, John D. Holbrey, Gosia Swadzba-Kwasny, Emily L. Byrne, Yoan Delavoux, Mantian Li

Abstract:

Recently, the use of ionic liquids (ILs) for the preparation of lignocellulose-derived cellulosic materials as alternatives to petrochemical feedstocks has been the focus of considerable research interest. While the technical viability of IL-based lignocellulose treatment methodologies is well established, the high cost of reagents inhibits commercial feasibility. This work aimed to assess the techno-economic viability of preparing cellulose-rich materials (CRMs) using protic ionic liquids (PILs) synthesized from low-cost alkylamines and sulphuric acid. For this purpose, the tertiary alkylamines triethylamine and dimethylbutylamine were selected. The bulk-scale production cost of the synthesized PILs, triethylammonium hydrogen sulphate and dimethylbutylammonium hydrogen sulphate, was reported as $0.78 kg⁻¹ to $1.24 kg⁻¹. CRMs were prepared by treating common wheat (Triticum aestivum) straw with these PILs. By controlling the treatment parameters, CRMs with a cellulose content of ≥ 80 wt% were prepared. This was achieved using a T. aestivum straw to PIL loading ratio of 1:15 w/w, a treatment duration of 180 minutes, and ethanol as a cellulose antisolvent. Infrared spectral data and the decreased onset degradation temperature of the CRMs (ΔTONSET ~ 70 °C) suggested the formation of cellulose sulphate esters during treatment. Chemical derivatisation can aid the dispersion of the prepared CRMs in non-polar polymer/composite matrices, but acts as a barrier to thermal processing at temperatures above 150 °C. It was also shown that treatment increased the crystallinity of the CRMs (ΔCrI ~ 40%) without altering the native crystalline structure or crystallite size (~2.6 nm) of cellulose; peaks associated with the cellulose I crystalline planes (110), (200), and (004) were observed at Bragg angles of 16.0°, 22.5°, and 35.0°, respectively.
This highlighted the inability of the assessed PILs to dissolve crystalline cellulose, which was attributed to the high acidity (pKa ~ -1.92 to -6.42) of the sulphuric-acid-derived anions. Electron micrographs revealed that the stratified multilayer tissue structure of untreated T. aestivum straw was significantly modified during treatment: the straw particles were disassembled, and the prepared CRMs adopted a golden-brown, film-like appearance. This work demonstrated the degradation of the non-cellulosic fractions of lignocellulose without dissolution of the cellulose. It is the first to report the derivatisation of cellulose during treatment with protic hydrogen sulphate ionic liquids, and the potential implications of this for biopolymer feedstock preparation.

Keywords: cellulose, extraction, protic ionic liquids, esterification, thermal stability, waste valorisation, biopolymer feedstock

Procedia PDF Downloads 21
4741 Effectiveness of Software Quality Assurance in Offshore Development Enterprises in Sri Lanka

Authors: Malinda Gayan Sirisena

Abstract:

The aim of this research is to evaluate the effectiveness of the software quality assurance approaches of Sri Lankan offshore software development organizations and to propose a framework that could be used across all such organizations. An empirical study was conducted using a framework derived from popular software quality evaluation models. The research instrument employed was a questionnaire survey among thirty-seven Sri Lankan registered offshore software development organizations. The findings demonstrate a positive view of the effectiveness of software quality assurance, with stability, installability, correctness, testability, and changeability as the stronger predictors. The present study's recommendations indicate a need for greater emphasis on software quality assurance in Sri Lankan offshore software development organizations.

Keywords: software quality assurance (SQA), offshore software development, quality assurance evaluation models, effectiveness of quality assurance

Procedia PDF Downloads 414
4740 Optimal Maintenance and Improvement Policies in Water Distribution System: Markov Decision Process Approach

Authors: Jong Woo Kim, Go Bong Choi, Sang Hwan Son, Dae Shik Kim, Jung Chul Suh, Jong Min Lee

Abstract:

A Markov decision process (MDP) based methodology is implemented in order to establish the optimal maintenance schedule that minimizes cost. The formulation of the MDP problem is presented using information about the current state of the pipe, the improvement cost, the failure cost, and a pipe deterioration model. The objective function and the detailed dynamic programming (DP) algorithm are modified due to the difficulty of implementing conventional DP approaches. The optimal schedule derived from the suggested model is compared with several policies via Monte Carlo simulation. The validity of the solution and the improvement in computational time are demonstrated.
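A minimal sketch of the MDP formulation (the states, costs, and transition probabilities below are hypothetical, not the paper's calibrated deterioration model): value iteration over pipe condition states, where each period chooses between doing nothing and replacing the pipe.

```python
import numpy as np

# Hypothetical 4-state deterioration model: 0 = new ... 3 = failed.
N_STATES, KEEP, REPLACE = 4, 0, 1
P_DETERIORATE = 0.5                            # chance of aging one state
COST_KEEP = np.array([0.0, 1.0, 3.0, 100.0])   # state 3 incurs failure cost
COST_REPLACE = 10.0                            # replacement resets to state 0
GAMMA = 0.9                                    # discount factor

def value_iteration(n_iter=500):
    """Minimize expected discounted cost; returns values and policy."""
    V = np.zeros(N_STATES)
    for _ in range(n_iter):
        Q = np.empty((N_STATES, 2))
        for s in range(N_STATES):
            nxt = min(s + 1, N_STATES - 1)
            # Keep: pay the state-dependent cost, possibly deteriorate.
            Q[s, KEEP] = COST_KEEP[s] + GAMMA * (
                P_DETERIORATE * V[nxt] + (1 - P_DETERIORATE) * V[s])
            # Replace: pay a fixed cost and return to the new state.
            Q[s, REPLACE] = COST_REPLACE + GAMMA * V[0]
        V = Q.min(axis=1)
    return V, Q.argmin(axis=1)

V, policy = value_iteration()
```

With these costs the converged policy keeps a new pipe and replaces a failed one, which is the kind of threshold schedule the paper benchmarks against periodic-replacement policies via Monte Carlo simulation.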

Keywords: Markov decision processes, dynamic programming, Monte Carlo simulation, periodic replacement, Weibull distribution

Procedia PDF Downloads 416
4739 Plasmablastic Lymphoma a New Entity in Patients with HIV Infections

Authors: Rojith K. Balakrishnan

Abstract:

Plasmablastic lymphoma (PBL) is an uncommon, recently described B-cell derived lymphoma that is most commonly seen in patients with Human Immunodeficiency Virus (HIV) infection. Here we report a case of PBL in a 35-year-old man with HIV who presented with multiple subcutaneous swellings all over the body and oral mucosal lesions. The biopsy report was suggestive of diffuse large B-cell lymphoma. Immunohistochemistry showed lymphoma cells positive for MUM1, CD138, and VS38, with a proliferation index (MIB) of 95%. The final report was consistent with a diagnosis of plasmablastic lymphoma. The lesions completely regressed after treatment with systemic chemotherapy. To date, only a few cases of plasmablastic lymphoma have been reported from India. The increased frequency of this lymphoma in HIV patients, the rarity of the tumour, and its rapid response to chemotherapy make this case a unique one. Knowledge of this new entity is therefore important for clinicians who treat HIV patients.

Keywords: human immunodeficiency virus (HIV), oral cavity lesion, plasmablastic lymphoma, subcutaneous swelling

Procedia PDF Downloads 264
4738 OmniDrive Model of a Holonomic Mobile Robot

Authors: Hussein Altartouri

Abstract:

In this paper, the kinematic and kinetic models of an omnidirectional holonomic mobile robot are presented. Together, the kinematic and kinetic models form the OmniDrive model. A mathematical model is derived for a robot equipped with three omnidirectional wheels and developed into a state-space representation, taking both the kinematics and kinetics of the robot into account. Relative analysis of velocities and displacements is used for the kinematics of the robot, and Lagrange's approach is used to derive the equation of motion. Only the drive train and the mechanical assembly of the Festo Robotino® are considered in this model. The model is developed mainly for motion control, but it can also be used for simulation in virtual environments other than Robotino® View, and in mechatronics research with the aim of teaching and learning advanced control theories.
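The kinematic part of such a model can be sketched as follows. The wheel angles and the wheel-to-centre distance below are illustrative placeholders, not the actual Festo Robotino® geometry, and the closed-form inverse assumes a symmetric 120° wheel arrangement:

```python
import math

# Assumed geometry: three omni wheels spaced 120 degrees apart at
# distance R from the robot centre (illustrative values only).
ALPHA = [math.radians(a) for a in (90.0, 210.0, 330.0)]
R = 0.125  # wheel-to-centre distance in metres (assumed)

def wheel_speeds(vx, vy, omega):
    """Inverse kinematics: map a body twist (vx, vy, omega) in the
    robot frame to the three wheel rim speeds."""
    return [-math.sin(a) * vx + math.cos(a) * vy + R * omega for a in ALPHA]

def body_twist(u):
    """Forward kinematics: recover the body twist from wheel rim speeds.

    For symmetric 120-degree wheel spacing, the inverse of the wheel
    Jacobian has the closed form below.
    """
    vx = (2.0 / 3.0) * sum(-math.sin(a) * ui for a, ui in zip(ALPHA, u))
    vy = (2.0 / 3.0) * sum(math.cos(a) * ui for a, ui in zip(ALPHA, u))
    omega = sum(u) / (3.0 * R)
    return vx, vy, omega
```

As a sanity check, driving all three wheels at the same rim speed produces a pure rotation (vx = vy = 0), which is the expected behaviour for a symmetric omnidirectional platform.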

Keywords: mobile robot, omni-direction wheel, mathematical model, holonomic mobile robot

Procedia PDF Downloads 591
4737 Measuring Biobased Content of Building Materials Using Carbon-14 Testing

Authors: Haley Gershon

Abstract:

The transition from fossil fuel-based building materials to eco-friendly, biobased formulations plays a key role in sustainable building. The growing global demand for biobased materials in the building and construction industries heightens the importance of carbon-14 testing, an analytical method used to determine the percentage of biobased content in a material's ingredients. This presentation will focus on the use of carbon-14 analysis within the building materials sector. Carbon-14, also known as radiocarbon, is a weakly radioactive isotope present in all living organisms; any fossil material older than about 50,000 years contains no measurable carbon-14. The radiocarbon method is thus used to determine the amount of carbon-14 present in a given sample. Carbon-14 testing is performed according to ASTM D6866, a standard test method developed specifically for biobased content determination of materials in solid, liquid, or gaseous form. Samples are combusted, converted into solid graphite, pressed onto a metal disc, and mounted onto the wheel of an accelerator mass spectrometer (AMS), which counts the amount of carbon-14 present. By submitting samples for carbon-14 analysis, manufacturers of building materials can confirm the biobased content of the ingredients used. Biobased testing through carbon-14 analysis, performed according to standardized methods such as ASTM D6866, ISO 16620, and EN 16640, reports results as percent biobased content: the percentage of carbon coming from biomass sources versus fossil sources. Products sourced entirely from plants, animals, or microbiological material are 100% biobased, while products sourced only from fossil fuel material are 0% biobased. Any result between 0% and 100% indicates a mixture of biomass-derived and fossil fuel-derived sources. Furthermore, biobased testing allows manufacturers to submit eligible materials for certification and eco-label programs such as the United States Department of Agriculture (USDA) BioPreferred Program. This voluntary labeling initiative lets companies apply to receive and display the USDA Certified Biobased Product label, which states third-party verification and displays a product's percentage of biobased content. The program includes a specific category for building materials; to qualify for biobased certification under this category, products must meet criteria such as a minimum of 62% biobased content for wall coverings, 25% for lumber, and 91% for non-carpet floor coverings. As a result, consumers can easily identify plant-based products in the marketplace.
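How a mixed formulation yields an intermediate percent-biobased-carbon figure can be illustrated with a short sketch. The ingredient layout and helper function below are hypothetical and only mimic the accounting; the actual ASTM D6866 result is measured from the carbon-14 content of a combusted sample, not computed from a recipe:

```python
def percent_biobased(ingredients):
    """Illustrative percent-biobased-carbon accounting.

    ingredients: list of (mass_g, carbon_fraction, is_biobased) tuples.
    Returns biogenic carbon as a percentage of total organic carbon,
    which is the quantity the radiocarbon measurement reports.
    """
    total_carbon = sum(m * cf for m, cf, _ in ingredients)
    bio_carbon = sum(m * cf for m, cf, bio in ingredients if bio)
    return 100.0 * bio_carbon / total_carbon

# A 50/50 blend (by carbon) of a plant-derived resin and a fossil
# filler would read as 50% biobased; a fully plant-derived product
# would read as 100%.
```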

Keywords: carbon-14 testing, biobased, biobased content, radiocarbon dating, accelerator mass spectrometry, AMS, materials

Procedia PDF Downloads 156
4736 NanoFrazor Lithography for Advanced 2D and 3D Nanodevices

Authors: Zhengming Wu

Abstract:

NanoFrazor lithography systems were developed as the first true alternative or extension to standard maskless nanolithography methods such as electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL), in which a heatable, ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact of the probe on a thermally responsive resist generates these high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows markerless overlay, which has been achieved with sub-5 nm accuracy, and supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features has been demonstrated at below 10 nm resolution. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices and as a means of improving the performance of existing device concepts. The application range for this new nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning at below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features.
Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no developing or other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only by the unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam.

Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits

Procedia PDF Downloads 68
4735 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict algae concentration in the ocean using bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represents the water environment of the sea around Korea. The method uses GOCI images of water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, together with observed weather data (i.e., humidity, temperature, and atmospheric pressure), as a database with which to apply the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the algae concentration from the extracted features. The deep learning model was trained with a backpropagation learning strategy. The established method was tested against and compared with the GOCI Data Processing System (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration despite the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
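The two-stage CNN-plus-ANN pipeline described above can be sketched in miniature, framework-free. The kernel, pooling choice, and regression weights below are illustrative placeholders, not the study's trained model:

```python
# Sketch of the pipeline: convolutional feature extraction from a
# radiance image, then a regression stage mapping features to an
# algae-concentration estimate. Weights here are placeholders; in the
# study both stages are learned via backpropagation.

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def global_max_pool(fmap):
    """Collapse a feature map to a single scalar feature."""
    return max(max(row) for row in fmap)

def extract_features(image, kernels):
    """CNN stage: one scalar feature per (learned) kernel."""
    return [global_max_pool(conv2d(image, k)) for k in kernels]

def regress(features, weights, bias):
    """ANN stage, sketched here as a single linear neuron."""
    return bias + sum(w * f for w, f in zip(weights, features))
```

In the actual study each GOCI band would be a separate input channel and both the kernels and the regression weights would be fitted to observed concentrations; this sketch only shows the data flow.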

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 181
4734 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies face global competition and enormous cost pressure. Machine learning applications can help reduce production costs and create added value. Predictive quality secures product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods can process large amounts of data, deal with unfavourable row-to-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets; changes in production data manifest themselves as trends, systematic shifts, and seasonal effects. Machine learning applications therefore require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, time periods with comparable production conditions can be identified by applying a concept drift method. Furthermore, a classification model is developed to evaluate feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, an AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
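The idea of keeping only features that remain important across the identified time periods can be sketched as follows. The importance threshold and the data layout are assumptions for illustration, not the paper's actual selection criterion:

```python
def select_stable_features(importances_by_period, min_importance=0.05):
    """Keep features whose importance clears a threshold in every period.

    importances_by_period: dict mapping a period label to a dict of
    {feature_name: importance}, e.g. the per-period feature importances
    of a classifier retrained on each comparable data subset.
    """
    periods = list(importances_by_period.values())
    # A feature is a candidate only if it appears in every period.
    candidates = set(periods[0])
    for imp in periods[1:]:
        candidates &= set(imp)
    # Keep candidates that are sufficiently important in all periods.
    return sorted(f for f in candidates
                  if all(imp[f] >= min_importance for imp in periods))
```

A feature that is important in one drift regime but negligible in another is dropped, which is exactly the "comparable and stable" criterion the abstract describes, at the cost of discarding features that matter only occasionally.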

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 157
4733 Great Art for Little Children - Games in School Education as Integration of Polish-Language, Eurhythmics, Artistic and Mathematical Subject Matter

Authors: Małgorzata Anna Karczmarzyk

Abstract:

Who is the contemporary child? What distinctive features set him or her apart from earlier generations? And how should we teach in this dissimilar social reality? These questions will constitute the key to my reflections on contemporary early school education, for, to my mind, games have become highly significant for the modern model of education. Publications and research increasingly employ games to increase competence in business, tutoring, and coaching, as well as in academic education. Thanks to games, students and subordinates can be taught such abilities as problem thinking, creativity, consistent fulfillment of goals, resourcefulness, and communication skills.

Keywords: games, art, children, school education, integration

Procedia PDF Downloads 847
4732 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique

Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan

Abstract:

In this paper, a comparison of k-Nearest Neighbor (k-NN) algorithms for classifying the 3D EEG model in brain balancing is presented. EEG signals were recorded from 51 healthy subjects. Development of the 3D EEG models involves pre-processing of the raw EEG signals and construction of spectrogram images, from which maximum power spectral density (PSD) values were extracted as features. There are three indices for the balanced brain: index 3, index 4, and index 5, and the EEG signals differ significantly across brain balancing index (BBI) values. The alpha (8–13 Hz) and beta (13–30 Hz) bands were used as input signals for the classification model. The k-NN classifier achieved 88.46% accuracy. These results show that k-NN can be used to predict the brain balancing index.
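A minimal sketch of the k-NN classification step, assuming Euclidean distance over the extracted PSD feature vectors (the feature values and labels below are illustrative, not the study's data):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a query feature vector by majority vote among its
    k nearest training samples under Euclidean distance.

    train: list of (feature_vector, label) pairs, e.g. maximum PSD
    values per band paired with a brain balancing index label.
    """
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]
```

In practice, features would be normalized before distance computation and k chosen by cross-validation; this sketch shows only the voting mechanism.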

Keywords: power spectral density, 3D EEG model, brain balancing, kNN

Procedia PDF Downloads 478