Search results for: feature method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19992

19452 A Superposition Method in Analyses of Clamped Thick Plates

Authors: Alexander Matrosov, Guriy Shirunov

Abstract:

A superposition method based on Lame's idea is used to obtain a general analytical solution for the stress and strain state of a rectangular isotropic elastic thick plate. The solution is built from three solutions of the method of initial functions in the form of double trigonometric series. Results for the bending of a thick plate under normal stress on its top face, with two opposite sides clamped and the others free of load, are presented and compared with FEM modelling.
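
As a pointer to the machinery involved, the sketch below evaluates the classical Navier double trigonometric series for a simply supported thin plate under uniform load; it is not the authors' superposition solution for clamped thick plates, and all parameter values are illustrative.

```python
# A minimal sketch: Navier's double sine series for a simply supported
# thin plate under uniform load q0, illustrating the double-series
# machinery that superposition approaches build on. Values are made up.
import numpy as np

def navier_deflection(x, y, a=1.0, b=1.0, q0=1.0, D=1.0, terms=50):
    """Deflection w(x, y) as a truncated double sine series (odd m, n)."""
    w = 0.0
    for m in range(1, 2 * terms, 2):          # odd terms only
        for n in range(1, 2 * terms, 2):
            denom = m * n * ((m / a) ** 2 + (n / b) ** 2) ** 2
            w += np.sin(m * np.pi * x / a) * np.sin(n * np.pi * y / b) / denom
    return 16.0 * q0 / (np.pi ** 6 * D) * w

print(navier_deflection(0.5, 0.5))   # centre deflection, ~0.00406*q0*a^4/D
```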

Keywords: general solution, method of initial functions, superposition method, thick isotropic plates

Procedia PDF Downloads 598
19451 Solution of Hybrid Fuzzy Differential Equations

Authors: Mahmood Otadi, Maryam Mosleh

Abstract:

Hybrid differential equations have a wide range of applications in science and engineering. In this paper, the homotopy analysis method (HAM) is applied to obtain series solutions of hybrid differential equations. Using the homotopy analysis method, it is possible to find the exact solution or an approximate solution of the problem. Comparisons are made between the improved predictor-corrector method, the homotopy analysis method, and the exact solution. Finally, we illustrate our approach with some numerical examples.
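
A minimal sketch of the HAM recursion, shown on the crisp test problem y' = y, y(0) = 1 rather than a hybrid fuzzy system; the truncation order and the choice of convergence parameter h = -1 are assumptions. With h = -1 the series reduces to the Taylor series of exp(t).

```python
# HAM deformation recursion for N[y] = y' - y with L = d/dt:
# y_m = chi_m * y_{m-1} + h * integral_0^t (y'_{m-1} - y_{m-1}) dt
import sympy as sp

t, h = sp.symbols("t h")
y = [sp.Integer(1)]                        # y0: initial guess from y(0) = 1
for m in range(1, 6):
    chi = 0 if m == 1 else 1               # chi_m switch in the recursion
    R = sp.diff(y[m - 1], t) - y[m - 1]    # residual of N[y] = y' - y
    y.append(sp.expand(chi * y[m - 1] + h * sp.integrate(R, (t, 0, t))))

series = sp.expand(sum(y).subs(h, -1))
print(series)    # 1 + t + t**2/2 + t**3/6 + ... (Taylor series of exp(t))
```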

Keywords: fuzzy number, fuzzy ODE, HAM, approximate method

Procedia PDF Downloads 511
19450 Optimal Control of Volterra Integro-Differential Systems Based on Legendre Wavelets and Collocation Method

Authors: Khosrow Maleknejad, Asyieh Ebrahimzadeh

Abstract:

In this paper, the numerical solution of the optimal control problem (OCP) for systems governed by Volterra integro-differential (VID) equations is considered. The method is developed by means of Legendre wavelet approximation and the collocation method. The properties of the Legendre wavelet, together with the Gaussian integration method, are utilized to reduce the problem to a nonlinear programming one. Some numerical examples are given to confirm the accuracy and ease of implementation of the method.
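
A minimal sketch of one building block, not the full wavelet scheme: Gauss-Legendre quadrature approximating the Volterra term, the step that lets the optimal control problem collapse to a finite nonlinear program. The kernel, state, and node count below are illustrative assumptions.

```python
# Approximate I(x) = integral_0^x k(x, t) y(t) dt on mapped Gauss nodes.
import numpy as np

def volterra_term(x, kernel, y, n_nodes=16):
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    t = 0.5 * x * (nodes + 1.0)            # map [-1, 1] -> [0, x]
    return 0.5 * x * np.sum(weights * kernel(x, t) * y(t))

# Test against integral_0^x e^{x-t} t dt = e^x - x - 1
approx = volterra_term(1.0, lambda x, t: np.exp(x - t), lambda t: t)
print(approx, np.e - 2.0)                  # both ~0.71828
```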

Keywords: collocation method, Legendre wavelet, optimal control, Volterra integro-differential equation

Procedia PDF Downloads 388
19449 A Class of Third Derivative Four-Step Exponential Fitting Numerical Integrator for Stiff Differential Equations

Authors: Cletus Abhulimen, L. A. Ukpebor

Abstract:

In this paper, we construct a class of four-step third-derivative exponentially fitted integrators of order six for the numerical integration of stiff initial-value problems of the type y' = f(x, y), y(x₀) = y₀. The implicit method has free parameters which allow it to be fitted automatically to exponential functions. For effective implementation of the proposed method, we adopted the technique of splitting the method into predictor and corrector schemes. The stability of the new method was analysed; the results show that the method is A-stable. Finally, numerical examples are presented to show the efficiency and accuracy of the new method.
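
The exponential-fitting idea can be illustrated far more simply than the authors' four-step order-six scheme: the sketch below fits the free parameter of a one-parameter theta method so that y' = λy is integrated exactly. The stiff test problem and step size are made-up values.

```python
# Exponential fitting on the theta method: choose theta(z) so that the
# scheme reproduces exp(z) per step for y' = lam*y (theta -> 1/2 as z -> 0).
import numpy as np

def fitted_theta(z):
    return (np.exp(z) - 1.0 - z) / (z * (np.exp(z) - 1.0))

lam, h, y = -50.0, 0.1, 1.0            # stiff test problem y' = lam*y, y(0)=1
theta = fitted_theta(lam * h)
for _ in range(10):                    # implicit update solved in closed form
    y = y * (1.0 + (1.0 - theta) * lam * h) / (1.0 - theta * lam * h)
print(y, np.exp(lam * 1.0))            # matches exp(lam*t) at t = 1 exactly
```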

Keywords: third derivative four-step, exponentially fitted, A-stable, stiff differential equations

Procedia PDF Downloads 265
19448 Modern Detection and Description Methods for Natural Plants Recognition

Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert

Abstract:

Earth is sometimes called the green planet; it is a terrestrial planet and the fifth largest in the solar system. Plants are not distributed uniformly around the world, and even the variation of plant species differs within a single region. Plants matter well beyond botany: they appear in fields such as literature and mythology, and they hold useful and inestimable historical records. It is hard to imagine the world without the oxygen that is produced mostly by plants, and no other living species could exist on Earth without them, as they also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, and these roles affect the environment and climate. Plants are the main components of agricultural activities, from which many countries benefit, so they influence the political and economic situation and the future of countries. Given this importance, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details. Automatic recognition of plants is a novel field that supports other research and future studies. Plants survive in different places and regions by means of adaptations, their special means of coping with hard living conditions. Weather is one of the parameters that affect plant life and presence in an area, and recognition of plants under different weather conditions opens a new window of research in the field; only natural images are usable for treating weather conditions as new factors, which makes the resulting system generalized and useful. To keep the system general, the distance from the camera to the plants is considered as another factor, as is the change of light intensity in the environment over the course of the day. Adding these factors poses a substantial challenge for building an accurate and robust system, yet the development of an efficient plant recognition system is essential and effective. The leaf is one important component of a plant that can be used to implement automatic plant recognition systems without any human interaction. Due to the nature of the images used, a characteristic investigation of the plants is carried out, and leaves are selected as the first, most reliable characteristics. Four plant species are specified, with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and implemented system, the image dataset, and the results. The algorithm and classification procedure are explained in detail. The first steps, feature detection and description of the visual information, are performed using the scale-invariant feature transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed, and in addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
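
A minimal sketch of the three detector/descriptor pairings named above, assuming OpenCV (opencv-python ≥ 4.4, where SIFT lives in the main module). "FAST-SIFT" and "HARRIS-SIFT" are read as FAST/Harris keypoints described by SIFT; the image path and detector settings are illustrative, not the authors' pipeline.

```python
import cv2

img = cv2.imread("leaf.jpg", cv2.IMREAD_GRAYSCALE)   # path is an assumption

sift = cv2.SIFT_create()
kp_sift, desc_sift = sift.detectAndCompute(img, None)          # plain SIFT

fast = cv2.FastFeatureDetector_create(threshold=25)
kp_fast, desc_fast = sift.compute(img, fast.detect(img, None)) # FAST-SIFT

corners = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True)
kp_harris = [cv2.KeyPoint(float(x), float(y), 20) for x, y in corners[:, 0]]
kp_harris, desc_harris = sift.compute(img, kp_harris)          # HARRIS-SIFT

print(len(kp_sift), len(kp_fast), len(kp_harris))
```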

Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT

Procedia PDF Downloads 276
19447 Global Optimization: The Alienor Method Mixed with Piyavskii-Shubert Technique

Authors: Guettal Djaouida, Ziadi Abdelkader

Abstract:

In this paper, we study a coupling of the Alienor method with the Piyavskii-Shubert algorithm. Classical multidimensional global optimization methods involve great implementation difficulties in high dimensions. The Alienor method transforms a multivariable function into a function of a single variable, for which efficient and rapid methods for calculating the global optimum can be used. This simplification is based on the use of a reducing transformation called Alienor.
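
A minimal sketch of the Piyavskii-Shubert step that the Alienor reduction feeds: Lipschitz lower bounds decide where the univariate function is sampled next. The test function and Lipschitz constant are illustrative assumptions.

```python
import numpy as np

def piyavskii(f, a, b, L, n_iter=60):
    xs, fs = [a, b], [f(a), f(b)]
    for _ in range(n_iter):
        best = None
        for i in range(len(xs) - 1):     # lower-bound minimum per interval
            x_new = 0.5 * (xs[i] + xs[i + 1]) + (fs[i] - fs[i + 1]) / (2 * L)
            bound = 0.5 * (fs[i] + fs[i + 1]) - 0.5 * L * (xs[i + 1] - xs[i])
            if best is None or bound < best[0]:
                best = (bound, x_new)
        j = int(np.searchsorted(xs, best[1]))   # sample where the bound dips
        xs.insert(j, best[1]); fs.insert(j, f(best[1]))
    k = int(np.argmin(fs))
    return xs[k], fs[k]

x_min, f_min = piyavskii(lambda x: np.sin(x) + np.sin(10 * x / 3), 2.7, 7.5, L=6.0)
print(x_min, f_min)   # global minimum near x ~ 5.15, f ~ -1.90
```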

Keywords: global optimization, reducing transformation, α-dense curves, Alienor method, Piyavskii-Shubert algorithm

Procedia PDF Downloads 502
19446 Intersubjectivity of Forensic Handwriting Analysis

Authors: Marta Nawrocka

Abstract:

In every legal proceeding in which expert evidence is taken, a major concern is the assessment of the evidential value of expert reports. Judicial institutions rely heavily on expert reports when making decisions, because they usually do not possess the 'special knowledge' from certain fields of science that would allow them to verify the results presented. In handwriting studies, standards of analysis have been developed. They unify the procedures used by experts in comparing signs and in constructing expert reports. However, the methods used by experts are usually qualitative: they rely on the expert's knowledge and experience and, in effect, leave a significant margin of discretion in the assessment. Moreover, the standards experts use are still not very precise, and the process of reaching conclusions is poorly understood. These circumstances indicate that expert opinions in the field of handwriting analysis may, for many reasons, not be sufficiently reliable. It is assumed that this state of affairs has its source in the very low intersubjectivity of the measuring scales and analysis procedures that constitute this kind of analysis. Intersubjectivity is a feature of cognition which (in relation to methods) indicates the degree of consistency of results that different people obtain using the same method. The higher the level of intersubjectivity, the more reliable and credible the method can be considered. The aim of the conducted research was to determine the degree of intersubjectivity of the methods used by experts in handwriting analysis. 30 experts took part in the study, and each of them received two signatures, of varying degrees of readability, for analysis. Their task was to distinguish graphic characteristics in the signature, estimate the evidential value of the characteristics found, and estimate the evidential value of the signature as a whole. The obtained results were compared using Krippendorff's alpha statistic, which numerically determines the degree of agreement between the results (assessments) that different people obtain under the same conditions using the same method. Estimating the degree of agreement of the experts' results for each of these tasks made it possible to determine the degree of intersubjectivity of the studied method. The study showed that during the analysis, the experts identified different signature characteristics and attributed different evidential value to them; in this respect, intersubjectivity turned out to be low. In addition, experts named and described the same characteristics in various ways, and the language used was often inconsistent and imprecise, so significant differences were noted at the level of language and nomenclature. On the other hand, experts attributed a similar evidential value to the signature as a whole (the set of characteristics), which indicates that in this respect they were relatively consistent.
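
A minimal sketch of the agreement statistic used, assuming the open-source krippendorff Python package (pip install krippendorff); the ratings below are fabricated solely to show the computation and are not the study's data.

```python
import numpy as np
import krippendorff

ratings = np.array([          # rows: experts, columns: rated characteristics
    [1, 2, 3, 3, 2, 1, 4, 1, 2, np.nan],
    [1, 2, 3, 3, 2, 2, 4, 1, 2, 5],
    [np.nan, 3, 3, 3, 2, 3, 4, 2, 2, 5],
])
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.3f}")   # 1.0 = perfect agreement
```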

Keywords: forensic sciences experts, handwriting analysis, inter-rater reliability, reliability of methods

Procedia PDF Downloads 149
19445 Protein Remote Homology Detection by Using Profile-Based Matrix Transformation Approaches

Authors: Bin Liu

Abstract:

As one of the most important tasks in protein sequence analysis, protein remote homology detection has been studied for decades. Currently, profile-based methods show state-of-the-art performance, and the Position-Specific Frequency Matrix (PSFM) is a widely used profile. However, the profiles contain noise introduced by amino acids with low frequencies. In this study, we propose the Top Frequency Profile (TFP), which removes this noise from the PSFM by discarding the amino acids with low frequencies. Three new matrix transformation methods, Auto-cross covariance (ACC), Tri-gram, and K-separated bigram (KSB), are performed on these profiles to convert them into fixed-length feature vectors. Combined with Support Vector Machines (SVMs), the predictors are constructed. Evaluated on two benchmark datasets, the proposed methods outperform other state-of-the-art predictors.
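
A minimal sketch of the K-separated bigram (KSB) transformation described above: a PSFM of shape L x 20 collapses into a fixed 400-dimensional vector per separation k, so profiles of any length can feed a standard SVM. The random PSFM is a stand-in for a real profile.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 120                                     # sequence length
psfm = rng.random((L, 20))
psfm /= psfm.sum(axis=1, keepdims=True)     # rows are amino-acid frequencies

def ksb(profile, k=1):
    """400-d K-separated bigram: F[a, b] = sum_i P[i, a] * P[i+k, b]."""
    return (profile[:-k].T @ profile[k:]).ravel()

feature_vector = np.concatenate([ksb(psfm, k) for k in (1, 2, 3)])
print(feature_vector.shape)                 # (1200,) fixed-length SVM input
```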

Keywords: protein remote homology detection, protein fold recognition, top frequency profile, support vector machines

Procedia PDF Downloads 125
19444 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application of deep machine learning to classifying gamma-ray and neutron events is investigated in this study. Identification using a convolutional neural network and a recursive neural network has shown significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction methods, followed by classification. The features extracted from the spectrum profiles capture patterns and relationships that represent the actual spectrum energy in a low-dimensional space, and increasing the separation between classes in feature space improves the achievable classification accuracy. Neural networks extract features through a variety of nonlinear transformations and mathematical optimization, whereas principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information is preprocessed by computing its frequency components over time and using them as the training dataset. The Fourier transform used to extract the frequency components is optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4, and the readout electronic noise was simulated by tuning the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, further improved the classification accuracy of the neural networks, and discriminating gamma and neutron events in a single prediction approach achieved high accuracy with deep learning. The findings show that classification accuracy improves when the spectrogram preprocessing stage is applied to the gamma and neutron spectra of different isotopes. Tuning the deep learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database, while ensemble learning contributed significantly to the final prediction.
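
A minimal sketch of the preprocessing stage described above, assuming SciPy: a windowed short-time Fourier transform turns a detector trace into a time-frequency image that a CNN can ingest. The synthetic pulse below stands in for the Geant4-simulated CdTe signals.

```python
import numpy as np
from scipy import signal

fs = 1_000_000                                     # 1 MHz sampling, assumed
t = np.arange(0, 0.01, 1 / fs)
trace = np.exp(-t * 2e3) * np.sin(2 * np.pi * 5e4 * t)        # toy pulse
trace += np.random.default_rng(1).normal(0.0, 0.05, t.size)   # readout noise

f, tt, Sxx = signal.spectrogram(trace, fs=fs,
                                window="hann",     # windowing choice matters
                                nperseg=256, noverlap=192)
print(Sxx.shape)   # (freq bins, time frames): one training image per pulse
```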

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 150
19443 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue origin of these DNA fragments from plasma can enable more accurate and faster disease diagnosis and more precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insight into tissue-specific and disease-specific regulatory mechanisms. Several studies in cancer liquid biopsy integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, at substantial cost and time. To overcome these limitations, predicting OCRs from whole genome sequencing (WGS) data is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, as an important epigenetic feature, from cell-free DNA whole genome sequencing data. To this end, local sequencing depth is fed to the proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method applies signal processing to the sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph-cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions for human blood samples in the ATAC-DB database. The overlap between the predicted open chromatin regions and the regions experimentally validated by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. Since OCRs are mostly located at the transcription start sites (TSS) of genes, we also compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, finding agreement of around 52.04% with all genes and ~78% with the housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA sequencing data is a very challenging computational problem due to several confounding factors, such as technical and biological variation. Although this approach is in its infancy, there has already been an attempt to apply it, the tool OCRDetector, which comes with restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and reliance on multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised manner, which results in faster performance and decent accuracy. Overall, we investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used alongside other tools to study the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy analysis.
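
A heavily simplified sketch of the signal-processing backbone only (the KMeans split below is a stand-in for the paper's graph-cut correlation clustering): binned sequencing depth is normalised, converted to discrete Fourier features, and partitioned into two clusters standing in for OCR+/OCR-. The depth track is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_bins, w = 2000, 50
depth = rng.poisson(30, n_bins).astype(float)
depth[400:450] *= 0.6                       # toy nucleosome-depleted dip
depth /= depth.mean()                       # count normalisation

windows = depth[: n_bins // w * w].reshape(-1, w)
dft = np.abs(np.fft.rfft(windows, axis=1))  # per-window frequency features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(dft)
print(labels[:12])                          # 0/1: candidate OCR-/OCR+ windows
```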

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 150
19442 Unfolding the Social Clash between Online and Non-Online Transportation Providers in Bandung

Authors: Latifah Putti Tiananda, Sasti Khoirunnisa, Taniadiana Yapwito, Jessica Noviena

Abstract:

Innovations are often met with two responses: acceptance or rejection. In the past few years, Indonesia has been experiencing a revolution in transportation services that operate through online platforms. This improvement is welcomed by consumers yet simultaneously challenged by conventional, 'non-online' transportation providers. Conflicts arise as this online mode of transportation cuts into the income of non-online transportation workers. Physical confrontations and demonstrations have prompted demands for intervention by the central authority; however, the obscurity of legal measures from the government prolongs the social instability. Bandung, a city in West Java with the highest rate of online transportation usage, has recently issued a recommendation withholding the operation of online transportation services in order to maintain peace and order. This paper therefore seeks to elaborate the social unrest between the two contesting transportation actors in Bandung and to explore community-based approaches to solving this problem. Using a qualitative research method, the paper also features in-depth interviews with directly involved sources from Bandung.

Keywords: Bandung, market competition, online transportation services, social unrest

Procedia PDF Downloads 274
19441 Formulation of Corrector Methods from 3-Step Hybrid Adams Type Methods for the Solution of First Order Ordinary Differential Equation

Authors: Y. A. Yahaya, Ahmad Tijjani Asabe

Abstract:

This paper focuses on the formulation of 3-step hybrid Adams type methods for the solution of first-order ordinary differential equations (ODEs). The methods were derived at both grid and off-grid points using multistep collocation schemes and evaluated at selected points to produce a block Adams type method and an Adams-Moulton method, respectively. The method with the highest order was selected to serve as the corrector. The methods were shown to be convergent and efficient. Numerical experiments reveal that the hybrid Adams type methods perform better than the conventional Adams-Moulton method.
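
A minimal sketch of the predictor-corrector pattern discussed above, shown with the classical explicit/implicit Adams pair rather than the authors' hybrid off-grid schemes: a third-order Adams-Bashforth predictor and an Adams-Moulton corrector run in PECE mode, with RK4 supplying the starting values. The test problem and step size are made up.

```python
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y); k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2); k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adams_pece(f, t, y, h, n):
    ys, fs = [y], [f(t, y)]
    for _ in range(2):                         # RK4 supplies starting values
        y = rk4_step(f, t, y, h); t += h
        ys.append(y); fs.append(f(t, y))
    for _ in range(n - 2):
        yp = ys[-1] + h / 12 * (23 * fs[-1] - 16 * fs[-2] + 5 * fs[-3])  # predict (AB3)
        t += h
        yc = ys[-1] + h / 12 * (5 * f(t, yp) + 8 * fs[-1] - fs[-2])      # correct (AM)
        ys.append(yc); fs.append(f(t, yc))
    return ys

ys = adams_pece(lambda t, y: -2 * t * y, 0.0, 1.0, 0.05, 20)
print(ys[-1], np.exp(-1.0))    # y' = -2ty solved to t = 1: exact exp(-t^2)
```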

Keywords: Adams-Moulton type (AMT), corrector method, off-grid, block method, convergence analysis

Procedia PDF Downloads 626
19440 Estimation of Train Operation Using an Exponential Smoothing Method

Authors: Taiyo Matsumura, Kuninori Takahashi, Takashi Ono

Abstract:

The purpose of this research is to improve the convenience of waiting for trains at level crossings and stations, and to prevent accidents resulting from forcible entry into level crossings, by providing level crossing users and passengers with information on when the next train will pass through or arrive. We propose methods for estimating train operation by means of an average value method, a variable response smoothing method, and an exponential smoothing method, on the basis of open data, which is of low accuracy but distributes performance schedules in real time. We then examined the accuracy of the estimates. The results showed that applying an exponential smoothing method is valid.
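
A minimal sketch of the exponential smoothing estimator the abstract favours: each observation updates a running estimate that serves as the next prediction. The travel times and smoothing constant are invented for illustration.

```python
def exp_smooth(observations, alpha=0.3, s0=None):
    s = observations[0] if s0 is None else s0
    history = [s]
    for x in observations[1:]:
        s = alpha * x + (1 - alpha) * s   # blend new observation with past
        history.append(s)
    return history

travel_times = [182, 190, 176, 205, 188, 181]   # seconds between checkpoints
print(exp_smooth(travel_times))                 # last value = next prediction
```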

Keywords: exponential smoothing method, open data, operation estimation, train schedule

Procedia PDF Downloads 388
19439 A Review on Higher-Order Spline Techniques for Solving Burgers Equation Using B-Spline Methods and Variation of B-Spline Techniques

Authors: Maryam Khazaei Pool, Lori Lewis

Abstract:

This is a summary of articles based on higher-order B-spline methods and variations of B-spline methods such as the Quadratic B-spline Finite Elements Method, the Exponential Cubic B-spline Method, the Septic B-spline Technique, the Quintic B-spline Galerkin Method, and B-spline Galerkin Methods based on the Quadratic B-spline Galerkin method (QBGM) and the Cubic B-spline Galerkin method (CBGM). In this paper, we study B-spline methods and variations of B-spline techniques for finding a numerical solution to Burgers' equation. A set of fundamental definitions, including Burgers' equation, spline functions, and B-spline functions, is provided. For each method, the main technique is discussed, as well as the discretization and stability analysis. A summary of the numerical results is provided, and the efficiency of each method presented is discussed. A general conclusion compares the computational results of all the presented schemes and describes the effectiveness and advantages of these methods.
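
A minimal sketch of the approximation space all the reviewed variants build on, assuming SciPy ≥ 1.8: a cubic B-spline basis on a clamped uniform knot vector, checked via the partition-of-unity property. The Burgers' solvers themselves are omitted.

```python
import numpy as np
from scipy.interpolate import BSpline

k, n_elems = 3, 10                  # cubic splines, 10 elements on [0, 1]
knots = np.concatenate([[0.0] * k, np.linspace(0.0, 1.0, n_elems + 1),
                        [1.0] * k])  # clamped (open) knot vector

x = np.linspace(0.0, 1.0, 7)
B = BSpline.design_matrix(x, knots, k).toarray()   # rows: points, cols: basis
print(B.shape)        # (7, 13): n_basis = len(knots) - k - 1
print(B.sum(axis=1))  # partition of unity: each row sums to 1
```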

Keywords: Burgers’ equation, Septic B-spline, modified cubic B-spline differential quadrature method, exponential cubic B-spline technique, B-spline Galerkin method, quintic B-spline Galerkin method

Procedia PDF Downloads 126
19438 Mechanical Characterization of Banana by Inverse Analysis Method Combined with Indentation Test

Authors: Juan F. P. Ramírez, Jésica A. L. Isaza, Benjamín A. Rojano

Abstract:

This study proposes a novel method to determine the mechanical properties of fruits by means of indentation tests, combining experimental results with a numerical finite element model. The results presented correspond to a simplified numerical model of a banana. The banana was assumed to be a one-layer material with isotropic linear elastic mechanical behavior; the Young's modulus found is 0.3 MPa. The method will be extended to multilayer models in further studies.
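
A minimal sketch of the inverse-analysis loop: a forward model predicts the indentation force-displacement curve for a trial Young's modulus, and an optimiser tunes the modulus until prediction matches measurement. Here an analytical Hertzian spherical-contact formula stands in for the paper's finite element model, and the "measured" data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

R, nu = 0.004, 0.49                    # indenter radius (m), Poisson's ratio
depth = np.linspace(1e-4, 1e-3, 20)    # indentation depths (m)

def forward(E):                        # Hertz: F = (4/3) E* sqrt(R) d^1.5
    e_star = E / (1.0 - nu ** 2)
    return (4.0 / 3.0) * e_star * np.sqrt(R) * depth ** 1.5

measured = forward(0.3e6) * (1 + 0.02 * np.random.default_rng(2).normal(size=20))

res = minimize_scalar(lambda E: np.sum((forward(E) - measured) ** 2),
                      bounds=(1e4, 1e7), method="bounded")
print(res.x / 1e6, "MPa")              # recovers ~0.3 MPa
```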

Keywords: finite element method, fruits, inverse analysis, mechanical properties

Procedia PDF Downloads 358
19437 Linear Array Geometry Synthesis with Minimum Sidelobe Level and Null Control Using Taguchi Method

Authors: Amara Prakasa Rao, N. V. S. N. Sarma

Abstract:

This paper describes the synthesis of linear array geometry with minimum sidelobe level and null control using the Taguchi method. Based on the concept of the orthogonal array, the Taguchi method effectively reduces the number of tests required in an optimization process. It has been successfully applied in many fields, such as mechanical engineering, chemical engineering, power electronics, etc. Compared to evolutionary methods such as genetic algorithms, simulated annealing, and particle swarm optimization, the Taguchi method is much easier to understand and implement, and requires less computation/iteration to optimize the problem. Different cases are considered to illustrate the performance of this technique. Simulation results show that this method outperforms the other evolutionary algorithms (like GA and PSO) for smart antenna system design.
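
A minimal sketch of the objective such a synthesis optimises (the Taguchi search loop itself is omitted): the array factor of a symmetric linear array and its peak sidelobe level. The element positions, a uniform half-wavelength layout, are illustrative.

```python
import numpy as np

def array_factor_db(positions_wl, theta):
    """AF of a symmetric 2N-element array; positions in wavelengths."""
    u = np.cos(np.radians(theta))
    af = 2.0 * np.cos(2.0 * np.pi * np.outer(u, positions_wl)).sum(axis=1)
    af = np.abs(af) / np.abs(af).max()
    return 20.0 * np.log10(np.maximum(af, 1e-6))

theta = np.linspace(0.0, 180.0, 1801)
af_db = array_factor_db(np.array([0.25, 0.75, 1.25, 1.75, 2.25]), theta)
sidelobes = af_db[np.abs(theta - 90.0) > 10.0]    # exclude main-beam region
print(f"peak SLL = {sidelobes.max():.1f} dB")     # uniform array: ~ -13 dB
```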

Keywords: array factor, beamforming, null placement, optimization method, orthogonal array, Taguchi method, smart antenna system

Procedia PDF Downloads 394
19436 Residual Power Series Method for System of Volterra Integro-Differential Equations

Authors: Zuhier Altawallbeh

Abstract:

This paper investigates approximate analytical solutions of a general form of systems of Volterra integro-differential equations by using the residual power series method (RPSM for short). The proposed method produces the solutions in terms of a convergent series, requires no linearization or small perturbation, and reproduces the exact solution when the solution is polynomial. Some examples are given to demonstrate the simplicity and efficiency of the proposed method. Comparisons with the Laplace decomposition algorithm verify that the new method is very effective and convenient for solving systems of pantograph equations.
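
A minimal sketch of the RPSM recursion on a single illustrative Volterra integro-differential equation, y'(t) = y(t) + ∫₀ᵗ y(s) ds with y(0) = 1 (the equation and truncation order are assumptions): each coefficient is fixed by forcing a derivative of the residual to vanish at t = 0.

```python
import sympy as sp

t = sp.symbols("t")
coeffs = [sp.Integer(1)]                   # c0 from the condition y(0) = 1
for k in range(1, 7):
    ck = sp.Symbol(f"c{k}")
    y = sum(c * t**i for i, c in enumerate(coeffs)) + ck * t**k
    residual = sp.diff(y, t) - y - sp.integrate(y, (t, 0, t))
    # RPSM condition: (k-1)-th derivative of the residual vanishes at t = 0
    coeffs.append(sp.solve(sp.diff(residual, t, k - 1).subs(t, 0), ck)[0])

print(sp.expand(sum(c * t**i for i, c in enumerate(coeffs))))
# 1 + t + t**2 + t**3/2 + 5*t**4/24 + ... (series of y'' = y' + y)
```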

Keywords: integro-differential equation, pantograph equations, system of initial value problems, residual power series method

Procedia PDF Downloads 418
19435 A Method for Improving the Embedded Runge Kutta Fehlberg 4(5)

Authors: Sunyoung Bu, Wonkyu Chung, Philsu Kim

Abstract:

In this paper, we introduce a method for improving the embedded Runge-Kutta-Fehlberg 4(5) method. At each integration step, the proposed method comprises two equations, for the solution and the error, respectively. The solution and error are obtained by solving an initial value problem whose solution carries the information about the error at each integration step. The constructed algorithm controls both the error and the time step size simultaneously, and performs well in computational cost compared with the original method. To assess its effectiveness, the EULR problem is solved numerically.
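
A minimal sketch of the embedded-pair mechanism being refined, shown with a tiny Heun/Euler 2(1) pair rather than Fehlberg's 4(5) tableau: the two solutions share stages, their gap estimates the local error, and the step size adapts. The tolerance and test problem are made up.

```python
import numpy as np

def heun_euler_adaptive(f, t, y, t_end, h=0.1, tol=1e-6):
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_hi = y + 0.5 * h * (k1 + k2)          # order 2 (Heun)
        err = np.abs(y_hi - (y + h * k1))       # gap to order 1 (Euler)
        if err <= tol:                          # accept step
            t, y = t + h, y_hi
        # step-size controller: grow or shrink toward the tolerance
        h *= 0.9 * min(2.0, max(0.2, (tol / max(err, 1e-16)) ** 0.5))
    return y

print(heun_euler_adaptive(lambda t, y: -y, 0.0, 1.0, 5.0), np.exp(-5.0))
```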

Keywords: embedded Runge-Kutta-Fehlberg method, initial value problem, EULR problem, integration step

Procedia PDF Downloads 463
19434 Seat Assignment Model for Student Admissions Process at Saudi Higher Education Institutions

Authors: Mohammed Salem Alzahrani

Abstract:

In this paper, the student admission process is studied in order to optimize the assignment of vacant seats, with three main objectives: utilizing all vacant seats, satisfying all program-of-study admission requirements, and maintaining fairness among all candidates. The Seat Assignment Method (SAM) is used to build the model and solve the optimization problem with the help of the Northwest Corner Method and the Least Cost Method. A closed-form formula is derived for prioritizing the assignment of a seat to a candidate based on SAM.
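
A minimal sketch of the Northwest Corner Method named above, read as programmes (supply = vacant seats) against candidate groups (demand); the seat and demand figures are invented for illustration.

```python
def northwest_corner(supply, demand):
    supply, demand = supply[:], demand[:]
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])     # fill the north-west-most open cell
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1                        # programme i's seats exhausted
        else:
            j += 1                        # group j fully seated
    return alloc

for row in northwest_corner([30, 50, 20], [25, 35, 40]):
    print(row)   # [25, 5, 0] / [0, 30, 20] / [0, 0, 20]
```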

Keywords: admission process model, assignment problem, Hungarian Method, Least Cost Method, Northwest Corner Method, SAM

Procedia PDF Downloads 498
19433 Learners' and Teachers' Experiences in Collaborative Learning

Authors: Bengi Sonyel, Kheder Kasem

Abstract:

Technology is growing fast nowadays, and everybody agrees that it should be used more in the educational field in order to achieve the maximum level of teaching and learning effectiveness. Collaborative learning is one of the most important subjects discussed widely in the last 20 years. With this growth of technology and the wide spread of e-learning systems, many face-to-face processes are moving completely online. Online collaborative learning is one of the new features recently applied in some e-learning systems, but there are still many differences between face-to-face collaborative learning and what actually occurs in a networked online environment. In this research, we compare face-to-face collaborative learning with online collaborative learning to identify the key success factors for achieving course outcomes. We also study teachers' and students' experience of today's e-learning systems, more specifically online collaborative systems, and their interaction with education-related technology. We apply quantitative and qualitative research methods in order to obtain accurate results. Finally, we gather all of our findings, analyze them, and try to identify the advantages and disadvantages as well as the current problems and possible solutions.

Keywords: collaborative learning, learning by doing, technology, teachers, learners experiences

Procedia PDF Downloads 525
19432 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard Si photonics multiple-project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth, and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current of as low as 35 nA, compared to a dark current of 1 µA for a conventional PD with an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V, and the responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. Disk geometries strongly and circularly confine light into an ultra-small volume in the form of whispering gallery modes; a laser, for instance, may benefit from a microdisk in which a single mode overlaps the gain materials both spatially and spectrally. Compared to microrings, the microdisk removes the inner boundary, enabling even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low power (≈ fJ) athermal Si modulator has been demonstrated at a bit rate of 25 Gbit/s by confining both photons and electrically driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on microdisks with radii of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser or passive-component applications, a microdisk must be designed very carefully to excite only the fundamental mode, since a microdisk inherently supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue, because local light absorption is mode-insensitive: light power carried by all modes is expected to be converted into photocurrent. Another benefit of the microdisk is that the internal power circulation avoids the need for a reflector. A complete simulation model, with all involved materials taken into account, is established to study the promise of microdisk structures for photodetectors by using the finite-difference time-domain (FDTD) method. In view of the current preliminary data, directions for further improving device performance are also discussed.
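
A minimal sketch of the FDTD machinery invoked above, stripped to one dimension in free space (a normalised Yee/leapfrog update with Courant number 0.5 and no absorbing boundaries): staggered E and H updates march a Gaussian pulse across the grid. Real microdisk modelling needs full 3-D geometry and material dispersion.

```python
import numpy as np

n_cells, n_steps, src = 400, 500, 100
ez, hy = np.zeros(n_cells), np.zeros(n_cells)
for t in range(n_steps):
    ez[1:] += 0.5 * (hy[:-1] - hy[1:])                  # update E from curl H
    ez[src] += np.exp(-0.5 * ((t - 40.0) / 12.0) ** 2)  # soft Gaussian source
    hy[:-1] += 0.5 * (ez[:-1] - ez[1:])                 # update H from curl E
print(f"peak |Ez| on grid after {n_steps} steps: {np.abs(ez).max():.3f}")
```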

Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors

Procedia PDF Downloads 407
19431 Emotiv EPOC BCI Matrix Speller Based on Single Emokey

Authors: S. M. Abdullah Al Mamun

Abstract:

Human-computer interaction (HCI) is an excellent area for researchers seeking to make daily life simpler and faster. The hardware necessary for any brain-computer interface (BCI) is generally expensive and not affordable for most people. Emotiv is one solution to this problem: it provides electroencephalograph (EEG) signals that reflect brain activity. The BCI virtual speller is an important application for people who have lost the use of their hands or their ability to speak because of disease or accident. In this paper, a matrix speller is designed for the first time for Bengali-speaking people around the world. Bengali is one of the most commonly spoken languages, so many disabled people will be able to express their wishes in their mother tongue. The application is also usable for social networks and daily communication. For this virtual keyboard, the well-known matrix speller method with column flashing is applied and controlled by a single Emokey only; Emokey is an Emotiv feature that translates emotional states into application inputs. The results show that the information transfer rate (ITR) was 29.4 bits/min and the typing speed reached up to 7.43 characters per minute.
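
A minimal sketch of the standard Wolpaw formula behind an ITR figure like 29.4 bits/min: N selectable symbols, selection accuracy P, and one selection every T seconds. The N, P, and T values below are illustrative, not the study's measurements.

```python
import math

def itr_bits_per_min(n, p, t_sel):
    """Wolpaw ITR: bits per selection scaled to selections per minute."""
    bits = (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / t_sel

print(f"{itr_bits_per_min(36, 0.90, 8.0):.1f} bits/min")  # 36-cell matrix
```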

Keywords: brain computer interface, Emotiv EPOC, EEG, virtual keyboard, matrix speller

Procedia PDF Downloads 308
19430 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information, but little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in its nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification. It uses information from the nodes as well as the structure of the code graph in order to select the features most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code-graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
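
A loosely related sketch, not SCEVD itself: Python's standard ast module turns a snippet into a tree, and node-type counts give a crude structural code embedding of the kind that richer graph models refine with semantic feature selection. The snippet is invented.

```python
import ast
from collections import Counter

snippet = "def f(n):\n    buf = [0] * 10\n    return buf[n]\n"
tree = ast.parse(snippet)                       # AST of the code snippet
counts = Counter(type(node).__name__ for node in ast.walk(tree))
print(counts.most_common(5))   # node-type histogram as a toy embedding
```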

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 158
19429 A Succinct Method for Allocation of Reactive Power Loss in Deregulated Scenario

Authors: J. S. Savier

Abstract:

Real power is the component of power that is converted into useful energy, whereas reactive power is the component that cannot be converted into useful energy but is required for the magnetization of various electrical machines. If reactive power is compensated at the consumer end, the need for reactive power to flow from generators to the load can be avoided, and hence the overall power loss can be reduced. In this scenario, this paper presents a succinct method, called the JSS method, for allocating reactive power losses to consumers connected to radial distribution networks in a deregulated environment. The proposed method has the advantage that no assumptions are made in deriving the reactive power loss allocation.

Keywords: deregulation, reactive power loss allocation, radial distribution systems, succinct method

Procedia PDF Downloads 376
19428 Modification of Underwood's Equation to Calculate Minimum Reflux Ratio for Column with One Side Stream Upper Than Feed

Authors: S. Mousavian, A. Abedianpour, A. Khanmohammadi, S. Hematian, Gh. Eidi Veisi

Abstract:

Distillation is one of the most important and widely used separation methods in industrial practice. There are different ways to design a distillation column; one of them is the shortcut method, in which material balances and equilibrium relations are employed to calculate the number of trays in the column. Among the shortcut methods is the Fenske-Underwood-Gilliland method, in which the minimum reflux ratio is calculated by the Underwood equation. Underwood proposed an equation that is useful for a simple distillation column with one feed, one top product, and one bottom product. In this study, the Underwood method is extended to predict the minimum reflux ratio for a column with one side stream above the feed. The results of this model were compared with the McCabe-Thiele method and show that the proposed method is able to calculate the minimum reflux ratio with very small error.
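
A minimal sketch of the classical Underwood calculation the authors modify (a conventional column, no side stream): solve Σ αᵢzᵢ/(αᵢ - θ) = 1 - q for θ between the key volatilities, then read Rmin from the distillate split. The ternary feed data are textbook-style illustrative numbers.

```python
import numpy as np
from scipy.optimize import brentq

alpha = np.array([2.5, 1.0, 0.4])    # relative volatilities (ref: heavy key)
z = np.array([0.4, 0.3, 0.3])        # feed mole fractions
xD = np.array([0.95, 0.05, 0.0])     # distillate composition
q = 1.0                              # saturated-liquid feed

f = lambda th: np.sum(alpha * z / (alpha - th)) - (1.0 - q)
theta = brentq(f, 1.0 + 1e-6, 2.5 - 1e-6)       # root between the keys
r_min = np.sum(alpha * xD / (alpha - theta)) - 1.0
print(f"theta = {theta:.4f}, Rmin = {r_min:.3f}")
```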

Keywords: minimum reflux ratio, side stream, distillation, Underwood’s method

Procedia PDF Downloads 406
19427 Hybrid Anomaly Detection Using Decision Tree and Support Vector Machine

Authors: Elham Serkani, Hossein Gharaee Garakani, Naser Mohammadzadeh, Elaheh Vaezpour

Abstract:

Intrusion detection systems (IDS) are the main components of network security. These systems analyze network events for intrusion detection. An IDS is designed by training on normal traffic data or attacks, and machine learning methods are the best way to design one. In the method presented in this article, the pruning algorithm of the C5.0 decision tree is used to reduce the features of the traffic data, and the IDS is trained with the least squares support vector machine algorithm (LS-SVM). The remaining features are then ranked by the predictor importance criterion, and the least important features are eliminated in order. The features remaining at this stage, which yield the highest accuracy in the LS-SVM, are selected as the final features. Compared with similar studies that have examined feature selection for least squares support vector machine models, the features obtained are better in terms of accuracy, true positive rate, and false positive rate. The results are tested on the UNSW-NB15 dataset.
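
A minimal sketch of the two-stage idea, assuming scikit-learn, whose CART tree and SVC stand in for the paper's C5.0 and LS-SVM: tree importances rank the features, the least important are dropped, and an SVM is trained on the survivors. Synthetic data replaces UNSW-NB15.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, n_features=40, n_informative=8,
                           random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
rank = tree.feature_importances_.argsort()[::-1]   # most important first

for n_keep in (40, 20, 10):                        # drop the least important
    acc = cross_val_score(SVC(), X[:, rank[:n_keep]], y, cv=5).mean()
    print(f"{n_keep:2d} features -> CV accuracy {acc:.3f}")
```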

Keywords: decision tree, feature selection, intrusion detection system, support vector machine

Procedia PDF Downloads 265
19426 Identifying Degradation Patterns of LI-Ion Batteries from Impedance Spectroscopy Using Machine Learning

Authors: Yunwei Zhang, Qiaochu Tang, Yao Zhang, Jiabin Wang, Ulrich Stimming, Alpha Lee

Abstract:

Forecasting the state of health and remaining useful life of Li-ion batteries is an unsolved challenge that limits technologies such as consumer electronics and electric vehicles. Here we build an accurate battery forecasting system by combining electrochemical impedance spectroscopy (EIS) -- a real-time, non-invasive and information-rich measurement that is hitherto underused in battery diagnosis -- with Gaussian process machine learning. We collect over 20,000 EIS spectra of commercial Li-ion batteries at different states of health, states of charge and temperatures -- the largest dataset to our knowledge of its kind. Our Gaussian process model takes the entire spectrum as input, without further feature engineering, and automatically determines which spectral features predict degradation. Our model accurately predicts the remaining useful life, even without complete knowledge of past operating conditions of the battery. Our results demonstrate the value of EIS signals in battery management systems.
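
A minimal sketch of the modelling pattern described, assuming scikit-learn (the authors' own Gaussian process treatment of real EIS spectra is richer): a Gaussian process regressor maps whole spectra to remaining useful life with predictive uncertainty. The spectra and lifetimes below are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_cells, n_freqs = 80, 60
X = rng.normal(size=(n_cells, n_freqs))          # flattened Re/Im spectrum
rul = X[:, :5].sum(axis=1) * 100 + 500           # toy degradation signal
rul += rng.normal(0, 10, n_cells)                # measurement noise

gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(),
                              normalize_y=True).fit(X[:60], rul[:60])
pred, std = gp.predict(X[60:], return_std=True)  # RUL with uncertainty
print(pred[:3].round(0), std[:3].round(1))
```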

Keywords: battery degradation, machine learning method, electrochemical impedance spectroscopy, battery diagnosis

Procedia PDF Downloads 148
19425 A Calibration Device for Force-Torque Sensors

Authors: Nicolay Zarutskiy, Roman Bulkin

Abstract:

The paper reviews existing methods of calibrating force-torque sensors with one to six components, analyzes their advantages and disadvantages, and establishes the need for a new calibration method. The calibration method and its constructive realization are also described. The method allows automated force-torque sensor calibration, both with selected components of the main vector of forces and moments and under complex loading. Thus, the proposed calibration method achieves two main advantages: automation of the calibration process and universality.

Keywords: automation, calibration, calibration device, calibration method, force-torque sensors

Procedia PDF Downloads 646
19424 Relevance of Lecture Method in Modern Era: A Study from Nepal

Authors: Hari Prasad Nepal

Abstract:

Research on the lecture method confirms that this teaching method has been practiced from the very beginnings of schooling. Many teachers, lecturers, and professors are convinced that the lecture still represents the main tool of the contemporary instructional process. The central purpose of this study is to uncover the extent of use of the lecture method in higher education. The study was carried out in the Nepalese context, employing a mixed-methods research design. To obtain the primary data, this study used a questionnaire with closed and open items; 120 teachers, lecturers, and professors participated. The findings indicated that 75 percent of the respondents use the lecture method in their classroom teaching. The study reveals advantages of the lecture method such as ease of practice, short preparation time, high pass rates, high student satisfaction, few complaints about instructors, and suitability for large classes and high-level students. In addition, the study reports the instructors' reflections and measures to improve the lecture method. This research concludes that the lecture method is still widely applicable in colleges and universities in the Nepalese context, with no significant change in its application in the higher education classroom despite the emergence of new learning approaches and strategies.

Keywords: instructors, learning approaches, learning strategies, lecture method

Procedia PDF Downloads 238
19423 VFX: Creativity or Cost Cutting? A Study of the Use of VFX in Hindi Cinema

Authors: Nidhi Patel, Amol Shinde, Amrin Moger

Abstract:

Mainstream Hindi cinema, also known as Bollywood, is the largest film-producing industry in India. The Indian film industry has undergone a sea change over the last few years, adopting the latest technologies and creative manpower to improve its visual and cinematic effects. These changes have helped the industry improve its creative look and ease the pressure on production budgets. The research focuses on this very change: the growing use of VFX in feature films. Its primary focus is on how VFX can make a difference to the experience of watching a movie. The research examines the use of CGI/VFX in the narrative to deliver a visually fulfilling film, and also its use as a cost-cutting tool. The research was exploratory in nature, studying the industry's evolution, the increasing use of VFX by filmmakers, and their intention to use it in their films. The researcher used a qualitative method for data collection, conducting in-depth interviews with 10 artists from VFX studios in Mumbai. The findings reveal how directors use VFX in Hindi cinema: it is mainly used as a tool to enhance creativity and provide the audience with a creative viewing experience.

Keywords: Bollywood, Hindi cinema, VFX, CGI, technology, creativity, cost cutting

Procedia PDF Downloads 359