Search results for: neural networking algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5209

1819 Using Bidirectional Encoder Representations from Transformers to Extract Topic-Independent Sentiment Features for Social Media Bot Detection

Authors: Maryam Heidari, James H. Jones Jr.

Abstract:

Millions of online posts about different topics and products are shared on popular social media platforms. One use of this content is to provide crowd-sourced information about a specific topic, event, or product. However, this use raises an important question: what percentage of the information available through these services is trustworthy? In particular, might some of this information be generated by a machine, i.e., a bot, instead of a human? Bots can be, and often are, purposely designed to generate enough volume to skew an apparent trend or position on a topic, yet the consumer of such content cannot easily distinguish a bot post from a human post. In this paper, we introduce a model for social media bot detection which uses Bidirectional Encoder Representations from Transformers (Google BERT) for sentiment classification of tweets to identify topic-independent features. Our use of a natural language processing approach to derive topic-independent features for our new bot detection model distinguishes this work from previous bot detection models. We achieve 94% accuracy classifying the contents of data as generated by a bot or a human, where the most accurate prior work achieved an accuracy of 92%.
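
As a rough sketch of the feature-extraction idea, the snippet below scores each tweet with an off-the-shelf BERT-based sentiment classifier (the Hugging Face distilbert-base-uncased-finetuned-sst-2-english model is an assumed stand-in; the paper's own fine-tuned model and full feature set are not specified here) and turns the output into a signed, topic-independent feature:

```python
from transformers import pipeline

# Hypothetical stand-in model; the paper's fine-tuned BERT is not public here.
sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

def sentiment_features(tweets):
    """Map each tweet to a topic-independent sentiment score in [-1, 1]."""
    feats = []
    for out in sentiment(tweets):
        sign = 1.0 if out["label"] == "POSITIVE" else -1.0
        feats.append(sign * out["score"])
    return feats

tweets = ["Best product ever, buy it now!!!", "Honestly not sure how I feel."]
print(sentiment_features(tweets))  # features to feed a downstream bot classifier
```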

Keywords: bot detection, natural language processing, neural network, social media

Procedia PDF Downloads 108
1818 A Clinician’s Perspective on Electroencephalography Annotation and Analysis for Driver Drowsiness Estimation

Authors: Ruxandra Aursulesei, David O’Callaghan, Cian Ryan, Diarmaid O’Cualain, Viktor Varkarakis, Alina Sultana, Joseph Lemley

Abstract:

Human error caused by drowsiness is among the leading causes of road accidents. Neurobiological research provides information about the electrical signals emitted by neurons firing within the brain. Electrical signal frequencies can be determined by attaching bio-sensors to the head surface. By observing the electrical impulses and the rhythmic interaction of neurons with each other, we can predict the mental state of a person. In this paper, we aim to better understand intersubject and intrasubject variability in the electrophysiological patterns that occur at the onset of drowsiness and their evolution as vigilance decreases. The purpose is to lay the foundations for an algorithm that detects the onset of drowsiness before the physical signs become apparent.
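
One standard way to quantify the frequency content described above is relative EEG band power estimated with Welch's method; the sketch below runs on a synthetic signal and is not the authors' annotation pipeline (a rising theta/alpha ratio is a commonly reported drowsiness correlate):

```python
import numpy as np
from scipy.signal import welch

fs = 256                                    # sampling rate in Hz, hypothetical
t = np.arange(0, 30, 1 / fs)
eeg = (np.sin(2 * np.pi * 10 * t)           # alpha component (8-13 Hz)
       + 0.6 * np.sin(2 * np.pi * 5 * t)    # theta component (4-8 Hz)
       + 0.3 * np.random.default_rng(0).normal(size=t.size))  # sensor noise

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # power spectral density
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
total = np.trapz(psd, f)
for name, (lo, hi) in bands.items():
    sel = (f >= lo) & (f < hi)
    print(f"{name}: {np.trapz(psd[sel], f[sel]) / total:.2f}")  # relative power
```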

Keywords: electroencephalography, drowsiness, ADAS, annotations, clinician

Procedia PDF Downloads 100
1817 Residual Life Prediction for a System Subject to Condition Monitoring and Two Failure Modes

Authors: Akram Khaleghei Ghosheh Balagh, Viliam Makis

Abstract:

In this paper, we investigate the residual life prediction problem for a partially observable system subject to two failure modes, namely a catastrophic failure and a failure due to system degradation. The system is subject to condition monitoring, and the degradation process is described by a hidden Markov model with unknown parameters. A parameter estimation procedure based on the EM algorithm is developed, and formulas for the conditional reliability function and the mean residual life are derived and illustrated by a numerical example.
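
For illustration, the mean residual life follows from the conditional reliability function as MRL(t) = (1/R(t)) ∫ₜ^∞ R(u) du; the sketch below evaluates this numerically, with a Weibull reliability standing in for the paper's HMM-based R(t):

```python
import numpy as np

# Weibull reliability as a stand-in for the HMM-based conditional reliability.
def reliability(t, shape=2.0, scale=10.0):
    return np.exp(-(t / scale) ** shape)

def mean_residual_life(t0, horizon=200.0, n=20000):
    # MRL(t0) = (1 / R(t0)) * integral of R(u) from t0 to infinity
    u = np.linspace(t0, horizon, n)
    return np.trapz(reliability(u), u) / reliability(t0)

for t0 in (0.0, 5.0, 10.0):
    print(f"MRL({t0:.0f}) = {mean_residual_life(t0):.2f}")
# For an increasing-hazard system the MRL shrinks as the unit ages.
```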

Keywords: partially observable system, hidden Markov model, competing risks, residual life prediction

Procedia PDF Downloads 406
1816 Bayesian Analysis of Change Point Problems Using Conditionally Specified Priors

Authors: Golnaz Shahtahmassebi, Jose Maria Sarabia

Abstract:

In this talk, we introduce a new class of conjugate prior distributions obtained from the conditional specification methodology. We illustrate the application of such distributions to Bayesian change point detection in Poisson processes. We obtain the posterior distribution of the model parameters using a general bivariate distribution with gamma conditionals. Simulation from the posterior is readily implemented using a Gibbs sampling algorithm, which can be applied even when the conditional densities are incompatible or compatible only with an improper joint density. The application of these methods is demonstrated using examples of simulated and real data.
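
A minimal sketch of such a Gibbs sampler for a single change point in Poisson count data is given below; for simplicity it uses independent gamma priors on the two rates rather than the paper's bivariate gamma-conditionals prior, and runs on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated counts: the rate jumps from 4 to 9 at time 60 (synthetic data).
n, k_true = 100, 60
y = np.concatenate([rng.poisson(4, k_true), rng.poisson(9, n - k_true)])

a, b = 1.0, 1.0                             # Gamma(a, b) priors on both rates
n_iter, burn = 5000, 1000
lam1, lam2, k = 1.0, 1.0, n // 2
cum = np.concatenate([[0], np.cumsum(y)])   # cum[k] = sum of the first k counts
samples = []

for it in range(n_iter):
    # Gamma full conditionals for the two Poisson rates
    lam1 = rng.gamma(a + cum[k], 1.0 / (b + k))
    lam2 = rng.gamma(a + cum[n] - cum[k], 1.0 / (b + n - k))
    # Discrete full conditional for the change point position
    ks = np.arange(1, n)
    loglik = (cum[ks] * np.log(lam1) - ks * lam1
              + (cum[n] - cum[ks]) * np.log(lam2) - (n - ks) * lam2)
    w = np.exp(loglik - loglik.max())
    k = rng.choice(ks, p=w / w.sum())
    if it >= burn:
        samples.append((lam1, lam2, k))

lam1_s, lam2_s, k_s = np.array(samples).T
print(f"posterior means: lam1={lam1_s.mean():.2f}, "
      f"lam2={lam2_s.mean():.2f}, k={k_s.mean():.1f}")
```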

Keywords: change point, Bayesian inference, Gibbs sampler, conditional specification, gamma conditional distributions

Procedia PDF Downloads 182
1815 Artificial Intelligence Methods for Returns Expectations in Financial Markets

Authors: Yosra Mefteh Rekik, Younes Boujelbene

Abstract:

We introduce in this paper a new conceptual model representing stock market dynamics. The model is essentially based on the cognitive behavior of intelligent investors. In order to validate it, we build an artificial stock market simulation based on agent-oriented methodologies. The proposed simulator is composed of a market supervisor agent, essentially responsible for executing transactions via an order book, and various kinds of investor agents distinguished by their profiles. The purpose of this simulation is to understand the influence of an investor's psychological character and neighborhood on decision-making, and their impact on the market in terms of price fluctuations. The difficulty of prediction is due to several features: the complexity, non-linearity, and dynamism of the financial market system, as well as investor psychology. Artificial neural networks take on the role of traders, who form their future return expectations and place orders based on those expectations. The results of intensive analysis indicate that the existence of agents having heterogeneous beliefs and preferences provides a better understanding of price dynamics in the financial market.
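
The toy simulation below conveys the heterogeneous-agent idea only; it is far simpler than the authors' simulator (no order book, no ANN traders), with fundamentalist and chartist demand moving the price and all parameters hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_fund, n_chart, fundamental = 60, 40, 100.0
prices = [100.0, 100.5]

for t in range(500):
    last_ret = prices[-1] - prices[-2]
    demand_fund = n_fund * 0.05 * (fundamental - prices[-1])   # mean reversion
    demand_chart = n_chart * 0.30 * last_ret                   # trend following
    noise = rng.normal(0, 0.5)                                 # idiosyncratic flow
    # Price moves with aggregate excess demand plus noise.
    prices.append(prices[-1] + 0.01 * (demand_fund + demand_chart) + noise)

rets = np.diff(prices)
kurt = ((rets - rets.mean()) ** 4).mean() / rets.var() ** 2 - 3
print(f"return std: {rets.std():.3f}, excess kurtosis: {kurt:.2f}")
```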

Keywords: artificial intelligence methods, artificial stock market, behavioral modeling, multi-agent based simulation

Procedia PDF Downloads 435
1814 Cellulose Acetate Nanofiber Modification for Regulating Astrocyte Activity via Simple Heat Treatment

Authors: Sang-Myung Jung, Jeong Hyun Ju, Gwang Heum Yoon, Hwa Sung Shin

Abstract:

The central nervous system (CNS) consists of neurons and supporting cells. Astrocytes are the most common supporting cells and play roles in metabolism between neurons and blood vessels. Because of this function, engineered astrocytes have been studied as a therapeutic source for CNS injury. In neural tissue engineering, nanofiber has been suggested as an effective scaffold, providing structure and mechanical properties that influence cell physiology. Cellulose acetate (CA) has been investigated as a scaffold material because of its biocompatibility, biodegradability, and good thermal stability. In this research, CA nanofiber was modified via heat treatment and its effect on astrocyte activity was evaluated. Astrocyte adhesion and viability increased in proportion to stiffness. Additionally, expression of GFAP, a marker of astrocyte activation, increased with scaffold stiffness. This research suggests a simple modification method for changing the stiffness of CA nanofiber and shows that the stiffness of a three-dimensional scaffold independently affects cellular behavior. From these results, we highlight stiffness as a factor that regulates astrocyte activity.

Keywords: astrocyte, cellulose acetate, cell therapy, stiffness of scaffold

Procedia PDF Downloads 470
1813 Proposing an Architecture for Drug Response Prediction by Integrating Multiomics Data and Utilizing Graph Transformers

Authors: Nishank Raisinghani

Abstract:

Efficiently predicting drug response remains a challenge in the realm of drug discovery. To address this issue, we propose four model architectures that combine graphical representation with varying positions of multiheaded self-attention mechanisms. By leveraging two types of multi-omics data, transcriptomics and genomics, we create a comprehensive representation of target cells and enable drug response prediction in precision medicine. A majority of our architectures utilize multiple transformer models, one with a graph attention mechanism and the other with a multiheaded self-attention mechanism, to generate latent representations of the drug and omics data, respectively. Our model architectures apply an attention mechanism to both drug and multiomics data, with the goal of procuring more comprehensive latent representations. The latent representations are then concatenated and input into a fully connected network to predict the IC-50 score, a measure of cell drug response. We experiment with all four architectures and report results for each. Our study contributes to the future of drug discovery and precision medicine by seeking to optimize the time and accuracy of drug response prediction.
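
A sketch of the fusion stage alone is shown below: latent drug and omics embeddings, assumed to come from the graph-attention and self-attention encoders (omitted here), are concatenated and fed to a fully connected head that regresses the IC-50 score; all dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate drug and omics latents, regress a scalar IC-50 score."""
    def __init__(self, drug_dim=128, omics_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(drug_dim + omics_dim, 256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),                    # scalar IC-50 prediction
        )

    def forward(self, drug_latent, omics_latent):
        return self.mlp(torch.cat([drug_latent, omics_latent], dim=-1))

head = FusionHead()
drug_z = torch.randn(8, 128)         # batch of drug embeddings (placeholder)
omics_z = torch.randn(8, 256)        # matching cell-line omics embeddings
print(head(drug_z, omics_z).shape)   # torch.Size([8, 1])
```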

Keywords: drug discovery, transformers, graph neural networks, multiomics

Procedia PDF Downloads 132
1812 Pareto Optimal Material Allocation Mechanism

Authors: Peter Egri, Tamas Kis

Abstract:

Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important, but theoretically rather neglected, field: the project scheduling problem where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker, the inventory, should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfer is not allowed. Therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. The resulting mechanism is both truthful and Pareto optimal. Thus, randomization over the possible priority orderings of the projects results in a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution; therefore, this approximation characteristic is investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, which is the most widely used criterion describing a necessary condition for a reasonable solution.
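
The toy sketch below illustrates only the serial dictatorship idea for a single material type served in a random priority order; the paper's polynomial-time allocation algorithm and precedence constraints are not reproduced:

```python
import random

def serial_dictatorship(arrivals, projects, seed=0):
    """arrivals: sorted list of material-unit arrival times.
    projects: {name: (reported_due_date, units_needed)}.
    Each project, in random priority order, takes the earliest units still
    available; its job completes when the last required unit has arrived."""
    rng = random.Random(seed)
    order = list(projects)
    rng.shuffle(order)                         # random priority ordering
    free = sorted(arrivals)
    tardiness = {}
    for name in order:
        due, need = projects[name]
        take, free = free[:need], free[need:]  # earliest remaining units
        finish = max(take)
        tardiness[name] = max(0, finish - due)
    return tardiness

arrivals = [1, 2, 4, 4, 7, 9]                  # hypothetical arrival times
projects = {"P1": (5, 2), "P2": (4, 3), "P3": (10, 1)}
print(serial_dictatorship(arrivals, projects))  # tardiness per project
```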

Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling

Procedia PDF Downloads 323
1811 Contextual Sentiment Analysis with Untrained Annotators

Authors: Lucas A. Silva, Carla R. Aguiar

Abstract:

This work presents a proposal to perform contextual sentiment analysis using a supervised learning algorithm while dispensing with the extensive training of annotators. To achieve this goal, a web platform was developed to perform the entire procedure outlined in this paper. The main contribution of the pipeline described in this article is to simplify and automate the annotation process through a system that analyzes the congruence between annotations. This ensured satisfactory results even without specialized annotators in the context of the research, avoiding the generation of biased training data for the classifiers. For this, a case study was conducted on an entrepreneurship blog. The experimental results were consistent with those reported in the literature for annotation by experts using a formalized process.
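
A minimal sketch of the congruence idea: keep only items whose untrained annotators agree by majority, then train a naive Bayes classifier on the retained labels (the data and the two-of-three agreement rule are hypothetical stand-ins):

```python
from collections import Counter

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["great advice for founders", "this strategy failed badly",
         "useful post", "waste of time", "inspiring story"]
votes = [["pos", "pos", "pos"], ["neg", "neg", "pos"],
         ["pos", "neg", "pos"], ["neg", "neg", "neg"], ["pos", "pos", "neg"]]

kept_texts, kept_labels = [], []
for text, v in zip(texts, votes):
    label, count = Counter(v).most_common(1)[0]
    if count >= 2:                      # congruence threshold: 2 of 3 agree
        kept_texts.append(text)
        kept_labels.append(label)

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(kept_texts), kept_labels)
print(clf.predict(vec.transform(["great strategy post"])))
```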

Keywords: sentiment analysis, untrained annotators, naive bayes, entrepreneurship, contextualized classifier

Procedia PDF Downloads 383
1810 Credit Risk Prediction Based on Bayesian Estimation of Logistic Regression Model with Random Effects

Authors: Sami Mestiri, Abdeljelil Farhat

Abstract:

The aim of this paper is to predict the credit risk of banks in Tunisia over the period 2000-2005. For this purpose, two methods for estimating the logistic regression model with random effects are applied: the penalized quasi-likelihood (PQL) method and the Gibbs sampler algorithm. Using information on a sample of 528 Tunisian firms and 26 financial ratios, we show that the Bayesian approach improves the quality of model predictions in terms of correct classification as well as the ROC curve.
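
The sketch below fits a logistic regression with a firm-level random effect on synthetic data; for convenience it uses PyMC's NUTS sampler rather than a hand-coded Gibbs sampler or PQL, so it illustrates the Bayesian model rather than the paper's exact estimators:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n, p, n_firms = 528, 5, 50                # 5 synthetic ratios stand in for 26
X = rng.normal(size=(n, p))
firm = rng.integers(0, n_firms, size=n)   # firm index for each observation
y_obs = rng.integers(0, 2, size=n)        # placeholder default indicator

with pm.Model():
    beta = pm.Normal("beta", 0.0, 1.0, shape=p)          # fixed effects
    sigma_u = pm.HalfNormal("sigma_u", 1.0)
    u = pm.Normal("u", 0.0, sigma_u, shape=n_firms)      # random effects
    logits = pm.math.dot(X, beta) + u[firm]
    pm.Bernoulli("y", logit_p=logits, observed=y_obs)
    idata = pm.sample(1000, tune=1000, chains=2)

print(idata.posterior["beta"].mean(dim=("chain", "draw")).values)
```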

Keywords: forecasting, credit risk, penalized quasi-likelihood, Gibbs sampler, logistic regression with random effects, ROC curve

Procedia PDF Downloads 533
1809 Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line

Authors: Fidel Lòpez Saca, Carlos Avilés-Cruz, Miguel Magos-Rivera, José Antonio Lara-Chávez

Abstract:

Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system in an assembly line. The proposed shape-and-color recognition system is based on deep learning theory in a specially designed convolutional network architecture. The methodology involves stages such as image capture, color filtering, location of object mass centers, determination of horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real time. The implementation was done on a Raspberry Pi 3 system and on a Jetson Nano device. The system is used in an assembly course of a bachelor's degree program in industrial engineering. The results presented include a study of recognition efficiency and processing time.
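
The pre-CNN stages (color filtering, mass-center location, boundary detection, clipping) can be sketched with OpenCV as below; the thresholds, sizes, and file name are hypothetical:

```python
import cv2
import numpy as np

img = cv2.imread("part.png")                 # hypothetical captured frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 80, 80), (180, 255, 255))  # keep colored pixels
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
crops = []
for c in contours:
    if cv2.contourArea(c) < 100:             # drop small noise blobs
        continue
    m = cv2.moments(c)
    cx = int(m["m10"] / m["m00"])            # object mass center (x)
    cy = int(m["m01"] / m["m00"])            # object mass center (y)
    x, y, w, h = cv2.boundingRect(c)         # horizontal/vertical boundaries
    crops.append(cv2.resize(img[y:y + h, x:x + w], (64, 64)))  # clipped CNN input
```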

Keywords: deep-learning, image classification, image identification, industrial engineering

Procedia PDF Downloads 150
1808 Development of Modular Shortest Path Navigation System

Authors: Nalinee Sophatsathit

Abstract:

This paper presents a variation on navigation systems that tallies every node along the shortest path from a start node to a destination node. The underlying technique rests on the well-established Dijkstra algorithm. The ultimate goal is to serve as a user navigation guide that furnishes the stop-over cost of every node along this shortest path, whereby users can decide whether or not to visit any specific node. The output is an implementable module that can be further refined to run on the Internet and smartphone technology. This will benefit large organizations having physical installations spread over a wide area, such as hospitals, universities, etc. The savings in service personnel, not to mention lost time and unproductive work, are attributable to innovative navigation system management.
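
A minimal sketch of the idea, assuming hypothetical stop-over costs attached to nodes, is a standard Dijkstra search that returns the path together with the tallied stop-over cost of every node on it:

```python
import heapq

def dijkstra_with_stopovers(graph, stopover, start, goal):
    """graph: {node: {neighbor: edge_cost}}; stopover: {node: visit cost}.
    Returns the shortest path and the stop-over cost tallied at each node."""
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    path.reverse()
    return path, [(n, stopover.get(n, 0)) for n in path]

# Hypothetical hospital floor plan: edge weights are walking times.
g = {"lobby": {"wardA": 2, "lab": 4}, "wardA": {"lab": 1, "pharmacy": 5},
     "lab": {"pharmacy": 2}, "pharmacy": {}}
print(dijkstra_with_stopovers(g, {"wardA": 3, "lab": 10}, "lobby", "pharmacy"))
```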

Keywords: navigation systems, shortest path, smartphone technology, user navigation guide

Procedia PDF Downloads 323
1807 Interactive, Topic-Oriented Search Support by a Centroid-Based Text Categorisation

Authors: Mario Kubek, Herwig Unger

Abstract:

Centroid terms are single words that semantically and topically characterise text documents and so may serve as a very compact representation of them in automatic text processing. In the present paper, centroids are used to measure the relevance of text documents with respect to a given search query. Thus, a new graph-based paradigm for searching texts in large corpora is proposed and evaluated against keyword-based methods. The first promising experimental results demonstrate the usefulness of the centroid-based search procedure. It is shown that especially the routing of search queries in interactive and decentralised search systems can be greatly improved by applying this approach. A detailed discussion of further fields of application completes this contribution.
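
One common formalization of a centroid term, in line with the idea above, is the word with minimal average distance to a document's words in a term co-occurrence graph; the sketch below assumes that formalization and uses toy data:

```python
import itertools

import networkx as nx

def cooccurrence_graph(sentences):
    """Nodes are words; an edge links words co-occurring in a sentence."""
    g = nx.Graph()
    for s in sentences:
        for a, b in itertools.combinations(set(s), 2):
            g.add_edge(a, b)
    return g

def centroid_term(g, doc_words):
    """Centroid: the node with minimal mean distance to the document's words."""
    words = [w for w in set(doc_words) if w in g]
    best, best_d = None, float("inf")
    for cand in g.nodes:
        try:
            d = sum(nx.shortest_path_length(g, cand, w) for w in words) / len(words)
        except nx.NetworkXNoPath:
            continue                          # candidate disconnected from terms
        if d < best_d:
            best, best_d = cand, d
    return best

sents = [["radiation", "heat", "transfer"], ["heat", "exchanger", "design"],
         ["transfer", "learning", "model"]]
g = cooccurrence_graph(sents)
print(centroid_term(g, ["heat", "transfer", "exchanger"]))
```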

Keywords: search algorithm, centroid, query, keyword, co-occurrence, categorisation

Procedia PDF Downloads 274
1806 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, all the more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful, technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than the uniform random number generator and give rise to low-variance, stable quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The history of some randomly sampled photon bundles is recorded to train an artificial neural network (ANN) back-propagation model. The flux was calculated using the standard quasi PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical and PMC models with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the concerned environment can be fully addressed by the ANN model. Better results can be achieved in this unexplored domain.
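
The contrast between pseudo-random and low-discrepancy sampling can be illustrated in a few lines: the sketch below estimates a toy 1D integral (standing in for a spectral emission integral) with plain Monte Carlo and with scrambled Sobol points, and compares the spread of the two estimators:

```python
import numpy as np
from scipy.stats import qmc

# Toy integrand standing in for a spectral emission integral.
f = lambda x: np.exp(-3 * x) * np.sin(6 * np.pi * x) ** 2

n = 1024                                   # Sobol works best with powers of two
mc = [f(np.random.default_rng(s).random(n)).mean() for s in range(50)]
qm = [f(qmc.Sobol(d=1, scramble=True, seed=s).random(n).ravel()).mean()
      for s in range(50)]

print(f"MC  estimator std: {np.std(mc):.2e}")
print(f"QMC estimator std: {np.std(qm):.2e}")   # noticeably smaller spread
```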

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 213
1805 Characterization of Aerosol Particles in Ilorin, Nigeria: Ground-Based Measurement Approach

Authors: Razaq A. Olaitan, Ayansina Ayanlade

Abstract:

Understanding aerosol properties is a main goal of global research aimed at lowering the uncertainty that the trends and magnitudes of aerosol particles contribute to climate change assessments. In order to identify aerosol particle types and optical properties, and the relationship between aerosol properties and particle concentration between 2019 and 2021, this study examined data from the ground-based sun/sky scanning radiometer of the Aerosol Robotic Network (AERONET) in Ilorin, Nigeria. The AERONET version 2 algorithm was utilized to retrieve monthly data on aerosol optical depth and the Angstrom exponent. The version 3 algorithm, an almucantar level 2 inversion, was employed to retrieve daily data on single scattering albedo and aerosol size distribution. Excel 2016 was used to compute the monthly, seasonal, and annual means of the data. The distribution of different types of aerosols was analyzed using scatterplots, and the optical properties of the aerosols were investigated using pertinent mathematical theorems. Correlation statistics were employed to understand the relationships between particle concentration and particle properties. Based on the premise that aerosol characteristics must remain constant in both magnitude and trend across time and space, the study's findings indicate that the aerosol types identified between 2019 and 2021 are as follows: 29.22% urban industrial (UI), 37.08% desert (D), 10.67% biomass burning (BB), and 23.03% urban mix (Um). Convective wind systems, which frequently carry particles over long distances in the atmosphere, were responsible for the peak columnar aerosol loadings, observed in August of the study period. The study has shown that while coarse-mode particles dominate, fine particles are increasing in both seasonal and annual trends. These trends are linked to biomass burning and human activities in the city. The study found that the majority of particles are highly absorbing black carbon, with the fine mode having a volume median radius of 0.08 to 0.12 micrometers. The investigation also revealed a positive correlation coefficient (r = 0.57) between changes in aerosol particle concentration and changes in aerosol properties. Human activity is rapidly increasing in Ilorin, causing changes in aerosol properties and indicating potential health risks from climate change and human influence on geological and environmental systems.

Keywords: aerosol loading, aerosol types, health risks, optical properties

Procedia PDF Downloads 47
1804 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made, and new approaches have been proposed, for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs) at reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM digital elevation models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and only insignificant differences, within less than three arc seconds, were found among the orientation angles. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 277
1803 Reliability Based Topology Optimization: An Efficient Method for Material Uncertainty

Authors: Mehdi Jalalpour, Mazdak Tootkaboni

Abstract:

We present a computationally efficient method for reliability-based topology optimization under material property uncertainty, where the material properties are assumed to be lognormally distributed and correlated within the domain. Computational efficiency is achieved by estimating the response statistics with a second-order stochastic perturbation, using these statistics to fit a distribution that follows the empirical distribution of the response, and employing an efficient gradient-based optimizer. The proposed algorithm is utilized for the design of new structures, and the changes in the optimized topology are discussed for various levels of target reliability and correlation strength. Predictions were verified through comparison with results obtained using Monte Carlo simulation.
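
The second-order perturbation idea can be sketched as follows: the mean of a response g(X) is approximated as g(μ) + ½ tr(HΣ) and its variance as ∇gᵀΣ∇g, with derivatives here taken by finite differences on a toy response (this is illustrative, not the authors' code):

```python
import numpy as np

def perturbation_stats(g, mu, cov, h=1e-5):
    """Second-order perturbation estimate of the mean and (first-order)
    variance of g(X) for X with mean mu and covariance cov."""
    n = len(mu)
    grad = np.zeros(n)
    hess = np.zeros((n, n))
    e = np.eye(n)
    for i in range(n):
        grad[i] = (g(mu + h * e[i]) - g(mu - h * e[i])) / (2 * h)
        for j in range(n):
            hess[i, j] = (g(mu + h*e[i] + h*e[j]) - g(mu + h*e[i] - h*e[j])
                          - g(mu - h*e[i] + h*e[j]) + g(mu - h*e[i] - h*e[j])) / (4*h*h)
    mean = g(mu) + 0.5 * np.trace(hess @ cov)   # second-order mean correction
    var = grad @ cov @ grad                     # first-order variance term
    return mean, var

g = lambda x: x[0] ** 2 + np.exp(0.3 * x[1])    # toy structural response
mu = np.array([1.0, 0.5])
cov = np.array([[0.04, 0.01], [0.01, 0.09]])    # correlated uncertainty
print(perturbation_stats(g, mu, cov))
```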

Keywords: material uncertainty, stochastic perturbation, structural reliability, topology optimization

Procedia PDF Downloads 597
1802 A Fast and Robust Protocol for Reconstruction and Re-Enactment of Historical Sites

Authors: Sanaa I. Abu Alasal, Madleen M. Esbeih, Eman R. Fayyad, Rami S. Gharaibeh, Mostafa Z. Ali, Ahmed A. Freewan, Monther M. Jamhawi

Abstract:

This research proposes a novel reconstruction protocol for restoring missing surfaces and low-quality edges and shapes in photos of artifacts at historical sites. The protocol starts with the extraction of a cloud of points. This extraction process is based on four subordinate algorithms, which differ in robustness and in the amount of resulting data. Moreover, they apply different, but complementary, levels of accuracy to related features and to the way they build a quality mesh. The performance of our proposed protocol is compared with other state-of-the-art algorithms and toolkits. The statistical analysis shows that our algorithm significantly outperforms its rivals in the quality of the resulting object files used to reconstruct the desired model.

Keywords: meshes, point clouds, surface reconstruction protocols, 3D reconstruction

Procedia PDF Downloads 445
1801 Hydro-Gravimetric ANN Model for Prediction of Groundwater Level

Authors: Jayanta Kumar Ghosh, Swastik Sunil Goriwale, Himangshu Sarkar

Abstract:

Groundwater is one of the most valuable natural resources that society consumes for its domestic, industrial, and agricultural water supply. Its bulk and indiscriminate consumption affects the groundwater resource. Often, it has been found that the groundwater recharge rate is much lower than the demand. Thus, to maintain water and food security, it is necessary to monitor and manage groundwater storage. However, it is challenging to estimate groundwater storage (GWS) using existing hydrological models. To overcome these difficulties, machine learning (ML) models are being introduced for the evaluation of groundwater level (GWL). Thus, the objective of this research work is to develop an ML-based model for the prediction of GWL. This objective has been realized through the development of an artificial neural network (ANN) model based on hydro-gravimetry. The model has been developed using training samples from field observations spread over 8 months. The developed model has been tested for the prediction of GWL in an observation well. The root mean square error (RMSE) for the test samples has been found to be 0.390 meters. Thus, it can be concluded that the hydro-gravimetric ANN model can be used for the prediction of GWL. However, to improve the accuracy, more hydro-gravimetric parameters may be considered and tested in the future.
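
A minimal stand-in for such a model is a small MLP regressor; the features and data below are synthetic, since the paper's hydro-gravimetric inputs are not listed here:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins: e.g. a gravity residual and a rainfall proxy mapped
# to groundwater level (m); the real field observations are not public here.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 2))
y = 12 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.3, 240)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=3000, random_state=0))
model.fit(X[:200], y[:200])                 # training samples
rmse = mean_squared_error(y[200:], model.predict(X[200:])) ** 0.5
print(f"test RMSE: {rmse:.3f} m")           # paper reports 0.390 m on its data
```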

Keywords: machine learning, hydro-gravimetry, ground water level, predictive model

Procedia PDF Downloads 115
1800 Creating and Questioning Research-Oriented Digital Outputs to Manuscript Metadata: A Case-Based Methodological Investigation

Authors: Diandra Cristache

Abstract:

The transition of traditional manuscript studies into the digital framework closely affects the methodological premises upon which manuscript descriptions are modeled, created, and questioned for the purposes of research. This paper intends to explore the issue by presenting a methodological investigation into the process of modeling, creating, and questioning manuscript metadata. The investigation is founded on a close observation of the Polonsky Greek Manuscripts Project, a collaboration between the Universities of Cambridge and Heidelberg. More than just providing a realistic ground for methodological exploration, along with a complete metadata set for computational demonstration, the case study also contributes to a broader purpose: outlining general methodological principles for making the most of manuscript metadata by means of research-oriented digital outputs. The analysis mainly focuses on the scholarly approach to manuscript descriptions, in the specific instance where the act of metadata recording does not have a programmatic research purpose. Close attention is paid to the encounter of 'traditional' practices in manuscript studies with the formal constraints of the digital framework: does the shift in practices (especially from the straight narrative of free writing towards the hierarchical constraints of the TEI encoding model) impact the structure of metadata and its capability to respond to specific research questions? It is argued that the flexible structure of TEI and traditional approaches to manuscript description lead to a proliferation of markup: does an 'encyclopedic' descriptive approach ensure the epistemological relevance of the digital outputs to metadata? To provide further insight into the computational approach to manuscript metadata, the metadata of the Polonsky project are processed with techniques of distant reading and data networking, resulting in a new group of digital outputs (relational graphs, geographic maps). The computational process and the digital outputs are thoroughly illustrated and discussed. Eventually, a retrospective analysis evaluates how the digital outputs respond to the scientific expectations of research and, the other way round, how the requirements of research questions feed back into the creation and enrichment of metadata in an iterative loop.

Keywords: digital manuscript studies, digital outputs to manuscripts metadata, metadata interoperability, methodological issues

Procedia PDF Downloads 134
1799 Cognitive STAP for Airborne Radar Based on Slow-Time Coding

Authors: Fanqiang Kong, Jindong Zhang, Daiyin Zhu

Abstract:

Space-time adaptive processing (STAP) techniques have been motivated as a key enabling technology for advanced airborne radar applications. In this paper, the notion of cognitive radar is extended to the STAP technique, and cognitive STAP is discussed. The principle for improving the signal-to-clutter-plus-noise ratio (SCNR) based on slow-time coding is given, and the corresponding optimization algorithm, based on cyclic and power-like algorithms, is presented. Numerical examples show the effectiveness of the proposed method.
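
For background, the classical STAP weight vector maximizing the SCNR is w ∝ R⁻¹s for clutter-plus-noise covariance R and target steering vector s; the sketch below computes it on a toy covariance (the paper's slow-time-coding optimization itself is not reproduced):

```python
import numpy as np

N = 16                                       # space-time degrees of freedom

# Hypothetical covariance: one strong clutter component plus unit noise.
s = np.exp(2j * np.pi * 0.20 * np.arange(N))           # target steering vector
c = np.exp(2j * np.pi * 0.05 * np.arange(N))           # clutter steering vector
R = 100 * np.outer(c, c.conj()) + np.eye(N)            # clutter + noise

w = np.linalg.solve(R, s)                    # optimal weights: w = R^-1 s
scnr_adaptive = np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w)
scnr_matched = np.abs(s.conj() @ s) ** 2 / np.real(s.conj() @ R @ s)
print(f"matched filter SCNR: {scnr_matched:.1f}, "
      f"adaptive SCNR: {scnr_adaptive:.1f}")
```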

Keywords: space-time adaptive processing (STAP), airborne radar, signal-to-clutter ratio, slow-time coding

Procedia PDF Downloads 263
1798 An Accurate Brain Tumor Segmentation for High-Grade Glioma Using Deep Learning

Authors: Sajeeha Ansar, Asad Ali Safi, Sheikh Ziauddin, Ahmad R. Shahid, Faraz Ahsan

Abstract:

Gliomas are among the most challenging and aggressive types of tumors, appearing in different sizes and locations and with scattered boundaries. Convolutional neural networks (CNNs) are an efficient deep learning approach with an outstanding capability for solving image analysis problems. A fully automatic, deep-learning-based 2D CNN model for brain tumor segmentation is presented in this paper. We used small convolution filters (3 x 3) to make the architecture deeper. We increased the number of convolutional layers for efficient learning of complex features from a large dataset, and achieved better results by pushing the depth up to 16 convolutional layers for the HGG model. We achieved reliable and accurate results through fine-tuning across the dataset and hyper-parameters. Pre-processing for this model includes generation of the brain pipeline, intensity normalization, bias correction, and data augmentation. We used the BRATS-2015 dataset, and the Dice similarity coefficient (DSC) is used as the performance measure for the evaluation of the proposed method. Our method achieved DSC scores of 0.81 for the complete, 0.79 for the core, and 0.80 for the enhanced tumor regions. These results are comparable with those of previously implemented 2D CNN architectures.
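
The evaluation metric is easy to state in code: DSC = 2|P ∩ T| / (|P| + |T|) for predicted and ground-truth masks. A small self-contained check on toy masks:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """DSC = 2|P ∩ T| / (|P| + |T|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Toy 2D masks standing in for tumor-region predictions.
p = np.zeros((8, 8), int)
p[2:6, 2:6] = 1
t = np.zeros((8, 8), int)
t[3:7, 3:7] = 1
print(f"DSC = {dice_coefficient(p, t):.2f}")   # 9 overlapping of 16+16 -> 0.56
```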

Keywords: brain tumor segmentation, convolutional neural networks, deep learning, HGG

Procedia PDF Downloads 239
1797 Optimal Capacitor Placement in Distribution Systems

Authors: Sana Ansari, Sirus Mohammadi

Abstract:

In distribution systems, shunt capacitors are used to reduce power losses, improve the voltage profile, and increase the maximum flow through cables and transformers. This paper presents a new method to determine the optimal locations and economical sizing of fixed and/or switched shunt capacitors, with a view to reducing power losses and enhancing voltage stability. The General Algebraic Modeling System (GAMS) has been used to solve the optimization modules with the MINOS optimization software and linear programming (LP). The proposed method is tested on a 33-node distribution system, and the results show that the algorithm is suitable for practical implementation on real systems of any size.

Keywords: power losses, voltage stability, radial distribution systems, capacitor

Procedia PDF Downloads 638
1796 Video Stabilization Using Feature Point Matching

Authors: Shamsundar Kulkarni

Abstract:

Video capture by non-professionals often leads to unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the jitter caused by the shaking of a handheld camera during video recording. Firstly, salient points in each frame of the input video are identified and processed, followed by optimization and stabilization of the video. Optimization addresses the quality of the video stabilization. The method has shown good results in terms of stabilization and removed distortion from output videos recorded under different circumstances.
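
A minimal sketch of this kind of pipeline with OpenCV: track salient points between frames, estimate the per-frame rigid motion, accumulate a trajectory, and smooth it with a moving average (the file name and parameters are hypothetical):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")          # hypothetical jittery input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
transforms = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Salient points in the previous frame, tracked into the current one.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=20)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    m, _ = cv2.estimateAffinePartial2D(p0[st == 1], p1[st == 1])
    dx, dy = m[0, 2], m[1, 2]
    da = np.arctan2(m[1, 0], m[0, 0])        # per-frame rotation
    transforms.append((dx, dy, da))
    prev_gray = gray

traj = np.cumsum(transforms, axis=0)         # accumulated camera trajectory
kernel = np.ones(15) / 15                    # moving-average smoother
smooth = np.vstack([np.convolve(traj[:, i], kernel, mode="same")
                    for i in range(3)]).T
correction = smooth - traj                   # motion to add back per frame,
# which would then be applied to each frame with cv2.warpAffine.
```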

Keywords: video stabilization, point feature matching, salient points, image quality measurement

Procedia PDF Downloads 299
1795 The Convection Heater Numerical Simulation

Authors: Cristian Patrascioiu, Loredana Negoita

Abstract:

This paper is focused on the modeling and simulation of tubular heaters. The paper is structured in four parts: the structure of the tubular convection section, the heat transfer model, the adaptation of the mathematical model, and the solution of the model. The main hypothesis of the heat transfer modeling is that the heat exchanger of the convective tubular heater is a lumped system. At the same time, the model uses the heat balance relations, Newton's law, and criterial relations. The numerical program developed allows the estimation of the burned gases outlet temperature and the heated flow outlet temperature.
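
The solution step can be sketched with a Newton-Raphson iteration on a toy two-equation heat balance for the convection section; an arithmetic-mean temperature difference stands in for the criterial relations, and all values are hypothetical:

```python
import numpy as np

def newton_raphson(F, x0, tol=1e-8, max_iter=50, h=1e-6):
    """Solve F(x) = 0 with a finite-difference Jacobian."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((len(x), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - f) / h
        x = x - np.linalg.solve(J, f)
    return x

# Toy heat balance: unknowns x = [Tg_out, Tf_out] (hypothetical values).
mg_cp, mf_cp, UA = 5.0, 8.0, 0.9            # kW/K
Tg_in, Tf_in = 800.0, 120.0                 # inlet temperatures, deg C

def F(x):
    Tg_out, Tf_out = x
    q_gas = mg_cp * (Tg_in - Tg_out)        # heat given up by the gases
    q_fluid = mf_cp * (Tf_out - Tf_in)      # heat gained by the heated flow
    q_wall = UA * ((Tg_in + Tg_out) / 2 - (Tf_in + Tf_out) / 2)
    return np.array([q_gas - q_fluid, q_fluid - q_wall])

print(newton_raphson(F, [600.0, 200.0]))    # [Tg_out, Tf_out]
```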

Keywords: heat exchanger, mathematical modelling, nonlinear equation system, Newton-Raphson algorithm

Procedia PDF Downloads 285
1794 Implementation of a Low-Cost Driver Drowsiness Evaluation System Using a Thermal Camera

Authors: Isa Moazen, Ali Nahvi

Abstract:

Driver drowsiness is a major cause of vehicle accidents, and facial images are highly valuable for detecting drowsiness. In this paper, we perform our research via a thermal camera that records drivers' facial images on a driving simulator. A robust real-time algorithm extracts the features using horizontal and vertical integration projections, contours, contour orientations, and cropping tools. The features include four target areas on the cheeks and forehead. Qt and OpenCV are used with two cameras of different resolutions. A high-resolution thermal camera is used for fifteen subjects, and a low-resolution one is used for one subject. The results are presented in four temperature plots and evaluated by observer ratings of drowsiness.
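
Integration projections are simply row and column sums of the image; the sketch below localizes the warm facial region of a synthetic thermal frame from the extrema of these projections:

```python
import numpy as np

def integral_projections(gray):
    """Horizontal and vertical integration projections of a grayscale image:
    row and column sums whose extrema localize facial regions."""
    h_proj = gray.sum(axis=1)   # one value per row
    v_proj = gray.sum(axis=0)   # one value per column
    return h_proj, v_proj

# Toy thermal frame: a warm elliptical "face" on a cool background (synthetic).
yy, xx = np.mgrid[0:120, 0:160]
frame = 40 + 60 * np.exp(-(((yy - 60) / 30) ** 2 + ((xx - 80) / 22) ** 2))

h, v = integral_projections(frame)
row, col = h.argmax(), v.argmax()            # approximate face center
print(f"hottest band: row {row}, column {col}")
```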

Keywords: advanced driver assistance systems, thermal imaging, driver drowsiness detection, feature extraction

Procedia PDF Downloads 129
1793 [Keynote Talk]: Analysis of Intelligent Based Fault Tolerant Capability System for Solar Photovoltaic Energy Conversion

Authors: Albert Alexander Stonier

Abstract:

Due to fossil fuel exhaustion and environmental pollution, renewable energy sources, especially solar photovoltaic systems, play a predominant role in providing energy to consumers. It has been estimated that by 2050 renewable energy sources will satisfy 50% of the total energy requirement of the world. In this context, faults in the conversion process, a major problem, require special attention. A fault that persists even for a few seconds will cause undesirable effects on the system. The presentation comprises the analysis, causes, effects, and mitigation methods of various faults occurring in the entire solar photovoltaic energy conversion process. In order to overcome the faults in the system, intelligent methods based on artificial neural networks and fuzzy logic are proposed, which can significantly mitigate the faults. Hence, the presentation intends to identify the problem in renewable energy and provide a possible solution to overcome it, supported by simulation and experimental results. The work was performed on a 3 kWp solar photovoltaic plant, and the results indicate improvements in reliability, availability, power quality, and fault tolerance.

Keywords: solar photovoltaic, power electronics, power quality, PWM

Procedia PDF Downloads 268
1792 Finite Element Analysis of a Dynamic Linear Crack Problem

Authors: Brian E. Usibe

Abstract:

This paper addresses the problem of a linear crack located in the middle of a homogeneous elastic medium under normal tension-compression harmonic loading. The problem of deformation of the fractured medium is solved using a direct finite element numerical procedure, including the analysis of the dynamic field variables of the problem. A finite element algorithm that satisfies the unilateral Signorini contact constraint is also presented for the solution of the contact interaction of the crack faces, showing how this accounts for the qualitative and quantitative changes in the solution when determining the dynamic fracture parameter.

Keywords: harmonic loading, linear crack, fracture parameter, wave number, FEA, contact interaction

Procedia PDF Downloads 29
1791 Residual Power Series Method for System of Volterra Integro-Differential Equations

Authors: Zuhier Altawallbeh

Abstract:

This paper investigates approximate analytical solutions of a general form of systems of Volterra integro-differential equations by using the residual power series method (RPSM for short). The proposed method produces the solutions in terms of convergent series, requires no linearization or small perturbation, and reproduces the exact solution when the solution is polynomial. Some examples are given to demonstrate the simplicity and efficiency of the proposed method. Comparisons with the Laplace decomposition algorithm verify that the new method is very effective and convenient for solving systems of pantograph equations.
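
The mechanics of RPSM can be demonstrated on a single Volterra integro-differential equation (an illustrative example, not one from the paper): assume a power series, form the residual, and force its derivatives at zero to vanish:

```python
import sympy as sp

# RPSM demo on y'(t) = 1 + integral_0^t y(s) ds, y(0) = 0; exact: y = sinh(t).
t, s = sp.symbols("t s")
N = 8
c = sp.symbols(f"c1:{N + 1}")                # unknown coefficients c1..cN
y = sum(ci * t**i for i, ci in enumerate(c, start=1))   # y(0)=0 fixes c0=0

residual = sp.diff(y, t) - 1 - sp.integrate(y.subs(t, s), (s, 0, t))

# RPSM condition: the k-th derivative of the residual vanishes at t = 0.
eqs = [sp.diff(residual, t, k).subs(t, 0) for k in range(N)]
sol = sp.solve(eqs, c, dict=True)[0]
approx = sp.expand(y.subs(sol))

print(approx)                                           # t + t**3/6 + t**5/120 + ...
print(sp.series(sp.sinh(t), t, 0, N + 1).removeO())     # matches sinh(t) series
```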

Keywords: integro-differential equation, pantograph equations, system of initial value problems, residual power series method

Procedia PDF Downloads 414
1790 A Method for Improving the Embedded Runge-Kutta-Fehlberg 4(5)

Authors: Sunyoung Bu, Wonkyu Chung, Philsu Kim

Abstract:

In this paper, we introduce a method for improving the embedded Runge-Kutta-Fehlberg 4(5) method. At each integration step, the proposed method comprises two equations, for the solution and for the error, respectively. The solution and error are obtained by solving an initial value problem whose solution carries the error information at each integration step. The constructed algorithm controls both the error and the time step size simultaneously and performs well in computational cost compared to the original method. To assess its effectiveness, the EULR problem is solved numerically.
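
For context, embedded pairs control the step size from the gap between the lower- and higher-order solutions; the sketch below shows this mechanism via SciPy's RK45 (which uses the Dormand-Prince pair rather than Fehlberg's) on an illustrative test problem:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    # Mildly stiff test problem (illustrative, not the paper's EULR problem).
    return -50 * (y - np.cos(t))

# rtol/atol drive the embedded error estimate, which adapts the step size.
sol = solve_ivp(f, (0.0, 2.0), [0.0], method="RK45", rtol=1e-6, atol=1e-9)
steps = np.diff(sol.t)
print(f"{sol.t.size - 1} steps, h from {steps.min():.2e} to {steps.max():.2e}")
```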

Keywords: embedded Runge-Kutta-Fehlberg method, initial value problem, EULR problem, integration step

Procedia PDF Downloads 455