Search results for: predictive accuracy
2488 Adapted Intersection over Union: A Generalized Metric for Evaluating Unsupervised Classification Models
Authors: Prajwal Prakash Vasisht, Sharath Rajamurthy, Nishanth Dara
Abstract:
In a supervised machine learning approach, metrics such as precision, accuracy, and coverage can be calculated using ground truth labels to help in model tuning, evaluation, and selection. In an unsupervised setting, however, where the data has no ground truth, there are few interpretable metrics that can guide us to do the same. Our approach creates a framework to adapt the Intersection over Union metric, referred to as Adapted IoU, usually used to evaluate supervised learning models, into the unsupervised domain, which solves the problem by factoring in subject matter expertise and intuition about the ideal output from the model. This metric essentially provides a scale that allows us to compare the performance across numerous unsupervised models or tune hyper-parameters and compare different versions of the same model.
Keywords: general metric, unsupervised learning, classification, intersection over union
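As a rough illustration of the Adapted IoU idea sketched in this abstract, the following Python snippet (a hypothetical sketch, not the authors' implementation) scores an unsupervised model by the overlap between its predicted member set for a class and an expert-defined reference set:

# Hypothetical sketch of the "adapted IoU" idea: compare the set of items an
# unsupervised model places in a class of interest against an expert-defined
# reference set of items that "should" belong to that class.
def adapted_iou(predicted_ids, expert_ids):
    """IoU between the model's predicted member set and an expert reference set."""
    predicted, expert = set(predicted_ids), set(expert_ids)
    union = predicted | expert
    if not union:
        return 1.0  # both empty: perfect agreement by convention
    return len(predicted & expert) / len(union)

# Example: compare two candidate models against the same expert intuition.
expert = {1, 2, 3, 4, 5}
model_a = {1, 2, 3, 9}
model_b = {2, 3, 4, 5, 6, 7}
print(adapted_iou(model_a, expert))  # 0.5
print(adapted_iou(model_b, expert))  # ~0.571 -> model B matches expert intuition better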
Procedia PDF Downloads 47
2487 Combined Odd Pair Autoregressive Coefficients for Epileptic EEG Signals Classification by Radial Basis Function Neural Network
Authors: Boukari Nassim
Abstract:
This paper describes the use of odd pair autoregressive coefficients (Yule-Walker and Burg) for the feature extraction of electroencephalogram (EEG) signals. For the classification, a radial basis function neural network (RBFNN) is employed. The RBFNN is described by its architecture and its characteristics: the RBF is defined by the spread parameter, which is modified to improve the classification results. Five sets of EEG signals are used in this work: Sets A and B for normal signals, Sets C and D for interictal signals, and Set E for ictal signals (available from Bonn University). At the output, eight two-class problems are considered (AC, AD, AE, BC, BD, BE, CE, DE); the best accuracy, 99%, is obtained for the combined odd pair autoregressive coefficients. Our method is very effective for the diagnosis of epileptic EEG signals.
Keywords: epilepsy, EEG signals classification, combined odd pair autoregressive coefficients, radial basis function neural network
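The following Python sketch (illustrative only; the function names and the toy signal are assumptions) shows how Yule-Walker autoregressive coefficients of the kind used above can be extracted from an EEG epoch before being passed to a classifier such as an RBF network:

import numpy as np

# A minimal, self-contained sketch (not the authors' code) of extracting Yule-Walker
# autoregressive coefficients from an EEG segment; the coefficients would then be fed
# to a classifier such as an RBF network.
def yule_walker(x, order):
    """Estimate AR(order) coefficients by solving the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])   # AR coefficients

# Toy usage on a synthetic signal standing in for one EEG epoch.
rng = np.random.default_rng(0)
eeg_epoch = np.sin(0.1 * np.arange(512)) + 0.3 * rng.standard_normal(512)
features = yule_walker(eeg_epoch, order=7)      # e.g. odd orders could be kept as features
print(features)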
Procedia PDF Downloads 346
2486 Photoplethysmography-Based Device Designing for Cardiovascular System Diagnostics
Authors: S. Botman, D. Borchevkin, V. Petrov, E. Bogdanov, M. Patrushev, N. Shusharina
Abstract:
In this paper, we report the development of a device for diagnostics of the cardiovascular system state and an associated automated workstation for large-scale medical measurement data collection and analysis. It was shown that the optimal design for the monitoring device is a wristband, as it represents an engineering trade-off between accuracy and usability. The monitoring device is based on an infrared reflective photoplethysmographic sensor, which allows collecting multiple physiological parameters, such as heart rate and pulse wave characteristics. The developed device uses a BLE interface to transmit medical and supplementary data to a coupled mobile phone, which processes the data and sends them to the doctor's automated workstation. Evaluation of this experimental model confirmed the applicability of the proposed approach.
Keywords: cardiovascular diseases, health monitoring systems, photoplethysmography, pulse wave, remote diagnostics
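As a simple illustration of the kind of processing such a device performs, the following hypothetical Python sketch (not the device firmware) estimates heart rate from a photoplethysmographic waveform by detecting pulse peaks:

import numpy as np

# Detect pulse peaks in a PPG signal and convert the mean peak-to-peak interval
# into beats per minute; thresholds and the synthetic waveform are illustrative.
def heart_rate_bpm(ppg, fs):
    """Estimate heart rate from a PPG signal sampled at fs Hz."""
    ppg = np.asarray(ppg, dtype=float)
    threshold = ppg.mean() + 0.5 * ppg.std()
    # Local maxima above the threshold are taken as pulse peaks.
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > threshold and ppg[i] >= ppg[i - 1] and ppg[i] > ppg[i + 1]]
    if len(peaks) < 2:
        return None
    intervals = np.diff(peaks) / fs          # seconds between successive peaks
    return 60.0 / intervals.mean()

# Synthetic 75-bpm pulse wave standing in for sensor data.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.25 * t) ** 3      # 1.25 Hz -> 75 bpm
print(round(heart_rate_bpm(ppg, fs)))        # ~75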
Procedia PDF Downloads 492
2485 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries
Authors: Gaurav Kumar Sinha
Abstract:
In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.
Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency
Procedia PDF Downloads 64
2484 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing
Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari
Abstract:
A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with the current advances in methods used for both phenotypic and genotypic identification of bacteria, there is still a need to develop method(s) that enhance the outcome of bacteriology laboratories in accuracy and time. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded to use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique. On the other hand, most generated primer sequences were identical to multiple sites on a single or multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising. Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primer(s) that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that can identify multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using an alignment strategy, which enhances diagnostic performance in comparison to traditional molecular assays, in this method the generated primers can be used to identify an organism before the draft sequence is completed. In addition, the generated primers can be used to build a bank of primers that can easily be accessed to identify bacteria.
Keywords: bacteria chromosome, bacterial identification, sequence, primer generation
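A much simplified, hypothetical Python sketch of the primer-generation step described above (the codon subset and toy chromosome are illustrative, not the authors' data) is shown below:

from itertools import product

# Reverse-translate a short amino-acid string into every coding DNA sequence and
# count how many times each candidate occurs in a chromosome string; candidates
# that occur exactly once are kept as "unique" primers.
CODONS = {                      # small excerpt of the standard codon table
    "M": ["ATG"], "W": ["TGG"],
    "K": ["AAA", "AAG"], "F": ["TTT", "TTC"],
    "E": ["GAA", "GAG"], "D": ["GAT", "GAC"],
}

def reverse_translate(peptide):
    """All DNA sequences that encode the given peptide."""
    choices = [CODONS[aa] for aa in peptide]
    return ["".join(c) for c in product(*choices)]

def unique_primers(peptide, chromosome):
    """Candidate primers that occur exactly once in the chromosome string."""
    return [p for p in reverse_translate(peptide) if chromosome.count(p) == 1]

# Six amino acids -> 18-base primers, as in the abstract; the sequence is a toy example.
chromosome = "CCATGTGGAAATTTGAAGATCC" + "ATGTGGAAGTTCGAAGAC"
print(unique_primers("MWKFED", chromosome))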
Procedia PDF Downloads 193
2483 Geosynthetic Tubes in Coastal Structures a Better Substitute for Shorter Planning Horizon: A Case Study
Authors: Pietro Rimoldi, Anilkumar Gopinath, Minimol Korulla
Abstract:
Coastal engineering structures are conventionally designed for a short planning horizon, usually 20 years. These structures are subjected to different offshore climatic externalities such as waves, tides, tsunamis, etc. during the design life period. The probability of occurrence of these different offshore climatic externalities varies. The impact frequently caused by these externalities on the structures is of concern because it has a significant bearing on the capital/operating cost of the project. There can also be repeated short-interval occurrences of these externalities within the assumed planning horizon, which can cause heavy damage to conventional coastal structures, which are mainly made of rock. A replacement of the damaged portion to prevent complete collapse is time-consuming and expensive when dealing with hard rock structures. But if coastal structures are made of geosynthetic containment systems, such replacement is quickly possible in the time period between two successive occurrences. In order to gain better knowledge and to enhance the predictive capacity for these occurrences, this study estimates the risk of encountering the various externalities within the design life period based on the concept of the exponential distribution. This gives an idea of the frequency of occurrences, which in turn indicates whether replacement is necessary and, if so, at what time interval such replacements have to be effected. To validate this theoretical finding, a pilot project has been taken up in the field so that the impact of the externalities can be studied both for a hard rock and a geosynthetic tube structure. The paper brings out the salient features of a case study pertaining to a project in which geosynthetic tubes have been used for the reformation of a seawall adjacent to a conventional rock structure on the Alappuzha coast, Kerala, India. The effectiveness of the geosystem in combatting the impact of the short-term externalities has been brought out.
Keywords: climatic externalities, exponential distribution, geosystems, planning horizon
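The exponential-distribution reasoning mentioned above can be illustrated with a short calculation; the formula R = 1 - exp(-L/T) for the probability of encountering an event of return period T within a horizon of L years is shown here as a plausible reading of the approach, not the paper's exact model:

import math

# Probability that an externality with mean return period T (years) is encountered
# at least once during a planning horizon of L years, under an exponential model.
def encounter_risk(return_period_years, horizon_years):
    return 1.0 - math.exp(-horizon_years / return_period_years)

for T in (5, 20, 50, 100):                      # hypothetical return periods
    print(T, round(encounter_risk(T, horizon_years=20), 3))
# e.g. an event with a 20-year return period has ~63% chance of occurring at least
# once within a 20-year planning horizon.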
Procedia PDF Downloads 227
2482 Learning to Recommend with Negative Ratings Based on Factorization Machine
Authors: Caihong Sun, Xizi Zhang
Abstract:
Rating prediction is an important problem for recommender systems. The task is to predict the rating that a user would give to an item. Most of the existing algorithms for this task ignore the effect of the negative ratings users give to items, but negative ratings have a significant impact on users' purchasing decisions in practice. In this paper, we present a rating prediction algorithm based on factorization machines that considers the effect of negative ratings, inspired by loss aversion theory. The aim of this paper is to develop a concave and a convex negative disgust function, respectively, to evaluate the negative ratings. Experiments are conducted on the MovieLens dataset. The experimental results demonstrate the effectiveness of the proposed methods by comparison with four other state-of-the-art approaches. The negative ratings proved to be of much importance for the accuracy of rating predictions.
Keywords: factorization machines, feature engineering, negative ratings, recommendation systems
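For readers unfamiliar with factorization machines, the following numpy sketch shows the standard FM prediction equation that such an approach builds on (the negative-rating disgust functions themselves are not reproduced here, and all values are illustrative):

import numpy as np

# Factorization-machine prediction:
# y(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j,
# computed with the usual O(k*n) reformulation.
def fm_predict(x, w0, w, V):
    """x: feature vector (n,), w0: bias, w: (n,), V: (n, k) latent factors."""
    linear = w0 + w @ x
    s1 = (V.T @ x) ** 2                 # (sum_i v_{i,f} x_i)^2 for each factor f
    s2 = (V ** 2).T @ (x ** 2)          # sum_i v_{i,f}^2 x_i^2
    return linear + 0.5 * np.sum(s1 - s2)

rng = np.random.default_rng(1)
n, k = 6, 3                             # e.g. one-hot user + one-hot item features
x = np.array([1, 0, 0, 0, 1, 0], dtype=float)
w0, w, V = 3.5, rng.normal(0, 0.1, n), rng.normal(0, 0.1, (n, k))
print(fm_predict(x, w0, w, V))          # predicted rating for this (user, item) pair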
Procedia PDF Downloads 242
2481 Global-Scale Evaluation of Two Satellite-Based Passive Microwave Soil Moisture Data Sets (SMOS and AMSR-E) with Respect to Modelled Estimates
Authors: A. Alyaari, J. P. Wigneron, A. Ducharne, Y. Kerr, P. de Rosnay, R. de Jeu, A. Govind, A. Al Bitar, C. Albergel, J. Sabater, C. Moisy, P. Richaume, A. Mialon
Abstract:
Global Level-3 surface soil moisture (SSM) maps from the passive microwave Soil Moisture and Ocean Salinity (SMOS) satellite have been released. To further improve the Level-3 retrieval algorithm, evaluation of the accuracy of the spatio-temporal variability of the SMOS Level 3 products (referred to here as SMOSL3) is necessary. In this study, a comparative analysis of SMOSL3 with an SSM product derived from the observations of the Advanced Microwave Scanning Radiometer (AMSR-E), computed by implementing the Land Parameter Retrieval Model (LPRM) algorithm and referred to here as AMSRM, is presented. The comparison of both products (SMOSL3 and AMSRM) was made against SSM products produced by a numerical weather prediction system (SM-DAS-2) at ECMWF (European Centre for Medium-Range Weather Forecasts) for the 03/2010-09/2011 period at the global scale. The latter product was considered here a 'reference' product for the inter-comparison of the SMOSL3 and AMSRM products. Three statistical criteria were used for the evaluation: the correlation coefficient (R), the root-mean-squared difference (RMSD), and the bias. Global maps of these criteria were computed, taking into account vegetation information in terms of biome types and Leaf Area Index (LAI). We found that both the SMOSL3 and AMSRM products captured well the spatio-temporal variability of the SM-DAS-2 SSM products in most of the biomes. In general, the AMSRM products overestimated (i.e., wet bias) while the SMOSL3 products underestimated (i.e., dry bias) SSM in comparison to the SM-DAS-2 SSM products. In terms of correlation values, the SMOSL3 products were found to better capture the SSM temporal dynamics in highly vegetated biomes ('Tropical humid', 'Temperate humid', etc.), while the best results for AMSRM were obtained over arid and semi-arid biomes ('Desert temperate', 'Desert tropical', etc.). When removing the seasonal cycles in the SSM time variations to compute anomaly values, better correlations with the SM-DAS-2 SSM anomalies were obtained with SMOSL3 than with AMSRM in most of the biomes, with the exception of desert regions. Finally, we showed that the accuracy of the remotely sensed SSM products is strongly related to LAI. Both the SMOSL3 and AMSRM (slightly better) SSM products correlate well with the SM-DAS-2 products over regions with sparse vegetation for values of LAI < 1 (these regions represent almost 50% of the pixels considered in this global study). In regions where LAI > 1, SMOSL3 outperformed AMSRM with respect to SM-DAS-2: SMOSL3 had almost consistent performance up to LAI = 6, whereas AMSRM performance deteriorated rapidly with increasing values of LAI.
Keywords: remote sensing, microwave, soil moisture, AMSR-E, SMOS
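The three evaluation criteria and the anomaly computation described above can be expressed in a few lines of numpy; the snippet below is an illustrative sketch with synthetic data, not the study's processing chain:

import numpy as np

# Correlation R, RMSD and bias between a satellite SSM series and a reference series,
# plus a simple monthly-climatology anomaly computation; variable names are illustrative.
def evaluate(satellite_ssm, reference_ssm):
    sat, ref = np.asarray(satellite_ssm, float), np.asarray(reference_ssm, float)
    r = np.corrcoef(sat, ref)[0, 1]
    rmsd = np.sqrt(np.mean((sat - ref) ** 2))
    bias = np.mean(sat - ref)               # >0: wet bias, <0: dry bias vs. reference
    return r, rmsd, bias

def anomalies(series, months):
    """Remove the mean seasonal cycle: subtract the multi-year mean of each month."""
    series, months = np.asarray(series, float), np.asarray(months)
    clim = {m: series[months == m].mean() for m in np.unique(months)}
    return series - np.array([clim[m] for m in months])

# Toy 18-month time series standing in for one grid pixel.
months = np.array([(3 + i - 1) % 12 + 1 for i in range(18)])
ref = 0.20 + 0.05 * np.sin(2 * np.pi * (months - 3) / 12)
sat = ref + 0.02 + 0.01 * np.random.default_rng(0).standard_normal(18)
print(evaluate(sat, ref))
print(evaluate(anomalies(sat, months), anomalies(ref, months)))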
Procedia PDF Downloads 357
2480 Design and Analysis of an Electro Thermally Symmetrical Actuated Microgripper
Authors: Sh. Foroughi, V. Karamzadeh, M. Packirisamy
Abstract:
This paper presents the design and analysis of an electrothermally, symmetrically actuated microgripper applicable to micro-assembly or biological cell manipulation. Integration of micro-optics with the microdevice makes it possible to achieve extremely precise control over the operation of the device. Geometry, material, actuation, control, accuracy in measurement, and temperature distribution are important factors which have to be taken into account when designing an efficient microgripper device. In this work, analyses of four different geometries are performed by means of COMSOL Multiphysics 5.2, implementing the finite element method. Then, the temperature distribution along the fingertip, the displacement of the gripper site, as well as the optical efficiency versus displacement and electrical potential, are illustrated. Results show that, in addition to the industrial applications of this device, its usage as a cell manipulator is possible.
Keywords: electro thermal actuator, MEMS, microgripper, MOEMS
Procedia PDF Downloads 165
2479 Parametric Template-Based 3D Reconstruction of the Human Body
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo, Linhang Zhu
Abstract:
This study proposed a 3D human body reconstruction method, which integrates multi-view joint information into a set of joints and processes it with a parametric human body template. Firstly, we obtained human body image information captured from multiple perspectives. The multi-view information can avoid self-occlusion and occlusion problems during the reconstruction process. Then, we used the MvP algorithm to integrate multi-view joint information into a set of joints. Next, we used the parametric human body template SMPL-X to obtain more accurate three-dimensional human body reconstruction results. Compared with the traditional single-view parametric human body template reconstruction, this method significantly improved the accuracy and stability of the reconstruction.
Keywords: parametric human body templates, reconstruction of the human body, multi-view, joint
Procedia PDF Downloads 79
2478 A Profile of an Exercise Addict: The Relationship between Exercise Addiction and Personality
Authors: Klary Geisler, Dalit Lev-Arey, Yael Hacohen
Abstract:
It is a well-known fact that exercise has favorable effects on people's physical health, as well as mental well-being. However, excessive exercise may well lead to negative consequences (e.g., physical injuries, neglect of everyday responsibilities such as work and family life). Lately, there is a growing interest in exercise addiction, sometimes referred to as exercise dependence, which is defined as a craving for physical activity that results in extreme work-out sessions and generates negative physiological and psychological symptoms (e.g., withdrawal symptoms, tolerance, social conflict). Exercise addiction is considered a behavioral addiction, yet it was not included in the latest editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), due to a lack of significant research. Specifically, there is scarce research on the relationship between exercise addiction and personality dimensions. The purpose of the current research was to examine the relationship between primary exercise addiction symptoms and the Big Five dimensions, perfectionism (high performance expectations and self-critical performance evaluations), and subjective affect. Participants were 152 trainees in a variety of aerobic sports activities (running, cycling, swimming) who were recruited through sports groups and trainers. 88% of the participants trained for at least 5 hours per week, and 24% trained above 10 hours per week. To test the predictive ability of the independent variables, a hierarchical linear regression with forced block entry was performed. It was found that neuroticism significantly predicted exercise addiction symptoms (20% of the variance, p < 0.001), while conscientiousness was negatively correlated with exercise addiction symptoms (14% of the variance, p < 0.05); both had a unique contribution. The other dimensions of the Big Five (agreeableness, openness, and extraversion) did not contribute to the dependent variable. Moreover, maladaptive perfectionism (self-critical performance evaluations) significantly predicted exercise addiction symptoms as well (10% of the variance, p < 0.05). The overall regression model explained 54% of the variance.
Keywords: big five, conscientiousness, excessive exercise, exercise addiction, neuroticism, perfectionism, personality
Procedia PDF Downloads 229
2477 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine
Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta
Abstract:
Stricter legislation and the greater demands of the population with regard to gas emissions and their effects on the environment as well as on human health make the automotive industry reinforce research focused on reducing levels of contamination. This reduction can be achieved through the implementation of improvements in internal combustion engines in such a way that they promote the reduction of both specific fuel consumption and air pollutant emissions. These improvements can be obtained through numerical simulation, which is a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of the variation of swirl and tumble coefficients on the performance and the air pollutant emissions of an engine. Initially, the discharge coefficient is calculated using the Converge CFD 3D software, given that it is an input parameter in GT-Power. Mesh sensitivity tests are performed on the 3D geometry built for this purpose, using the mass flow rate in the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and then data such as the mass trapped in the cylinder, the heat release rate, and the accumulated released energy are calculated so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced in their corresponding objects so that their influences can be observed in comparison with the results obtained previously.
Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient
Procedia PDF Downloads 105
2476 The Predictive Implication of Executive Function and Language in Theory of Mind Development in Preschool Age Children
Authors: Michael Luc Andre, Célia Maintenant
Abstract:
Theory of mind is a milestone in child development that allows children to understand that others can have mental states different from their own. Understanding the developmental stages of theory of mind in children has led researchers to two connected research problems: on the one hand, the link between executive function and theory of mind, and on the other hand, the relationship between theory of mind and syntax processing. These two lines of research have produced a large literature full of important results, despite a certain level of disagreement between researchers. For a long time, these two research perspectives continued to grow separately, despite research conclusions suggesting that the three variables should involve the same developmental period. Indeed, our goal was to study the relation between theory of mind, executive function, and language via a single research question. We supposed that, between executive function and language, one of the two variables could play a critical role in the relationship between theory of mind and the other variable. Thus, 112 children aged between three and six years old were recruited to complete a receptive and an expressive vocabulary task, a syntax understanding task, a theory of mind task, and three executive function tasks (inhibition, cognitive flexibility, and working memory). The results showed significant correlations between performance on the theory of mind task and performance on the executive function tasks, except for the cognitive flexibility task. We also found significant correlations between success on the theory of mind task and performance on all language tasks. Multiple regression analysis identified only syntax and general language abilities as possible predictors of theory of mind performance in our sample of preschool-age children. The results are discussed from the perspective of a major role of language abilities in theory of mind development. We also discuss possible reasons that could explain the non-significance of the executive domains in predicting theory of mind performance, and the meaning of our results for the literature.
Keywords: child development, executive function, general language, syntax, theory of mind
Procedia PDF Downloads 64
2475 Context-Aware Recommender System Using Collaborative Filtering, Content-Based Algorithm and Fuzzy Rules
Authors: Xochilt Ramirez-Garcia, Mario Garcia-Valdez
Abstract:
Contextual recommendations are implemented in recommender systems to improve user satisfaction: the recommender system makes accurate and suitable recommendations for a particular situation, reaching personalized recommendations. The context provides information relevant to the recommender system and is used as a filter for the selection of relevant items for the user. This paper presents a context-aware recommender system, which uses techniques based on collaborative filtering and content-based algorithms, as well as fuzzy rules, to recommend items within the context. The dataset used to test the system is Trip Advisor. The accuracy of the recommendations was evaluated with the Mean Absolute Error.
Keywords: algorithms, collaborative filtering, intelligent systems, fuzzy logic, recommender systems
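As an illustration of two ingredients mentioned above, a fuzzy rule membership and the Mean Absolute Error, the following hypothetical Python sketch (not the authors' system) is provided:

import numpy as np

# A triangular fuzzy membership used to weight items by how well they fit the
# current context, and the MAE used to evaluate predicted ratings; the context
# variable, weights and ratings are illustrative placeholders.
def triangular_membership(x, a, b, c):
    """Classic triangular fuzzy membership on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mean_absolute_error(predicted, actual):
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    return np.mean(np.abs(predicted - actual))

# Toy example: weight a hotel's base score by a "summer trip" membership of the month,
# then evaluate rating predictions against held-out ratings.
month = 7
context_weight = triangular_membership(month, 4, 7, 10)      # 1.0 in mid-summer
print(0.7 * 4.2 + 0.3 * 4.2 * context_weight)                 # context-adjusted score
print(mean_absolute_error([4.1, 3.5, 2.0], [4.0, 3.0, 2.5]))  # MAE ~= 0.367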
Procedia PDF Downloads 422
2474 Forecasting Amman Stock Market Data Using a Hybrid Method
Authors: Ahmad Awajan, Sadam Al Wadi
Abstract:
In this study, a hybrid method based on Empirical Mode Decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into Intrinsic Mode Functions (IMFs) and a residual component. Then, all components are forecasted by the HW technique. Finally, the component forecasts are aggregated to obtain the forecast of the stock market data. Empirical results showed that EMD-HW outperforms the individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without any need to use a transformation method. Moreover, EMD-HW has relatively high accuracy compared with eight existing forecasting methods, based on five forecast error measures.
Keywords: Holt-Winters method, empirical mode decomposition, forecasting, time series
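A minimal sketch of such an EMD-HW hybrid is shown below; it assumes the third-party packages PyEMD (for the decomposition) and statsmodels (for Holt-Winters smoothing), which are assumptions here rather than tools named in the abstract:

import numpy as np
from PyEMD import EMD
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def emd_hw_forecast(series, horizon):
    """Decompose the series into IMFs (plus a residual-like last component),
    forecast each with Holt-Winters, and sum the component forecasts."""
    series = np.asarray(series, dtype=float)
    components = EMD()(series)                     # rows: IMFs, last row ~ residual trend
    forecast = np.zeros(horizon)
    for comp in components:
        model = ExponentialSmoothing(comp, trend="add").fit()
        forecast += model.forecast(horizon)
    return forecast

# Toy usage on a synthetic non-stationary series standing in for a stock index.
t = np.arange(300)
series = 0.05 * t + np.sin(2 * np.pi * t / 25) + np.random.default_rng(0).normal(0, 0.2, 300)
print(emd_hw_forecast(series, horizon=10))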
Procedia PDF Downloads 129
2473 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) from phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV from clinical data, were measured. Qualitative features of clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with an increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) gave an approximately equal noise level to OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62%, compared with OSEM at a 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translates into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β = 500), resulting in the highest SNR and lowest SBR, due to the greater reduction in noise than in SUV at the highest β-value. In the comparison of BSREM with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM in all qualitative features. Conclusions: The BSREM algorithm using more iterations leads to higher quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten the acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
Procedia PDF Downloads 97
2472 Analysis and Simulation of TM Fields in Waveguides with Arbitrary Cross-Section Shapes by Means of Evolutionary Equations of Time-Domain Electromagnetic Theory
Authors: Ömer Aktaş, Olga A. Suvorova, Oleg Tretyakov
Abstract:
The boundary value problem on a non-canonical, arbitrarily shaped contour is solved with a numerically effective method called the Analytical Regularization Method (ARM) to calculate propagation parameters. As a result of regularization, the equation of the first kind is reduced to an infinite system of linear algebraic equations of the second kind in the space L2. This system can be solved numerically to the desired accuracy by using the truncation method. The parameters, such as the cut-off wavenumber and cut-off frequency, are then used in the waveguide evolutionary equations of time-domain electromagnetic theory to illustrate the real-valued TM fields in lossy and lossless media.
Keywords: analytical regularization method, electromagnetic theory evolutionary equations of time-domain, TM Field
Procedia PDF Downloads 500
2471 Supervised Learning for Cyber Threat Intelligence
Authors: Jihen Bennaceur, Wissem Zouaghi, Ali Mabrouk
Abstract:
The major aim of cyber threat intelligence (CTI) is to provide sophisticated knowledge about cybersecurity threats to ensure internal and external safeguards against modern cyberattacks. Inaccurate, incomplete, outdated, and low-value threat intelligence is the main problem. Therefore, data analysis based on AI algorithms is one of the emergent solutions to overcome threat information-sharing issues. In this paper, we propose a supervised machine learning-based algorithm to improve threat information sharing by providing a sophisticated classification of cyber threats and data. Extensive simulations investigate the overall accuracy, precision, recall, F1-score, and support to validate the designed algorithm and to compare it with several supervised machine learning algorithms.
Keywords: threat information sharing, supervised learning, data classification, performance evaluation
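A generic example of training a supervised classifier on labelled threat records and reporting the metrics listed above is given below; the dataset, features, and model choice are purely illustrative:

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 8))                 # stand-in feature vectors (e.g. IoC attributes)
y = rng.integers(0, 3, size=600)              # stand-in labels: 0=benign, 1=phishing, 2=malware

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# precision, recall, f1-score and support per class, plus overall accuracy
print(classification_report(y_test, clf.predict(X_test)))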
Procedia PDF Downloads 148
2470 A Dislocation-Based Explanation to Quasi-Elastic Release in Shock Loaded Aluminum
Authors: Song L. Yao, Ji D. Yu, Xiao Y. Pei
Abstract:
An explanation is introduced to study the quasi-elastic release phenomenon in shock-compressed aluminum. A dislocation-based model, taking into account dislocation substructures and their evolution, is applied to simulate the elastic-plastic response of both single-crystal and polycrystalline aluminum. Simulated results indicate that dislocation immobilization during dynamic deformation results in a smooth increase of the yield stress, which leads to the quasi-elastic release, while the generation of dislocations caused by the plastic release wave results in the appearance of a transition point between the quasi-elastic release and the plastic release in the profile. The calculated shear strength and dislocation density are in accordance with experimental results, which demonstrates the accuracy of our simulations.
Keywords: dislocation density, quasi-elastic release, wave profile, shock wave
Procedia PDF Downloads 282
2469 Developing Index of Democratic Institutions' Vulnerability
Authors: Kamil Jonski
Abstract:
Last year vividly demonstrated that populism and political instability can endanger democratic institutions in countries regarded as democratic transition champions (Poland) or cornerstones of the liberal order (UK, US). So-called 'illiberal democracy' is winning the hearts and minds of voters keen to believe that the rule of a strongman is a viable alternative to the perceived decay of Western values and institutions. These developments pose a serious threat to democratic institutions (including the rule of law), which have proven critical for both personal freedom and economic development. Although scholars have proposed some structural explanations of the illiberal wave (notably focusing on inequality, stagnant incomes, and the drawbacks of globalization), they seem to have little predictive value. Indeed, events like Trump's victory, Brexit or the Polish shift towards populist nationalism always came as a surprise. Intriguingly, in the case of the US election, simple rules like the 'Bread and Peace' model gauged the prospects of Trump's victory better than pundits and pollsters. This paper attempts to compile a set of indicators in order to gauge various democracies' vulnerability to populism, instability, and the pursuance of 'illiberal' projects. Among them, it identifies the gap between the consensus assessment of institutional performance (as measured by WGI indicators) and citizens' subjective assessment (survey-based confidence in institutions). Plotting these variables against each other reveals three clusters of countries: 'predictable' (good institutions and high confidence, or poor institutions and low confidence), 'blind' (poor institutions, high confidence, e.g. Uzbekistan or Azerbaijan), and 'disillusioned' (good institutions, low confidence, e.g. Spain, Chile, Poland and US). It seems that this clustering, carried out separately for various institutions (like the legislature, executive and courts) and blended with economic indicators like inequality and living standards (using PCA), offers a reasonably good watchlist of countries that should 'expect the unexpected'.
Keywords: illiberal democracy, populism, political instability, political risk measurement
Procedia PDF Downloads 204
2468 Overview of Fiber Optic Gyroscopes as Ring Laser Gyros and Fiber Optic Gyros and the Comparison Between Them
Authors: M. Abdo, Mohamed Shalaby
Abstract:
A key development in the field of inertial sensors, fiber-optic gyroscopes (FOGs) are currently thought to be a competitive alternative to mechanical gyroscopes for inertial navigation and control applications. For the past few years, research and development efforts have been conducted all around the world using the FOG as a crucial sensor for high-accuracy inertial navigation systems. This paper covers the main fundamentals of optical gyros, followed by discussions of the main types of optical gyros, fiber optic gyroscopes and ring laser gyroscopes, and comparisons between them. We also discuss the different types of fiber optic gyros, including interferometric, resonator, and Brillouin fiber optic gyroscopes.
Keywords: mechanical gyros, ring laser gyros, interferometric fiber optic gyros, resonator fiber optic gyros
Procedia PDF Downloads 80
2467 Using Scale Invariant Feature Transform Features to Recognize Characters in Natural Scene Images
Authors: Belaynesh Chekol, Numan Çelebi
Abstract:
The main purpose of this work is to recognize individual characters extracted from natural scene images using scale invariant feature transform (SIFT) features as input to the K-nearest neighbor (KNN) classification learner algorithm. For this task, 1,068 and 78 images of English alphabet characters taken from the Chars74k data set are used to train and test the classifier, respectively. For each character image, we generated describing features by using the SIFT algorithm. This set of features is fed to the learner so that it can recognize and label new images of English characters. Two types of KNN (fine KNN and weighted KNN) were trained, and the resulting classification accuracies are 56.9% and 56.5%, respectively. The training time taken was the same for both fine and weighted KNN.
Keywords: character recognition, KNN, natural scene image, SIFT
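A rough sketch of such a pipeline is given below, assuming OpenCV (with SIFT support) and scikit-learn; averaging each image's SIFT descriptors into one fixed-length vector is an illustrative simplification, not necessarily the authors' exact feature construction:

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_feature(image_path):
    """One fixed-length feature vector per character image: mean of its SIFT descriptors."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = sift.detectAndCompute(img, None)
    if descriptors is None:                      # no keypoints found
        return np.zeros(128)
    return descriptors.mean(axis=0)              # 128-dimensional vector

# train_paths/train_labels and test_paths/test_labels are placeholders for the
# Chars74k character crops and their letter labels.
def train_and_score(train_paths, train_labels, test_paths, test_labels):
    X_train = np.vstack([sift_feature(p) for p in train_paths])
    X_test = np.vstack([sift_feature(p) for p in test_paths])
    knn = KNeighborsClassifier(n_neighbors=1, weights="distance")   # "weighted KNN" variant
    knn.fit(X_train, train_labels)
    return knn.score(X_test, test_labels)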
Procedia PDF Downloads 281
2466 Localization of Mobile Robots with Omnidirectional Cameras
Authors: Tatsuya Kato, Masanobu Nagata, Hidetoshi Nakashima, Kazunori Matsuo
Abstract:
Localization of mobile robots is an important task in developing autonomous mobile robots. This paper proposes a method to estimate the positions of a mobile robot using an omnidirectional camera mounted on the robot. Landmarks serving as reference points are set up in the field where the robot works. The omnidirectional camera, which can obtain 360-degree surround images, takes photographs of these landmarks. The positions of the robot are estimated from the directions of these landmarks, which are extracted from the images by image processing. This method can obtain the robot positions without accumulated position errors. The accuracy of the robot positions estimated by the proposed method is evaluated through several experiments. The results show that the method can obtain the positions with small standard deviations. Therefore, the method offers the possibility of more accurate localization through tuning of appropriate offset parameters.
Keywords: mobile robots, localization, omnidirectional camera, estimating positions
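The bearing-based estimation described above can be illustrated with a small least-squares sketch; the formulation below assumes a known robot heading and is a simplified stand-in for the paper's method:

import numpy as np

# Given known landmark positions and the bearing (direction) to each landmark measured
# by the omnidirectional camera, the robot position follows from a linear least-squares fit.
def locate(landmarks, bearings):
    """landmarks: (n, 2) known positions; bearings: (n,) global-frame angles in radians."""
    landmarks = np.asarray(landmarks, float)
    bearings = np.asarray(bearings, float)
    # Each bearing constrains the robot to the line through the landmark with direction
    # (cos b, sin b):  sin(b)*x - cos(b)*y = sin(b)*lx - cos(b)*ly
    A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
    rhs = np.sin(bearings) * landmarks[:, 0] - np.cos(bearings) * landmarks[:, 1]
    position, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return position

# Toy check: robot at (2, 1) observing three landmarks.
true_pos = np.array([2.0, 1.0])
landmarks = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 4.0]])
bearings = np.arctan2(landmarks[:, 1] - true_pos[1], landmarks[:, 0] - true_pos[0])
print(locate(landmarks, bearings))   # ~[2.0, 1.0]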
Procedia PDF Downloads 442
2465 Determinants of Success of University Industry Collaboration in the Science Academic Units at Makerere University
Authors: Mukisa Simon Peter Turker, Etomaru Irene
Abstract:
This study examined the factors determining the success of University-Industry Collaboration (UIC) in the science academic units (SAUs) at Makerere University. This was prompted by concerns about weak linkages between industry and the academic units at Makerere University. The study examined institutional, relational, output, and framework factors determining the success of UIC in the science academic units at Makerere University. The study adopted a predictive cross-sectional survey design. Data were collected using a questionnaire survey from 172 academic staff from the six SAUs at Makerere University. Stratified, proportionate, and simple random sampling techniques were used to select the sample. The study used descriptive statistics and linear multiple regression analysis to analyze the data. The study findings reveal a coefficient of determination (R-squared) of 0.403 at a significance level of 0.000, suggesting that the predictors explain 40.3% of the variance in UIC success, with a standard error of estimate of 0.60188. The strength of association between institutional factors, relational factors, output factors, and framework factors, taking into consideration all interactions among the study variables, was 64% (R = 0.635). Institutional, relational, output and framework factors accounted for 34% of the variance in the level of UIC success (adjusted R-squared = 0.338). The remaining variance of 66% is explained by factors other than institutional, relational, output, and framework factors. The standardized coefficient statistics revealed that relational factors (β = 0.454, t = 5.247, p = 0.000) and framework factors (β = 0.311, t = 3.770, p = 0.000) are the only statistically significant determinants of the success of UIC in the SAUs at Makerere University. Output factors (β = 0.082, t = 1.096, p = 0.275) and institutional factors (β = 0.023, t = 0.292, p = 0.771) turned out to be statistically insignificant determinants of the success of UIC in the science academic units at Makerere University. The study concludes that relational factors and framework factors positively and significantly determine the success of UIC, but output factors and institutional factors are not statistically significant determinants of UIC in the SAUs at Makerere University. The study recommends strategies to consolidate relational and framework factors to enhance UIC at Makerere University, and further research on the effects of institutional and output factors on the success of UIC in universities.
Keywords: university-industry collaboration, output factors, relational factors, framework factors, institutional factors
Procedia PDF Downloads 61
2464 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing
Authors: Jianan Sun, Ziwen Ye
Abstract:
Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to clearly know the item-trait patterns of administered items when a test like MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in the MCAT item pool are well detected, the interpretability of the items can be improved, which further helps the abilities of the examinees attending the MCAT to be accurately estimated. This research explores solving the item-trait pattern recognition problem of replenished items in the MCAT item pool from the perspective of statistical variable selection. The popular multidimensional item response theory model, the multidimensional two-parameter logistic model, is assumed to fit the response data of MCAT. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect item-trait patterns of replenished items based on the essential information of item responses and ability estimates of examinees collected from a designed MCAT procedure. Several advantages of the proposed method are outlined. First, the proposed method does not strictly depend on the relative order between the replenished items and the selected operational items, so it allows the replenished items to be mixed into the operational items in a reasonable order, such as one that considers content constraints or other test requirements. Second, the LASSO used in this research improves the interpretability of the multidimensional replenished items in MCAT. Third, the proposed method exploits the advantage of the shrinkage idea for variable selection, so it can help check item quality and key dimension features of replenished items, and it saves more time and labor in response data collection than the traditional factor analysis method. Moreover, the proposed method makes sure the dimensions of replenished items are recognized to be consistent with the dimensions of operational items in the MCAT item pool. Simulation studies are conducted to investigate the performance of the proposed method under different conditions for varying dimensionality of the item pool, latent trait correlations, item discrimination, test lengths and item selection criteria in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of the replenished items in the two-dimensional and the three-dimensional item pools. Selecting enough operational items from an item pool consisting of highly discriminating items by Bayesian A-optimality in MCAT can improve the recognition accuracy of item-trait patterns of replenished items for the proposed method. The pattern recognition accuracy for conditions with correlated traits is better than that for conditions with independent traits, especially for item pools consisting of comparatively low-discriminating items. To sum up, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT.
Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection
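A simplified stand-in for the LASSO-based detection described above is sketched below for a single replenished item; it uses an L1-penalized logistic regression from scikit-learn and synthetic data, so it only illustrates the variable-selection idea rather than the authors' exact estimator:

import numpy as np
from sklearn.linear_model import LogisticRegression

# For one replenished item, regress its binary responses on examinees' ability estimates
# with an L1 penalty; traits with non-zero coefficients form the detected item-trait pattern.
rng = np.random.default_rng(7)
n_examinees, n_traits = 2000, 3
theta = rng.normal(size=(n_examinees, n_traits))        # ability estimates from the MCAT run

true_a = np.array([1.2, 0.0, 0.9])                      # item truly loads on traits 1 and 3
true_d = -0.3
p = 1.0 / (1.0 + np.exp(-(theta @ true_a + true_d)))    # M2PL-style response probabilities
responses = rng.binomial(1, p)

lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso_logit.fit(theta, responses)

pattern = (np.abs(lasso_logit.coef_[0]) > 0.05).astype(int)
print(lasso_logit.coef_[0])   # discrimination-like estimates
print(pattern)                # expected item-trait pattern: [1, 0, 1]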
Procedia PDF Downloads 130
2463 The Complete Modal Derivatives
Authors: Sebastian Andersen, Peter N. Poulsen
Abstract:
Basis projection is frequently applied in structural dynamic analysis. The purpose of the method is to improve the computational efficiency, while maintaining a high solution accuracy, by projecting the governing equations onto a small set of carefully selected basis vectors. The present work considers basis projection in kinematic nonlinear systems, with a focus on two widely used types of basis vectors: the system mode shapes and their modal derivatives. The latter basis vectors in particular are given special attention, since only approximate modal derivatives have been used until now. In the present work, the complete modal derivatives, derived from perturbation methods, are presented and compared to the previously applied approximate modal derivatives. The correctness of the complete modal derivatives is illustrated by the example of a harmonically loaded kinematic nonlinear structure modeled by beam elements.
Keywords: basis projection, finite element method, kinematic nonlinearities, modal derivatives
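The basis-projection step that this work builds on can be illustrated with a small numpy example; the sketch below reduces a toy linear system with a mode-shape basis and omits the modal-derivative enrichment itself:

import numpy as np

# Reduce the full mass and stiffness matrices with a basis of mode shapes, giving a much
# smaller system to integrate in time: M_r = Phi^T M Phi, K_r = Phi^T K Phi.
def reduced_system(M, K, n_modes):
    # Mode shapes from the generalized eigenproblem K*phi = lambda*M*phi.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(M, K))
    order = np.argsort(eigvals.real)
    Phi = eigvecs[:, order[:n_modes]].real        # keep the lowest modes as the basis
    return Phi.T @ M @ Phi, Phi.T @ K @ Phi, Phi

# Toy 4-DOF spring-mass chain reduced to a 2-mode basis.
M = np.eye(4)
K = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
M_r, K_r, Phi = reduced_system(M, K, n_modes=2)
print(M_r.shape, K_r.shape)   # (2, 2) (2, 2)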
Procedia PDF Downloads 237
2462 Extracting an Experimental Relation between SMD, Mass Flow Rate, Velocity and Pressure in Swirl Fuel Atomizers
Authors: Mohammad Hassan Ziraksaz
Abstract:
Fuel atomizers are used in a wide range of IC engines, turbojets and a variety of liquid propellant rocket engines. As the fuel spray fully develops, its characteristics approach their ultimate values. Fuel spray characteristics such as SMD, injection pressure, mass flow rate, droplet velocity and spray cone angle play important roles in atomizing the liquid fuel into finely atomized droplets and finally forming the fine fuel spray. A well-performing, fully developed, fine spray without any defects suggests the idea of finding an experimental relation between the main effective spray characteristics. Extracting an experimental relation between SMD and the other physical spray characteristics in swirl fuel atomizers is the main scope of this experimental work. Droplet velocity, fuel mass flow rate, SMD and spray cone angle are the parameters which are measured. A set of twelve reverse-engineered atomizers without any spray defects and a set of eight original atomizers, serving as the well-performing reference spray, are used in this work. More than 350 tests, mostly repeated, were performed. This work shows that although the spray cone angle plays a very effective role in spray formation, after formation it smoothly approaches an almost constant value while the other characteristics change to create fine droplets. Therefore, the work of finding the relation between the characteristics is focused on SMD, droplet velocity, fuel mass flow rate, and injection pressure. The process of fuel spray formation begins at 5 psig injection pressure, where a tiny fuel onion attaches to the injector tip, and ends at 250 psig injection pressure, where a fully developed fine fuel spray forms. Injection pressure is gradually increased to observe how the spray forms. In each step, all parameters are measured and recorded carefully to provide a data bank. Various diagrams have been drawn to study the behavior of the parameters in more detail. Experiments and graphs show that a power equation best describes the changes in the parameters. The experimental relation of SMD with pressure P, fuel mass flow rate Q̇, and droplet velocity V is extracted individually in pairs. Therefore, the proportional relation of SMD with the other parameters is found. Now it is time to find an experimental relation including all the parameters. Using the obtained proportional relation, replacing the parameters with experimentally measured ones, and drawing graphs of experimental SMD versus proportional SMD (SMD_P), a correction equation and consequently the final experimental equation are obtained. This experimental equation is specific to swirl fuel atomizers, and its use under different conditions shows about 3% error; lower error and consequently higher accuracy are expected by increasing the number of experiments and the accuracy of data collection.
Keywords: droplet velocity, experimental relation, mass flow rate, SMD, swirl fuel atomizer
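Fitting a power-law relation of the kind described above can be done by linear least squares in log space; the following sketch uses synthetic data and assumed exponents purely for illustration:

import numpy as np

# Fit SMD = C * P^a * Q^b * V^c by linear least squares on the logarithms;
# the exponents and data here are synthetic placeholders, not the paper's measurements.
def fit_power_law(P, Q, V, SMD):
    X = np.column_stack([np.ones_like(P), np.log(P), np.log(Q), np.log(V)])
    coeffs, *_ = np.linalg.lstsq(X, np.log(SMD), rcond=None)
    logC, a, b, c = coeffs
    return np.exp(logC), a, b, c

rng = np.random.default_rng(3)
P = rng.uniform(5, 250, 60)          # injection pressure
Q = rng.uniform(0.5, 2.0, 60)        # mass flow rate
V = rng.uniform(10, 60, 60)          # droplet velocity
SMD = 120.0 * P**-0.4 * Q**0.25 * V**-0.3 * rng.lognormal(0, 0.02, 60)

print(fit_power_law(P, Q, V, SMD))   # recovers approximately (120, -0.4, 0.25, -0.3)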
Procedia PDF Downloads 161
2461 A New Floating Point Implementation of Base 2 Logarithm
Authors: Ahmed M. Mansour, Ali M. El-Sawy, Ahmed T. Sayed
Abstract:
Logarithms reduce products to sums and powers to products; they play an important role in signal processing, communication, and information theory. They are primarily used in hardware calculations, handling multiplications, divisions, powers, and roots effectively. There are three commonly used bases for logarithms: the logarithm with base 10 is called the common logarithm, the natural logarithm has base e, and the binary logarithm has base 2. This paper demonstrates different methods of calculating log2, showing the complexity of each, and finds the most accurate and efficient one, besides giving insights into their hardware design. We present a new method called Floor Shift for fast calculation of log2, and then we combine this algorithm with a Taylor series to improve the accuracy of the output; we illustrate this using two examples. We finally compare the algorithms and conclude with our remarks.
Keywords: logarithms, log2, floor, iterative, CORDIC, Taylor series
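A hedged sketch of the general floor-shift-plus-Taylor idea (not the paper's exact hardware algorithm) is shown below:

import math

# Obtain the integer part floor(log2(x)) by repeated shifting/halving, then refine the
# fractional part with a short Taylor series for ln(1 + u). Accuracy is worst when the
# mantissa is close to 2 (u near 1), which is where the series converges slowly.
def log2_floor_shift(x, terms=12):
    if x <= 0:
        raise ValueError("log2 requires x > 0")
    e = 0
    while x >= 2.0:        # shift right until the mantissa falls in [1, 2)
        x /= 2.0
        e += 1
    while x < 1.0:         # shift left for inputs below 1
        x *= 2.0
        e -= 1
    u = x - 1.0            # x in [1, 2) -> u in [0, 1)
    ln = sum((-1) ** (k + 1) * u ** k / k for k in range(1, terms + 1))
    return e + ln / 0.6931471805599453   # divide by ln(2)

for value in (0.3, 1.0, 6.0, 1000.0):
    print(value, log2_floor_shift(value), math.log2(value))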
Procedia PDF Downloads 532
2460 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space
Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari
Abstract:
Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (APIs). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all these seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation and bad peak shape of the acetic acid peaks were identified, due to the reaction of acetic acid with the stationary phase of the column (cyanopropyl dimethyl polysiloxane) and the dissociation of acetic acid in water (if used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as diluent to avoid these issues. However, most of the methods published for acetic acid quantification by GC-HS use a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (risk profile 5%). The method was developed using a DB-WAXetr column manufactured by Agilent, with a 530 µm internal diameter, 2.0 µm film thickness, and 30 m length. A constant flow of 6.0 mL/min of helium in constant make-up mode was selected as the carrier gas. The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for testing of residual solvents in amino acid drug substances.
Keywords: amino acid, head space, gas chromatography, total error
Procedia PDF Downloads 148
2459 Complex Rigid-Plastic Deformation Model of Two Degree of Freedom Mechanical System under Impulsive Force
Authors: Abdelouaheb Rouabhi
Abstract:
In order to study the plastic resource of structures, the elastic-plastic single-degree-of-freedom model described by the Prandtl diagram is widely used. The generalization of this model to two degrees of freedom, beyond the scope of a simple rigid-plastic system, allows investigating the plastic resource of structures under loading that is complex and disproportionate in its individual components of deformation (e.g., earthquake). This macro-model greatly increases the accuracy of the calculations carried out. At the same time, the implementation of the proposed macro-model makes calculations easier than the detailed dynamic elastic-plastic calculations in existing software systems such as ANSYS.
Keywords: elastic-plastic, single degree of freedom model, rigid-plastic system, plastic resource, complex plastic deformation, macro-model
Procedia PDF Downloads 379