Search results for: quantification accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4075

2575 Insider Theft Detection in Organizations Using Keylogger and Machine Learning

Authors: Shamatha Shetty, Sakshi Dhabadi, Prerana M., Indushree B.

Abstract:

About 66% of firms report that insider attacks are more likely to occur, and the frequency of insider incidents has increased by 47% over the last two years. The goal of this work is to prevent dangerous employee behavior by using a keylogger together with a Machine Learning (ML) model. A keylogging program, also known as keystroke logging, records every keystroke the user enters and is used to deter improper use of the system. This allows us to collect all textual data, save it in a CSV file, and analyze it with an ML algorithm and the VirusTotal API. Many large companies use such monitoring to track systematically how their employees use computers, the internet, and email. We use the Support Vector Machine (SVM) algorithm together with the VirusTotal API to identify specific patterns and words, automate the analysis, and produce reports for improved monitoring, thereby improving overall efficiency and accuracy.

Keywords: cyber security, machine learning, cyclic process, email notification

Procedia PDF Downloads 46
2574 Basket Option Pricing under Jump Diffusion Models

Authors: Ali Safdari-Vaighani

Abstract:

Pricing financial contracts on several underlying assets has received increasing interest as demand for complex derivatives has grown. Option pricing when asset prices follow jump diffusion processes leads to a partial integro-differential equation (PIDE), an extension of the Black-Scholes PDE with an additional integral term. The aim of this paper is to show how basket option prices in jump diffusion models, mainly the Merton model, can be computed using RBF-based approximation methods. For a test problem, the RBF partition of unity (RBF-PU) method is applied to the numerical solution of the PIDE arising from a two-asset European vanilla put option. The numerical results show the accuracy and efficiency of the presented method.
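
For context, a hedged sketch of the type of equation referred to above: under a two-asset Merton jump diffusion with jump intensity λ and jump-size density f, the option value V(s₁, s₂, t) satisfies a PIDE of the following general form (the notation is illustrative, not taken verbatim from the paper).

```latex
\frac{\partial V}{\partial t}
  + \sum_{i=1}^{2} (r - \lambda \kappa_i)\, s_i \frac{\partial V}{\partial s_i}
  + \frac{1}{2} \sum_{i,j=1}^{2} \rho_{ij}\, \sigma_i \sigma_j\, s_i s_j
      \frac{\partial^2 V}{\partial s_i \partial s_j}
  - (r + \lambda) V
  + \lambda \int_{\mathbb{R}^2_{+}} V(s_1 y_1, s_2 y_2, t)\, f(y_1, y_2)\, \mathrm{d}y_1\, \mathrm{d}y_2 = 0,
\qquad \kappa_i = \mathbb{E}[Y_i - 1].
```

The integral term is what distinguishes the PIDE from the plain Black-Scholes PDE and is the part the RBF-PU discretization must handle in addition to the differential operator.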

Keywords: basket option, jump diffusion, radial basis function, RBF-PUM

Procedia PDF Downloads 345
2573 Levy Model for Commodity Pricing

Authors: V. Benedico, C. Anacleto, A. Bearzi, L. Brice, V. Delahaye

Abstract:

The aim of the present paper is to construct an affordable and reliable commodity price model based on a recalculation of cost through time, which makes it possible to visualize potential risks and thus take more appropriate decisions regarding forecasts. Attention has been focused on the Lévy model, which is more reliable and realistic than the classical Gaussian random model because it takes into account the abrupt jumps observed during sudden price variations. In an application to the energy trading sector, where it has not been used before, the equations corresponding to the Lévy model have been written for electricity pricing in the European market. Parameters have been set in order to predict and simulate the price and its evolution through time with remarkable accuracy. As predicted by the Lévy model, the results show significant spikes that reach levels the currently used Brownian model cannot reproduce.
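
As a hedged illustration of the kind of dynamics meant here (the exact parametrization used by the authors is not given in the abstract), a jump-diffusion price process of Merton/Lévy type can be written as

```latex
\frac{\mathrm{d}S_t}{S_{t^-}} \;=\; \mu\, \mathrm{d}t \;+\; \sigma\, \mathrm{d}W_t
  \;+\; \mathrm{d}\!\left( \sum_{i=1}^{N_t} (Y_i - 1) \right),
```

where W_t is a Brownian motion, N_t a Poisson process with intensity λ, and Y_i i.i.d. jump multipliers; the Poisson term is what produces the price spikes absent from a purely Brownian model.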

Keywords: commodity pricing, Lévy Model, price spikes, electricity market

Procedia PDF Downloads 420
2572 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis

Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc

Abstract:

Effects of erosion and wear on the performance of small caliber guns have been analyzed in numerical and experimental studies, but mainly through qualitative observations; correlations between the change in chamber volume and the maximum pressure are limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the different phases of the weapon's life. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, is carried out. Two test barrels, 5.56x45 mm NATO and 7.62x51 mm NATO, are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method, together with a special WEIBEL radar, is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistic model is developed in the dynamic finite element program LS-DYNA. Two different models are elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of the fired projectiles. The results show good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted: the projectile motions, the dynamic engraving resistances, and the maximum pressures are compared and analyzed. Finally, using the resulting database, a statistical correlation between the muzzle velocity, the maximum pressure, and the chamber volume is established.

Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation

Procedia PDF Downloads 201
2571 PsyVBot: Chatbot for Accurate Depression Diagnosis using Long Short-Term Memory and NLP

Authors: Thaveesha Dheerasekera, Dileeka Sandamali Alwis

Abstract:

The escalating prevalence of mental health issues, such as depression and suicidal ideation, is a matter of significant global concern. It is plausible that a variety of factors, such as life events, social isolation, and preexisting physiological or psychological health conditions, could instigate or exacerbate these conditions. Traditional approaches to diagnosing depression entail a considerable amount of time and necessitate the involvement of adept practitioners. This underscores the necessity for automated systems capable of promptly detecting and diagnosing symptoms of depression. The PsyVBot system employs sophisticated natural language processing and machine learning methodologies, including the use of the NLTK toolkit for dataset preprocessing and the utilization of a Long Short-Term Memory (LSTM) model. The PsyVBot exhibits a remarkable ability to diagnose depression with a 94% accuracy rate through the analysis of user input. Consequently, this resource proves to be efficacious for individuals, particularly those enrolled in academic institutions, who may encounter challenges pertaining to their psychological well-being. The PsyVBot employs a Long Short-Term Memory (LSTM) model that comprises a total of three layers, namely an embedding layer, an LSTM layer, and a dense layer. The stratification of these layers facilitates a precise examination of linguistic patterns that are associated with the condition of depression. The PsyVBot has the capability to accurately assess an individual's level of depression through the identification of linguistic and contextual cues. The task is achieved via a rigorous training regimen, which is executed by utilizing a dataset comprising information sourced from the subreddit r/SuicideWatch. The diverse data present in the dataset ensures precise and delicate identification of symptoms linked with depression, thereby guaranteeing accuracy. PsyVBot not only possesses diagnostic capabilities but also enhances the user experience through the utilization of audio outputs. This feature enables users to engage in more captivating and interactive interactions. The PsyVBot platform offers individuals the opportunity to conveniently diagnose mental health challenges through a confidential and user-friendly interface. Regarding the advancement of PsyVBot, maintaining user confidentiality and upholding ethical principles are of paramount significance. It is imperative to note that diligent efforts are undertaken to adhere to ethical standards, thereby safeguarding the confidentiality of user information and ensuring its security. Moreover, the chatbot fosters a conducive atmosphere that is supportive and compassionate, thereby promoting psychological welfare. In brief, PsyVBot is an automated conversational agent that utilizes an LSTM model to assess the level of depression in accordance with the input provided by the user. The demonstrated accuracy rate of 94% serves as a promising indication of the potential efficacy of employing natural language processing and machine learning techniques in tackling challenges associated with mental health. The reliability of PsyVBot is further improved by the fact that it makes use of the Reddit dataset and incorporates Natural Language Toolkit (NLTK) for preprocessing. PsyVBot represents a pioneering and user-centric solution that furnishes an easily accessible and confidential medium for seeking assistance. 
The present platform is offered as a modality to tackle the pervasive issue of depression and the contemplation of suicide.
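
A minimal sketch of the three-layer architecture described above (embedding, LSTM, dense), written in Keras; the vocabulary size, sequence length, and layer widths are illustrative assumptions, not values reported for PsyVBot.

```python
# Hypothetical embedding -> LSTM -> dense classifier, as described in the abstract.
# VOCAB_SIZE, MAX_LEN, and layer widths are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size after NLTK preprocessing
MAX_LEN = 100        # assumed padded sequence length

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),
    layers.LSTM(64),                       # single LSTM layer
    layers.Dense(1, activation="sigmoid")  # depressed / not-depressed output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```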

Keywords: chatbot, depression diagnosis, LSTM model, natural language processing

Procedia PDF Downloads 55
2570 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested on various types of data, including text and images. Images are exceedingly complex data components that demand considerable processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has changed how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum over classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets such as MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, and the comparison showed that classification on MNIST was more accurate than on the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance the real-time applicability of QML. Deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed and compared on colored images to determine how much better they are than classical models; only a few models, such as quantum variational circuits, take colored images as input. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to grayscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the hybrid QCNN model encoded the images into a quantum simulator for feature extraction using quantum gate rotations; the measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may increase the accuracy further. This study has demonstrated that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
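
A minimal sketch, under assumptions, of the kind of hybrid feature-extraction step described above: a small image patch is encoded into rotation angles on a simulated device and the expectation values are read back as classical features. The device size, embedding, and circuit structure are illustrative choices, not the authors' circuit.

```python
# Hypothetical hybrid "quanvolution" feature-extraction step, sketched with PennyLane.
# The 2x2 patch size, angle embedding, and entangling layer are illustrative assumptions.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def patch_circuit(pixels, weights):
    # Encode a flattened 2x2 grayscale patch as rotation angles.
    qml.AngleEmbedding(pixels, wires=range(n_qubits), rotation="Y")
    # One trainable entangling layer acting as the "quantum convolution".
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # Measurements go back to the classical computer as features.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.random.uniform(0, np.pi, size=(1, n_qubits))
patch = np.array([0.1, 0.5, 0.9, 0.3])  # example normalized pixel values
print(patch_circuit(patch, weights))
```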

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 117
2569 Correlation Matrix for Automatic Identification of Meal-Taking Activity

Authors: Ghazi Bouaziz, Abderrahim Derouiche, Damien Brulin, Hélène Pigot, Eric Campo

Abstract:

Automatic ADL classification is a crucial part of ambient assisted living technologies. It makes it possible to monitor the daily life of the elderly and to detect any change in their behavior that could be related to a health problem. However, detecting ADLs is a challenge, especially because each person has his or her own rhythm for performing them. Therefore, we use a correlation matrix to extract custom rules that enable the detection of ADLs, including the eating activity. Data collected from three different individuals over periods of between 35 and 105 days allow the extraction of personalized eating patterns. Comparing the eating activity extracted from the correlation matrices with the declarative data collected during the survey shows an accuracy of 90%.

Keywords: elderly monitoring, ADL identification, matrix correlation, meal-taking activity

Procedia PDF Downloads 86
2568 Artificial Intelligence Based Comparative Analysis for Supplier Selection in Multi-Echelon Automotive Supply Chains via GEP and ANN Models

Authors: Seyed Esmail Seyedi Bariran, Laysheng Ewe, Amy Ling

Abstract:

Since supplier selection is a vital decision, selecting suppliers in the most accurate way is highly important for enterprises. In this study, a new artificial intelligence approach is applied to address weaknesses of supplier selection. The paper has three parts. The first is choosing appropriate criteria for assessing the suppliers' performance. The next is collecting the data set based on expert input. Afterwards, the data set is divided into two parts, a training set and a testing set. The training set is used to select the best structures of the GEP and ANN models, and the testing set is used to evaluate the power of these methods. The results obtained show that the accuracy of GEP is higher than that of ANN. Moreover, unlike ANN, GEP provides an explicit mathematical equation for supplier selection.

Keywords: supplier selection, automotive supply chains, ANN, GEP

Procedia PDF Downloads 620
2567 Effect of Signal Acquisition Procedure on Imagined Speech Classification Accuracy

Authors: M.R Asghari Bejestani, Gh. R. Mohammad Khani, V.R. Nafisi

Abstract:

Imagined speech recognition is one of the most interesting approaches to BCI development, and much work has been done in this area. Many different experiments have been designed, and hundreds of combinations of feature extraction methods and classifiers have been examined. Reported classification accuracies range from chance level to more than 90%. Based on the non-stationary nature of brain signals, we introduce three classification modes according to the time difference between inter- and intra-class samples. These modes can explain the diversity of reported results and predict the range of classification accuracies to be expected from a given brain signal acquisition procedure. In this paper, a few examples are illustrated by inspecting the results of some previous works.

Keywords: brain computer interface, silent talk, imagined speech, classification, signal processing

Procedia PDF Downloads 142
2566 Development and Evaluation of Simvastatin Based Self Nanoemulsifying Drug Delivery System (SNEDDS) for Treatment of Alzheimer's Disease

Authors: Hardeep

Abstract:

The aim of this research work is to improve the solubility and bioavailability of simvastatin using a self-nanoemulsifying drug delivery system (SNEDDS). The self-emulsifying property of various oils, including essential oils, was evaluated with suitable surfactants and co-surfactants. Method validation parameters, including accuracy, repeatability, inter-day and intra-day precision, ruggedness, and robustness, were within acceptable limits. The liquid SNEDDS was prepared and optimized using a ternary phase diagram together with thermodynamic, centrifugation, and cloud point studies. The globule size of the optimized formulations was less than 200 nm, which is an acceptable nanoemulsion size range. The mean droplet size, drug loading, PDI, and zeta potential of the three optimized formulations were 141.0 nm, 92.22%, 0.23, and -10.13 mV; 153.5 nm, 93.89%, 0.41, and -11.7 mV; and 164.26 nm, 95.26%, 0.41, and -10.66 mV, respectively.

Keywords: simvastatin, self nanoemulsifying drug delivery system, solubility, bioavailability

Procedia PDF Downloads 189
2565 Novel GPU Approach in Predicting the Directional Trend of the S&P500

Authors: A. J. Regan, F. J. Lidgey, M. Betteridge, P. Georgiou, C. Toumazou, K. Hayatleh, J. R. Dibble

Abstract:

Our goal is the development of an algorithm capable of predicting the directional trend of the Standard and Poor's 500 index (S&P 500). Extensive research has been published attempting to predict different financial markets using historical data, testing on an in-sample and trend basis, with many authors employing excessively complex mathematical techniques. In reviewing and evaluating these in-sample methodologies, it became evident that this approach was unable to achieve sufficiently reliable prediction performance for commercial exploitation. For these reasons, we moved to an out-of-sample strategy based on linear regression analysis of an extensive set of financial data correlated with historical closing prices of the S&P 500. We are pleased to report a directional trend accuracy of greater than 55% for tomorrow (t+1) in predicting the S&P 500.
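
A minimal sketch, under assumptions, of the out-of-sample workflow described above: fit a linear regression on one span of data, predict the next span, and score only the sign of the prediction. The synthetic features stand in for the correlated financial series, which are not specified in the abstract.

```python
# Hypothetical sketch of out-of-sample directional prediction with linear regression.
# The feature set and data are illustrative stand-ins, not the authors' data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_days, n_features = 500, 8
X = rng.normal(size=(n_days, n_features))   # stand-in for correlated financial series
y = rng.normal(size=n_days)                 # stand-in for next-day S&P 500 return

split = 400                                 # strict out-of-sample split
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])

directional_accuracy = np.mean(np.sign(pred) == np.sign(y[split:]))
print(f"directional accuracy (t+1): {directional_accuracy:.2%}")
```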

Keywords: financial algorithm, GPU, S&P 500, stock market prediction

Procedia PDF Downloads 342
2564 Effect of Noise Reducing Headphones on the Short-Term Memory Recall of College Students

Authors: Gregory W. Smith, Paul J. Riccomini

Abstract:

The goal of this empirical inquiry is to explore the effect of noise reducing headphones on the short-term memory recall of college students. Immediately following the presentation (via PowerPoint) of 12 unrelated and randomly selected one- and two-syllable words, students were asked to recall as many words as possible. Using a linear model with conditions marked with binary indicators, we examined the frequency and accuracy of words that were recalled. The findings indicate that for some students, a reduction of noise has a significant positive impact on their ability to recall information. As classrooms become more aurally distracting due to the implementation of cooperative learning activities, these findings highlight the need for a quiet learning environment for some learners.

Keywords: auditory distraction, education, instruction, noise, working memory

Procedia PDF Downloads 325
2563 Estimation of the Pore Electrical Conductivity Using Dielectric Sensors

Authors: Fethi Bouksila, Magnus Persson, Ronny Berndtsson, Akissa Bahri

Abstract:

Under salinity conditions, we evaluate the performance of the Hilhorst (2000) model in predicting the pore electrical conductivity (ECp) from the dielectric permittivity and bulk electrical conductivity (ECa) measured with Time and Frequency Domain Reflectometry sensors (TDR, FDR). Using the FDR WET sensor, the RMSE of ECp was 4.15 dS m-1. By replacing the standard soil parameter (K0) in the Hilhorst model with a K0-ECa relationship, the RMSE of ECp decreased to 0.68 dS m-1. The WET sensor could estimate ECp with an accuracy similar to that of TDR if calibrated values of K0 were used instead of the standard values in the Hilhorst model.
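
For context, a hedged sketch of the Hilhorst (2000) relation referred to above, where σ_b is the bulk electrical conductivity (ECa), σ_p the pore electrical conductivity (ECp), ε_b the measured bulk dielectric permittivity, ε_p the permittivity of the pore water, and ε_{σb=0} the soil-specific offset that plays the role of the K0 parameter here:

```latex
\sigma_p \;=\; \frac{\varepsilon_p\, \sigma_b}{\varepsilon_b \;-\; \varepsilon_{\sigma_b = 0}}
```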

Keywords: Hilhorst model, soil salinity, time domain reflectometry, frequency domain reflectometry, dielectric methods

Procedia PDF Downloads 128
2562 Neural Networks with Different Initialization Methods for Depression Detection

Authors: Tianle Yang

Abstract:

As a common mental disorder, depression is a leading cause of various diseases worldwide. Early detection and treatment of depression can dramatically promote remission and prevent relapse. However, conventional ways of diagnosing depression require considerable human effort and impose an economic burden, while still being prone to misdiagnosis. On the other hand, recent studies report that physical characteristics are major contributors to the diagnosis of depression, which inspires us to mine this internal relationship with neural networks instead of relying on clinical experience. In this paper, neural networks are constructed to predict depression from physical characteristics. Two initialization methods are examined: Xavier and Kaiming initialization. Experimental results show that a three-layer neural network with Kaiming initialization achieves 83% accuracy.
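
A minimal sketch, under assumptions, of a three-layer fully connected network with either Xavier or Kaiming initialization in PyTorch; the layer sizes and feature count are illustrative, not the paper's configuration.

```python
# Hypothetical 3-layer classifier with switchable Xavier / Kaiming weight initialization.
# Layer widths and the number of input features are assumptions for illustration.
import torch
import torch.nn as nn

def make_model(n_features=10, init="kaiming"):
    model = nn.Sequential(
        nn.Linear(n_features, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 2),          # depressed / not depressed
    )
    for layer in model:
        if isinstance(layer, nn.Linear):
            if init == "kaiming":
                nn.init.kaiming_normal_(layer.weight, nonlinearity="relu")
            else:
                nn.init.xavier_uniform_(layer.weight)
            nn.init.zeros_(layer.bias)
    return model

model = make_model(init="kaiming")
print(model)
```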

Keywords: depression, neural network, Xavier initialization, Kaiming initialization

Procedia PDF Downloads 120
2561 Mobile Health Approaches in the Management of Breast Cancer: A Qualitative Content Analysis

Authors: Hyekyung Woo, Gwihyun Kim

Abstract:

mHealth, which encompasses mobile health technologies and interventions, is rapidly evolving in various medical specialties, and its impact is evident in oncology. This review describes current trends in research addressing the integration of mHealth into the management of breast cancer by examining evaluations of mHealth and its contributions across the cancer care continuum. Mobile technologies are perceived as effective in prevention and as feasible for managing breast cancer, but the diagnostic accuracy of these tools remains in doubt. Not all phases of breast cancer treatment involve mHealth, and not all have been addressed by research. These drawbacks in the application of mHealth to breast cancer management call for intensified research to strengthen its role in breast cancer care.

Keywords: mobile application, breast cancer, content analysis, mHealth

Procedia PDF Downloads 302
2560 A Type-2 Fuzzy Model for Link Prediction in Social Network

Authors: Mansoureh Naderipour, Susan Bastani, Mohammad Fazel Zarandi

Abstract:

Predicting links that may occur in the future, as well as missing links, in social networks is an attractive problem in social network analysis. Granular computing can help us model the relationships between human-based systems and the social sciences in this field. In this paper, we present a model based on a granular computing approach and Type-2 fuzzy logic to predict links with regard to nodes' activity and the relationship between two nodes. Our model is tested on collaboration networks. It is found that the accuracy of prediction is significantly higher than that of Type-1 fuzzy and crisp approaches.

Keywords: social network, link prediction, granular computing, type-2 fuzzy sets

Procedia PDF Downloads 316
2559 Forward Conditional Restricted Boltzmann Machines for the Generation of Music

Authors: Johan Loeckx, Joeri Bultheel

Abstract:

Recently, the application of deep learning to music has gained popularity. Its true potential, however, has been largely unexplored. In this paper, a new idea for representing the dynamic behavior of music is proposed. A "forward" conditional RBM takes into account not only preceding but also future samples during training. Though this may sound controversial at first sight, it will be shown that it makes sense from a musical and neuro-cognitive perspective. The model is applied to reconstruct music based upon the first notes and to improvise in the musical style of a composer. Contrary to expectations, reconstruction accuracy was not significantly improved with respect to a regular CRBM of the same order. More research is needed to test the performance on unseen data.

Keywords: deep learning, restricted Boltzmann machine, music generation, conditional restricted Boltzmann machine (CRBM)

Procedia PDF Downloads 516
2558 Application of Scanning Electron Microscopy and X-Ray Evaluation of the Main Digestion Methods for Determination of Macroelements in Plant Tissue

Authors: Krasimir I. Ivanov, Penka S. Zapryanova, Stefan V. Krustev, Violina R. Angelova

Abstract:

Three commonly used digestion methods (dry ashing, acid digestion, and microwave digestion) were compared, in different variants, for the digestion of tobacco leaves. Three main macroelements (K, Ca, and Mg) were analysed using an AAS spectrometer (Spectra AA 220, Varian, Australia). The accuracy and precision of the measurements were evaluated using the Polish reference material CTR-VTL-2 (Virginia tobacco leaves). To elucidate the problems with elemental recovery, X-ray and SEM-EDS analyses of all residues after digestion were performed. The X-ray investigation showed the formation of KClO4 when HClO4 was used as part of the acid mixture. The use of HF in the Ca and Mg determinations led to the formation of CaF2 and MgF2. These results were confirmed by energy dispersive X-ray microanalysis. The SPSS program for Windows was used for statistical data processing.

Keywords: digestion methods, plant tissue, determination of macroelements, K, Ca, Mg

Procedia PDF Downloads 305
2557 Improved FP-Growth Algorithm with Multiple Minimum Supports Using Maximum Constraints

Authors: Elsayeda M. Elgaml, Dina M. Ibrahim, Elsayed A. Sallam

Abstract:

Association rule mining is one of the most important fields of data mining and knowledge discovery. In this paper, we propose an efficient multiple-minimum-support frequent pattern growth algorithm, called "MSFP-growth", that enhances the FP-growth algorithm by adding an infrequent child node pruning step with multiple minimum supports using maximum constraints. The algorithm is implemented and compared with other common algorithms: Apriori with multiple minimum supports using maximum constraints, and FP-growth. The experimental results show that the rules mined by the proposed algorithm are interesting and that our algorithm achieves better performance than the other algorithms without sacrificing accuracy.
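
As a point of reference, a hedged sketch of the standard single-minimum-support FP-growth baseline that MSFP-growth extends, using mlxtend; this is only the baseline, not the multiple-minimum-support pruning proposed in the paper, and the toy transactions are illustrative.

```python
# Baseline sketch: standard FP-growth with one global minimum support via mlxtend.
# It does NOT implement the multiple-minimum-support pruning described in the abstract.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

transactions = [["bread", "milk"], ["bread", "diaper", "beer"],
                ["milk", "diaper", "beer"], ["bread", "milk", "diaper"]]

te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

frequent = fpgrowth(df, min_support=0.5, use_colnames=True)   # single global min support
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```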

Keywords: association rules, FP-growth, multiple minimum supports, Weka tool

Procedia PDF Downloads 474
2556 Numerical Investigation of the Electromagnetic Common Rail Injector Characteristics

Authors: Rafal Sochaczewski, Ksenia Siadkowska, Tytus Tulwin

Abstract:

The paper describes the modeling of a fuel injector for common rail systems. A one-dimensional model of a solenoid-valve-controlled injector with a Valve Covered Orifice (VCO) nozzle was modelled in AVL Hydsim. This model captures the dynamic phenomena that occur in the injector. The accuracy of the calibration, based on adjusting the parameters of the control valve and the nozzle needle lift, was verified by comparing the numerical injector flow rates. Our model is capable of precisely simulating the injector operating parameters in relation to injection time and fuel pressure in the fuel rail. As a result, characteristics of the injector flow rate and backflow were obtained.

Keywords: common rail, diesel engine, fuel injector, modeling

Procedia PDF Downloads 402
2555 A Convolutional Deep Neural Network Approach for Skin Cancer Detection Using Skin Lesion Images

Authors: Firas Gerges, Frank Y. Shih

Abstract:

Malignant melanoma, known simply as melanoma, is a type of skin cancer that appears as a mole on the skin. It is critical to detect this cancer at an early stage because it can spread across the body and may lead to the patient's death; when detected early, melanoma is curable. In this paper, we propose a deep learning model (a convolutional neural network) to automatically classify skin lesion images as malignant or benign. Images underwent certain pre-processing steps to diminish the effect of the normal skin region on the model. The proposed model showed a significant improvement over previous work, achieving an accuracy of 97%.
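
A minimal sketch, under assumptions, of a small CNN for binary benign/malignant lesion classification; the input size, depth, and filter counts are illustrative and not the network described in the paper.

```python
# Hypothetical small CNN for malignant-vs-benign skin lesion classification.
# Input resolution and filter counts are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),          # assumed pre-processed lesion image size
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # malignant vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```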

Keywords: deep learning, skin cancer, image processing, melanoma

Procedia PDF Downloads 132
2554 Age Estimation from Upper Anterior Teeth by Pulp/Tooth Ratio Using Peri-Apical X-Rays among Egyptians

Authors: Fatma Mohamed Magdy Badr El Dine, Amr Mohamed Abd Allah

Abstract:

Introduction: Age estimation of individuals is one of the crucial steps in forensic practice. Traditional methods rely on the length of the diaphysis of the long bones of the limbs, epiphyseal-diaphyseal union, fusion of the primary ossification centers, as well as dental eruption. However, there is a growing need for precise and reliable methods to estimate age, especially in cases where dismembered corpses, burnt bodies, or putrefied or fragmented parts are recovered. Teeth are the hardest and most indestructible structures in the human body. In recent years, assessment of the pulp/tooth area ratio, as an indirect quantification of secondary dentine deposition, has received considerable attention. However, little work has been done in Egypt on the applicability of the pulp/tooth ratio for age estimation. Aim of the Work: The present work was designed to assess Cameriere's method for age estimation from the pulp/tooth ratio of maxillary canines and central and lateral incisors in a sample from the Egyptian population, and to formulate regression equations to be used as population-based standards for age determination. Material and Methods: The present study was conducted on 270 peri-apical X-rays of maxillary canines and central and lateral incisors (collected from 131 males and 139 females aged between 19 and 52 years). The pulp and tooth areas were measured using the Adobe Photoshop software, and the pulp/tooth area ratio was computed. Linear regression equations were determined separately for canines, central incisors, and lateral incisors. Results: A significant correlation was recorded between the pulp/tooth area ratio and chronological age. The linear regression analysis revealed coefficients of determination of R² = 0.824 for canine, 0.588 for central incisor, and 0.737 for lateral incisor teeth. Three regression equations were derived. Conclusion: The pulp/tooth ratio is a useful technique for estimating age among Egyptians, and the regression equation derived from canines gave better results than those from the incisors.
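
A hedged sketch of the general form of the regressions used in this kind of study, with AR the measured pulp/tooth area ratio and β₀, β₁ coefficients estimated separately for canines, central incisors, and lateral incisors (the fitted coefficients themselves are reported in the paper, not here):

```latex
\mathrm{Age} \;=\; \beta_0 \;+\; \beta_1 \cdot AR,
\qquad AR \;=\; \frac{\text{pulp area}}{\text{tooth area}}
```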

Keywords: age determination, canines, central incisors, Egypt, lateral incisors, pulp/tooth ratio

Procedia PDF Downloads 178
2553 Parallel Computation of the Covariance-Matrix

Authors: Claude Tadonki

Abstract:

We address the issues related to the computation of the covariance matrix. This matrix is likely to be ill conditioned following its canonical expression, and consequently raises serious numerical issues. The underlying linear system, which should therefore be solved by means of iterative approaches, becomes computationally challenging. A huge number of iterations is expected in order to reach an acceptable level of convergence, necessary to meet the required accuracy of the computation. In addition, this linear system needs to be solved at each iteration, following the general form of the covariance matrix. Putting all this together, it comes down to computing the associated matrix-vector product as fast as possible. This is our purpose in this work, where we consider and discuss skillful formulations of the problem and then propose a parallel implementation of the matrix-vector product involved. Numerical and performance-oriented discussions are provided based on experimental evaluations.
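
A minimal sketch, under assumptions, of the kernel in question: a row-blocked parallel matrix-vector product checked against the direct product. The blocking scheme and thread pool are illustrative choices, not the authors' implementation.

```python
# Hypothetical row-blocked parallel matrix-vector product (the kernel discussed above).
# Blocking and the thread pool are assumptions for illustration, not the paper's code.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_matvec(A, x, n_workers=4):
    """Compute A @ x by splitting the rows of A across a thread pool."""
    blocks = np.array_split(np.arange(A.shape[0]), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda rows: A[rows] @ x, blocks)
    return np.concatenate(list(parts))

rng = np.random.default_rng(0)
A = rng.normal(size=(2000, 2000))
A = A @ A.T                     # symmetric positive semi-definite, like a covariance matrix
x = rng.normal(size=2000)
print(np.allclose(parallel_matvec(A, x), A @ x))
```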

Keywords: covariance-matrix, multicore, numerical computing, parallel computing

Procedia PDF Downloads 306
2552 Time of Death Determination in Medicolegal Death Investigations

Authors: Michelle Rippy

Abstract:

Medicolegal death investigation has historically been a field that receives little research attention or advancement, as all of the subjects are deceased. Public health threats, drug epidemics, and contagious diseases are typically recognized in decedents first, and thorough, accurate death investigations can assist epidemiological research and prevention programs. One vital component of medicolegal death investigation is determining the decedent's time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances, and providing vital facts in civil matters. Popular television portrays an unrealistic forensic ability to provide the exact time of death to the minute for someone found deceased with no witnesses present. In reality, the time of death of an unattended decedent can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, liver temperatures were taken invasively by death investigators to determine the decedent's core temperature, which was then entered into an equation to estimate the time of death. Owing to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was called into question, and this once commonplace practice lost scientific support. Currently, medicolegal death investigators rely on three major post-mortem changes at a death scene. Many factors are considered in this subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease, and recent exercise. Current research is utilizing non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent's ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death. The research is significant, can bring accuracy to a historically inaccurate area, and could considerably improve criminal and civil death investigations. The goal of the research is to provide a scientific basis for time of death determination in unwitnessed deaths, instead of the art that the determination currently is. The research is currently in progress, with expected termination in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature; the decedent's height, weight, sex, and age; layers of clothing; found position; whether medical intervention occurred; and whether the death was witnessed. These data will be analyzed across the variables studied and will be available for presentation in January 2019.

Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic

Procedia PDF Downloads 108
2551 Design and Development of an Algorithm to Predict Fluctuations of Currency Rates

Authors: Nuwan Kuruwitaarachchi, M. K. M. Peiris, C. N. Madawala, K. M. A. R. Perera, V. U. N Perera

Abstract:

Dealing with businesses in foreign markets has always held a special place in a country's economy. Political and social factors come into play, making currency rates fluctuate rapidly. Currency rate prediction has become an important factor for larger international businesses, since large amounts of money are exchanged between countries. This research focuses on comparing the accuracy of mainly three models: Autoregressive Integrated Moving Average (ARIMA), Artificial Neural Networks (ANN), and Support Vector Machines (SVM). A series of import, export, and USD exchange rate data with respect to the LKR was selected for training using the above-mentioned algorithms. After training on the data set and comparing the algorithms, it could be seen that prediction with SVM performed better than the other models, and it was improved further by combining the SVM and SVR models.
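
A minimal sketch, under assumptions, of the SVM/SVR side of such a comparison: an SVR fitted on lagged values of a synthetic exchange-rate series and scored out of sample. The synthetic series, lag count, and hyperparameters are illustrative, not the study's data or settings.

```python
# Hypothetical SVR exchange-rate prediction on lagged features.
# The synthetic series, lag count, and SVR hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
rate = 300 + np.cumsum(rng.normal(0, 0.5, size=400))   # stand-in USD/LKR series

lags = 5
X = np.column_stack([rate[i:len(rate) - lags + i] for i in range(lags)])
y = rate[lags:]

split = 300
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("out-of-sample RMSE:", mean_squared_error(y[split:], pred) ** 0.5)
```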

Keywords: ARIMA, ANN, FFNN, RMSE, SVM, SVR

Procedia PDF Downloads 197
2550 Calculating Shear Strength Parameter from Simple Shear Apparatus

Authors: G. Nitesh

Abstract:

The shear strength of soils is a crucial parameter in stability analysis. Therefore, it is important to determine reliable values to ensure the accuracy of stability analyses. Direct shear tests are mostly performed to determine the shear strength of cohesionless soils. The major limitation of the direct shear test is that failure takes place along a pre-defined failure plane, whereas in the actual shearing mechanism in the field, failure occurs along the weakest plane rather than a pre-defined one. This leads to overestimating the strength parameter; hence, a new apparatus called the simple shear apparatus is developed and used in this study to determine the shear strength parameter under conditions that simulate the field.

Keywords: direct shear, simple shear, angle of shear resistance, cohesionless soils

Procedia PDF Downloads 407
2549 The Antecedent Factor Affecting the Entrepreneurs’ Decision Making for Using Accounting Office Service in Chiang Mai Province

Authors: Nawaporn Thongnut

Abstract:

The objective was to study the process of preparing the accounts of Thai temples and to examine the performance and quality of the temples' accounting preparation in accordance with the regulations. The population comprised the accountants and individuals involved in the accounting preparation of 17 temples in suburban Bangkok. The measurement instrument used in this study was a questionnaire, and the data were analyzed with descriptive statistics and presented in percentage tables describing the demographic characteristics. The study found that temple wardens were responsible for the accounting and reporting of the temples, while abbots checked the accuracy of the accounts in the monasteries. Mostly, there was no external auditing of the monasteries' accounts. For most of the monasteries, the practice when receiving income had been to keep the financial documents in an orderly manner.

Keywords: corporate social responsibility, creating shared value, management accountant’s roles, stock exchange of Thailand

Procedia PDF Downloads 224
2548 Prediction of Bariatric Surgery Publications by Using Different Machine Learning Algorithms

Authors: Senol Dogan, Gunay Karli

Abstract:

Identification of relevant publications based on a Medline query is time-consuming and error-prone. An automated process has the potential to solve this problem without any manual work. To the best of our knowledge, our study is the first to investigate the ability of machine learning to identify relevant articles accurately. Five different machine learning algorithms were tested using 23 predictors based on several metadata fields attached to publications. We find that the boosted model is the best-performing algorithm, with an overall accuracy of 96%; its specificity and sensitivity are 97% and 93%, respectively. As a result of this work, we expect that the same procedure can be applied to make sense of large cancer gene expression data sets.

Keywords: prediction of publications, machine learning, algorithms, bariatric surgery, comparison of algorithms, boosted tree, logistic regression, ANN model

Procedia PDF Downloads 199
2547 Bidirectional Encoder Representations from Transformers Sentiment Analysis Applied to Three Presidential Pre-Candidates in Costa Rica

Authors: Félix David Suárez Bonilla

Abstract:

A sentiment analysis service to detect polarity (positive, neutral, and negative), based on transfer learning, was built using a Spanish version of BERT and applied to tweets written in Spanish. The dataset consisted of 11,975 reviews extracted from Google Play using the google-play-scrapper package. The BETO-based model was trained with the AdamW optimizer, a batch size of 16, a learning rate of 2x10⁻⁵, and 10 epochs. The system was tested using tweets of three presidential pre-candidates from Costa Rica and was finally validated using human-labeled examples, achieving an accuracy of 83.3%.
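
A minimal sketch, under assumptions, of fine-tuning a Spanish BERT for three-class polarity with the hyperparameters quoted above (AdamW is the Trainer's default optimizer). The checkpoint name and the toy dataset are placeholders, not the authors' exact setup.

```python
# Hypothetical fine-tuning of a Spanish BERT (BETO-style checkpoint) for 3-class polarity.
# Checkpoint name and toy reviews are assumptions; batch 16, lr 2e-5, 10 epochs per abstract.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "dccuchile/bert-base-spanish-wwm-uncased"   # assumed BETO checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

class ReviewDataset(torch.utils.data.Dataset):
    """Toy stand-in for labeled Google Play reviews (0=negative, 1=neutral, 2=positive)."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train_ds = ReviewDataset(["muy buena aplicación", "no me gusta", "está bien"], [2, 0, 1])

args = TrainingArguments(output_dir="polarity-model", per_device_train_batch_size=16,
                         learning_rate=2e-5, num_train_epochs=10)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```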

Keywords: NLP, transfer learning, BERT, sentiment analysis, social media, opinion mining

Procedia PDF Downloads 164
2546 Reductive Control in the Management of Redundant Actuation

Authors: Mkhinini Maher, Knani Jilani

Abstract:

We present in this work the performance of an omnidirectional mobile robot by evaluating its management of actuation redundancy, and thus arrive at the predictive control implemented. The distribution of the wrench over the robot's actuators, through the Moore-Penrose pseudo-inverse, corresponds to a geometric distribution of efforts. We show that the load on the vehicle's wheels is not equally distributed, depending on the wheel configuration and the robot's movement; thus, the sliding threshold is not the same for the three wheels of the vehicle. We suggest exploiting the redundancy of actuation to reduce the risk of wheel sliding and thereby improve the accuracy of displacement. This kind of approach has previously been studied for legged robots.
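
A minimal sketch, under assumptions, of the pseudo-inverse distribution step mentioned above: mapping a desired planar body wrench (Fx, Fy, Mz) to wheel traction forces with the Moore-Penrose pseudo-inverse, which yields the minimum-norm (least-effort) distribution and generalizes to redundant wheel layouts. The wheel geometry is an illustrative assumption, not the paper's robot.

```python
# Hypothetical wrench distribution over three omnidirectional wheels via the
# Moore-Penrose pseudo-inverse. Wheel radius R and 120-degree spacing are assumptions.
import numpy as np

R = 0.2                                        # assumed wheel mounting radius [m]
angles = np.deg2rad([0.0, 120.0, 240.0])       # assumed wheel placement

# Columns map each wheel's tangential traction force to its contribution to (Fx, Fy, Mz).
J = np.array([[-np.sin(a), np.cos(a), R] for a in angles]).T   # 3x3 here, 3xN in general

wrench = np.array([5.0, 2.0, 0.5])             # desired body force/torque
wheel_forces = np.linalg.pinv(J) @ wrench      # minimum-norm distribution of efforts

print("wheel traction forces:", wheel_forces)
print("reconstructed wrench:", J @ wheel_forces)
```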

Keywords: mobile robot, actuation, redundancy, omnidirectional, Moore-Penrose pseudo-inverse, reductive control

Procedia PDF Downloads 502