Search results for: Bayes intervals
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 722

692 Performance Analysis with the Combination of Visualization and Classification Technique for Medical Chatbot

Authors: Shajida M., Sakthiyadharshini N. P., Kamalesh S., Aswitha B.

Abstract:

Natural Language Processing (NLP) continues to play a strategic role in complaint identification and drug discovery during the current pandemic. This abstract provides an overview of a performance analysis combining visualization and classification techniques of NLP for a medical chatbot. Sentiment analysis is an important aspect of NLP used to determine the emotional tone behind a piece of text, and it has been applied to various domains, including medical chatbots. In this work, we compared the combination of a decision tree with a heatmap and Naïve Bayes with a word cloud. The performance of the chatbot was evaluated using accuracy, and the results indicate that combining visualization and classification techniques significantly improves the chatbot's performance.

Keywords: sentiment analysis, NLP, medical chatbot, decision tree, heatmap, naïve Bayes, word cloud

Procedia PDF Downloads 48
691 A Supervised Approach for Detection of Singleton Spam Reviews

Authors: Atefeh Heydari, Mohammadali Tavakoli, Naomie Salim

Abstract:

In recent years, online reviews have become one of the most important sources of customers' opinions. They are increasingly used by individuals and organisations to make purchase and business decisions. Unfortunately, for profit or fame, fraudsters produce deceptive reviews to hoodwink potential customers. Their activities mislead not only potential customers making purchasing decisions and organisations reshaping their business, but also opinion mining techniques, preventing them from reaching accurate results. Spam reviews can be divided into two main groups, i.e. multiple and singleton spam reviews. Detecting a singleton spam review, i.e. the only review written by a user ID, is extremely challenging due to the lack of clues available for detection. Singleton spam reviews are very harmful, and the various features and proofs used in multiple spam review detection are not applicable in this case. The current research proposes a novel supervised technique to detect singleton spam reviews. To achieve this, various features are proposed in this study and combined with the most appropriate features extracted from the literature, then employed in a classifier. In order to compare the performance of different classifiers, SVM and naive Bayes classification algorithms were used for model building. The results revealed that SVM was more accurate than naive Bayes and that our proposed technique is capable of detecting singleton spam reviews effectively.
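
A minimal sketch of the kind of SVM versus naive Bayes comparison described above, using scikit-learn. The file name, column names, and plain TF-IDF features are assumptions for illustration; the paper relies on its own engineered singleton-review features rather than raw text alone.

```python
# Minimal sketch of the SVM vs. naive Bayes comparison described above.
# Assumptions: "reviews.csv" with columns "text" and "is_spam" is hypothetical;
# TF-IDF stands in for the paper's engineered singleton-review features.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("reviews.csv")                      # hypothetical labelled data
X = TfidfVectorizer(min_df=2).fit_transform(df["text"])
y = df["is_spam"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

for name, clf in [("naive Bayes", MultinomialNB()), ("SVM", LinearSVC())]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```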

Keywords: classification algorithms, Naïve Bayes, opinion review spam detection, singleton review spam detection, support vector machine

Procedia PDF Downloads 280
690 Reexamining Contrarian Trades as a Proxy of Informed Trades: Evidence from China's Stock Market

Authors: Dongqi Sun, Juan Tao, Yingying Wu

Abstract:

This paper reexamines the appropriateness of contrarian trades as a proxy of informed trades, using high-frequency Chinese stock data. Employing this measure over 5-minute intervals, a U-shaped intraday pattern of the probability of informed trades (PIN) is found for the CSI300 stocks, which is consistent with previous findings for other markets. However, when the trades are divided into different sizes, a reversed U-shaped PIN from large-sized trades, as opposed to the U-shaped pattern for small- and medium-sized trades, is observed. Drawing on the mixed evidence across trade sizes, the price impact of trades is further investigated. By examining the relationship between trade imbalances and unexpected returns, large-sized trades are found to have a significant price impact. This implies that in those intervals with large trades, it is non-contrarian trades that are more likely to be informed trades. Taking into account the price impact of large-sized trades, non-contrarian trades are used to proxy for informed trading in intervals with large trades, while contrarian trades are still used to measure informed trading in other intervals. A stronger U-shaped PIN is demonstrated with this modification. Auto-correlation and information advantage tests for robustness also support the modified informed trading measure.
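
An illustrative sketch of binning trades into 5-minute intervals and computing the share of contrarian trades as an informed-trading proxy. The column names and the simple definition of "contrarian" used here (trade direction opposite to the prior price move) are assumptions for illustration, not the paper's exact specification.

```python
# Illustrative sketch: proportion of contrarian trades per 5-minute interval.
# Columns "time", "price", "direction" (+1 buyer-initiated, -1 seller-initiated)
# are placeholders; "contrarian" = trade against the prior price move.
import pandas as pd

trades = pd.read_csv("trades.csv", parse_dates=["time"])   # hypothetical tick data
trades = trades.sort_values("time").set_index("time")

# prior price move: sign of the last price change before each trade
prior_move = trades["price"].diff().shift(1).fillna(0).apply(
    lambda d: 1 if d > 0 else (-1 if d < 0 else 0))

# contrarian: buy after a fall or sell after a rise
trades["contrarian"] = (trades["direction"] * prior_move) < 0

pin_proxy = trades["contrarian"].resample("5min").mean()   # share of contrarian trades
print(pin_proxy.head())
```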

Keywords: contrarian trades, informed trading, price impact, trade imbalance

Procedia PDF Downloads 140
689 Efficient High Fidelity Signal Reconstruction Based on Level Crossing Sampling

Authors: Negar Riazifar, Nigel G. Stocks

Abstract:

This paper proposes strategies in level crossing (LC) sampling and reconstruction that provide high-fidelity signal reconstruction for speech signals; these strategies circumvent the problem of the exponentially increasing number of samples as the bit depth is increased and hence are highly efficient. Specifically, the results indicate that the distribution of the intervals between samples is one of the key factors in the quality of signal reconstruction; including samples with short intervals does not improve the accuracy of the signal reconstruction, whilst samples with large intervals lead to numerical instability. The proposed sampling method, termed reduced conventional level crossing (RCLC) sampling, exploits redundancy between samples to improve the efficiency of the sampling without compromising performance. A reconstruction technique is also proposed that enhances numerical stability through linear interpolation of samples separated by large intervals. Interpolation is demonstrated to improve the accuracy of the signal reconstruction in addition to the numerical stability. We further demonstrate that the RCLC and interpolation methods can give useful levels of signal recovery even if the average sampling rate is less than the Nyquist rate.
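
A simplified illustration of generic level-crossing sampling followed by linear interpolation onto a uniform grid, in the spirit of the reconstruction idea above. This is not the paper's RCLC scheme; the test signal and the 4-bit level grid are arbitrary choices.

```python
# Generic level-crossing (LC) sampling of a synthetic tone pair, then reconstruction
# by linear interpolation onto the original uniform time grid.
import numpy as np

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
x = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

levels = np.linspace(-1, 1, 2 ** 4)            # 4-bit amplitude grid (illustrative)
samples_t, samples_x = [], []
for i in range(1, len(x)):
    if x[i] == x[i - 1]:
        continue
    lo, hi = sorted((x[i - 1], x[i]))
    for lev in levels[(levels >= lo) & (levels <= hi)]:     # levels crossed in this step
        frac = (lev - x[i - 1]) / (x[i] - x[i - 1])         # linear estimate of crossing time
        samples_t.append(t[i - 1] + frac / fs)
        samples_x.append(lev)

order = np.argsort(samples_t)                  # crossing times within a step may be unordered
samples_t = np.array(samples_t)[order]
samples_x = np.array(samples_x)[order]

x_rec = np.interp(t, samples_t, samples_x)     # reconstruction by linear interpolation
snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_rec) ** 2))
print(f"{samples_t.size} LC samples vs {t.size} uniform samples, SNR ≈ {snr:.1f} dB")
```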

Keywords: level crossing sampling, numerical stability, speech processing, trigonometric polynomial

Procedia PDF Downloads 126
688 Effects of Irrigation Intervals on Antioxidant Enzyme Activity in Black Carrot Leaves (Daucus carota L.)

Authors: Hakan Arslan, Deniz Ekinci, Alper Gungor, Gurkan Bilir, Omer Tas, Mehmet Altun

Abstract:

Drought is one of the major abiotic stresses affecting agricultural production worldwide. In this study, leaf samples were taken from carrot plants grown under drought stress conditions during the harvesting period. The plants were irrigated at three irrigation intervals (4, 6 and 8 days), and the irrigation water regime was set up in pots. The changes in activities of antioxidant enzymes such as glutathione reductase (GR), glutathione S-transferase (GST) and superoxide dismutase (SOD) in leaves of black carrot were investigated. The activities of the antioxidant enzymes (GR, GST, SOD) varied significantly with irrigation interval. The highest values of GR, GST and SOD were determined at the irrigation interval of 6 days, and all antioxidant activity values decreased at the 8-day irrigation interval. Based on these results, it is suggested that antioxidant enzyme activity can be used to identify optimum irrigation intervals for plants.

Keywords: antioxidant enzyme, carrot, drought, irrigation interval

Procedia PDF Downloads 188
687 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection

Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew

Abstract:

Detecting email spam is an important task in the era of digital technology, which needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique provides interpretable explanations for specific classifications of emails to help users understand the model's decision-making process. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows the creation of simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out key elements that drive classification results, thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that improves users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
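
A minimal sketch of how LIME can be attached to a text spam classifier of the kind compared above. The dataset file and column names are placeholders, and the base model shown is multinomial naive Bayes, one of the listed baselines rather than the authors' final model.

```python
# Minimal sketch: LIME explanations for a naive Bayes spam pipeline.
# "emails.csv" and its columns are placeholders.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from lime.lime_text import LimeTextExplainer

df = pd.read_csv("emails.csv")                      # placeholder: columns "text", "label"
pipe = make_pipeline(TfidfVectorizer(), MultinomialNB())
pipe.fit(df["text"], df["label"])                   # labels assumed to be "ham" / "spam"

explainer = LimeTextExplainer(class_names=["ham", "spam"])
exp = explainer.explain_instance(df["text"].iloc[0], pipe.predict_proba, num_features=8)
print(exp.as_list())                                # influential terms and their weights
```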

Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression.

Procedia PDF Downloads 19
686 Early Stage Suicide Ideation Detection Using Supervised Machine Learning and Neural Network Classifier

Authors: Devendra Kr Tayal, Vrinda Gupta, Aastha Bansal, Khushi Singh, Sristi Sharma, Hunny Gaur

Abstract:

In today's world, suicide is a serious problem. In order to save lives, early detection and prevention of suicide attempts should be addressed. A good number of at-risk people use social media platforms to talk about their issues or find information on related topics. Twitter and Reddit are two of the most common platforms used for expressing oneself. Extensive research has already been done in this field. Through supervised classification techniques such as Naïve Bayes, Bernoulli Naïve Bayes, and Multilayer Perceptron on a Reddit dataset, we demonstrate the early recognition of suicidal ideation. We also performed a comparative analysis of these approaches and used accuracy, recall score, F1 score, and precision score for the analysis.

Keywords: machine learning, suicide ideation detection, supervised classification, natural language processing

Procedia PDF Downloads 65
685 Utilization of Two Kind of Recycling Greywater in Irrigation of Syngonium SP. Plants Grown Under Different Water Regime

Authors: Sami Ali Metwally, Bedour Helmy Abou-Leila, Hussien I.Abdel-Shafy

Abstract:

The work was carried out in the greenhouse of the National Research Centre. A pot experiment was conducted during the 2020 and 2021 seasons to study the effect of two types of recycled greywater (SBR, Sequencing Batch Reactor, and MBR, Membrane Bioreactor) and three watering intervals (15, 20 and 25 days) on the growth of Syngonium plants. Examination of the data showed that MBR recorded an increase in vegetative growth parameters, osmotic pressure, transpiration rate, chlorophyll a and b, carotenoids and carbohydrates compared with SBR. As for watering intervals, the highest values of most growth parameters were obtained from plants irrigated every 20 days compared with the other treatments. The 15-day irrigation interval recorded a significant increase in osmotic pressure, transpiration rate and photosynthetic pigments, while carbohydrate values decreased. Regarding the interaction between water type and watering interval, SBR recorded the highest values of most growth parameters with irrigation every 20 days, while MBR with irrigation every 25 days showed the highest values of leaf area and leaf fresh weight compared with the other treatments.

Keywords: grey water, water intervals, Syngonium plant, recycling water, vegetative growth

Procedia PDF Downloads 81
684 Application of a Geomechanical Model to Justify the Exploitation of Bazhenov-Abalak Formation, Western Siberia

Authors: Yan Yusupov, Aleksandra Soldatova, Yaroslav Zaglyadin

Abstract:

The object of this work is the Bazhenov-Abalak unconventional formation (BAUF) of Western Siberia. On the basis of a geomechanical model (GMM), a methodology was developed for identifying sweet spot intervals and zones for drilling horizontal wells with hydraulic fracturing. Based on mechanical rock typification, eight mechanical rock types (MRT) have been identified. Sweet spot intervals are represented by the siliceous-carbonate (2), siliceous (5) and carbonate (8) MRTs, which have the greatest brittleness index (BRIT). A correlation has been established between the thickness of brittle intervals and the initial well production rates, which makes it possible to identify sweet spot zones for drilling horizontal wells with hydraulic fracturing. Brittle and ductile intervals are separated by a BRIT cut-off of 0.4, since wells located at points with BRIT < 0.4 have insignificant rates (less than 2 m³/day), whereas wells with an average BRIT in the BAUF of more than 0.4 reach industrial production rates. The next application of the GMM is associated with the instability of the overburden clay formation above the top of the BAUF. According to the wellbore stability analysis, the recommended mud weight for this formation must be no less than 1.53–1.55 g/cc. The optimal direction for horizontal wells corresponds to the azimuth of Shmin, equal to 70-80°.

Keywords: unconventional reservoirs, geomechanics, sweet spot zones, borehole stability

Procedia PDF Downloads 31
683 New Segmentation of Piecewise Linear Regression Models Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

Piecewise linear regression models are very flexible models for modeling data. When a piecewise linear regression model is fitted to data, its parameters are generally unknown. This paper studies the problem of parameter estimation for piecewise linear regression models. The method used to estimate the parameters is the Bayesian method, but the Bayes estimator cannot be found analytically. To overcome this problem, the reversible jump MCMC algorithm is proposed. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the parameters of the piecewise linear regression model. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise linear regression model.
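
A simplified, discretised birth-death sketch in the spirit of the sampler described above: breakpoints live on a fixed candidate grid, the segment coefficients are integrated out under a Gaussian prior, and the chain moves by adding or removing one breakpoint at a time. This is an illustration under stated assumptions (known noise variance, Poisson prior on the number of breakpoints, discrete breakpoint grid), not the paper's full continuous reversible jump algorithm.

```python
# Birth-death MCMC over breakpoint configurations for piecewise linear regression.
# Assumptions: noise variance known, coefficients ~ N(0, v) integrated out,
# Poisson prior on the number of breakpoints, breakpoints restricted to a grid.
import numpy as np
from math import comb, log
from scipy.stats import multivariate_normal, poisson

rng = np.random.default_rng(0)

# synthetic piecewise linear data with one true breakpoint at x = 0.5
x = np.sort(rng.uniform(0, 1, 120))
y = np.where(x < 0.5, 1.0 + 2.0 * x, 3.0 - 2.0 * (x - 0.5)) + rng.normal(0, 0.2, x.size)

sigma2, v, lam = 0.2 ** 2, 10.0, 1.0           # noise var, coefficient prior var, Poisson mean
grid = np.linspace(0.05, 0.95, 19)             # candidate breakpoint locations
G = grid.size

def log_post(cfg):
    """Log marginal likelihood plus log prior of a set of breakpoint grid indices."""
    bounds = np.concatenate(([x.min() - 1e-9], np.sort(grid[list(cfg)]), [x.max() + 1e-9]))
    ll = 0.0
    for a, b in zip(bounds[:-1], bounds[1:]):
        m = (x > a) & (x <= b)
        n = int(m.sum())
        if n == 0:
            continue
        X = np.column_stack((np.ones(n), x[m]))
        cov = sigma2 * np.eye(n) + v * X @ X.T         # coefficients integrated out
        ll += multivariate_normal.logpdf(y[m], mean=np.zeros(n), cov=cov)
    k = len(cfg)
    return ll + poisson.logpmf(k, lam) - log(comb(G, k))

def move_probs(k):
    """Probability of proposing a birth vs. a death at the current dimension k."""
    if k == 0:
        return 1.0, 0.0
    if k == G:
        return 0.0, 1.0
    return 0.5, 0.5

cfg, chain = set(), []
lp = log_post(cfg)
for _ in range(5000):
    k = len(cfg)
    p_b, p_d = move_probs(k)
    if rng.random() < p_b:                                   # birth: add a free grid point
        new = set(cfg) | {rng.choice(np.setdiff1d(np.arange(G), list(cfg)))}
        log_fwd = np.log(p_b) + np.log(1.0 / (G - k))
        log_rev = np.log(move_probs(k + 1)[1]) + np.log(1.0 / (k + 1))
    else:                                                    # death: remove one breakpoint
        new = set(cfg) - {rng.choice(list(cfg))}
        log_fwd = np.log(p_d) + np.log(1.0 / k)
        log_rev = np.log(move_probs(k - 1)[0]) + np.log(1.0 / (G - k + 1))
    lp_new = log_post(new)
    if np.log(rng.random()) < lp_new - lp + log_rev - log_fwd:   # Metropolis-Hastings step
        cfg, lp = new, lp_new
    chain.append(len(cfg))

ks = np.array(chain[1000:])                                  # discard burn-in
print("posterior over number of breakpoints:",
      {int(k): round(float((ks == k).mean()), 3) for k in np.unique(ks)})
```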

Keywords: regression, piecewise, Bayesian, reversible jump MCMC

Procedia PDF Downloads 490
682 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models. Academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability and innovation. The study of process capability indices (PCIs) is usually conducted assuming that the process under study is in statistical control and that independent observations are generated over time. However, in practice, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts, and even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts. When observations are autocorrelated, the classical control charts exhibit nonrandom patterns and a lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed when the data are independent and when they are autocorrelated. Approximate lower confidence limits for various Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are considered to demonstrate the results.
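
A sketch of the classical, independence-based interval estimates for Cp and Cpk computed on simulated AR(1) data, to illustrate the setting described above. The chi-square interval for Cp and Bissell's approximate lower limit for Cpk both assume independent observations, which is exactly what autocorrelation violates; the specification limits, AR(1) coefficient, and sample size are illustrative choices.

```python
# Classical Cp / Cpk interval estimates on simulated AR(1) process data.
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(1)
n, phi = 200, 0.6                          # sample size and AR(1) coefficient
LSL, USL, target_sd = 9.0, 11.0, 0.3

e = rng.normal(0, target_sd * np.sqrt(1 - phi ** 2), n)   # innovations for stationary sd
x = np.empty(n)
x[0] = 10.0
for i in range(1, n):                      # AR(1): x_t = mu + phi*(x_{t-1} - mu) + e_t
    x[i] = 10.0 + phi * (x[i - 1] - 10.0) + e[i]

mu_hat, s = x.mean(), x.std(ddof=1)
cp = (USL - LSL) / (6 * s)
cpk = min(USL - mu_hat, mu_hat - LSL) / (3 * s)

alpha = 0.05
cp_lo = cp * np.sqrt(chi2.ppf(alpha / 2, n - 1) / (n - 1))        # independence-based CI
cp_hi = cp * np.sqrt(chi2.ppf(1 - alpha / 2, n - 1) / (n - 1))
# Bissell's approximate lower confidence limit for Cpk (also assumes independence)
cpk_lcl = cpk * (1 - norm.ppf(1 - alpha) * np.sqrt(1 / (9 * n * cpk ** 2) + 1 / (2 * (n - 1))))

print(f"Cp = {cp:.3f}, 95% CI (independence) = [{cp_lo:.3f}, {cp_hi:.3f}]")
print(f"Cpk = {cpk:.3f}, Bissell lower limit = {cpk_lcl:.3f}")
```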

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 359
681 A Study of Permission-Based Malware Detection Using Machine Learning

Authors: Ratun Rahman, Rafid Islam, Akin Ahmed, Kamrul Hasan, Hasan Mahmud

Abstract:

Malware is becoming more prevalent, and several threat categories have risen dramatically in recent years. This paper provides a bird's-eye view of the world of malware analysis. The efficiency of five different machine learning methods (Naive Bayes, K-Nearest Neighbor, Decision Tree, Random Forest, and TensorFlow Decision Forest), combined with features picked from the retrieval of Android permissions, for categorizing applications as harmful or benign is investigated in this study. The dataset consists of 1,168 samples (602 malware and 566 benign Android applications), each described by 948 features (permissions). Using the permission-based dataset, the machine learning algorithms produce accuracy rates above 80%, except the Naive Bayes algorithm with 65% accuracy. Of the considered algorithms, TensorFlow Decision Forest performed best, with an accuracy of 90%.

Keywords: android malware detection, machine learning, malware, malware analysis

Procedia PDF Downloads 123
680 Agronomic Evaluation of Flax Cultivars (Linum Usitatissimum L.) in Response to Irrigation Intervals

Authors: Emad Rashwan, M. Mousa, Ayman EL Sabagh, Celaleddin Barutcular

Abstract:

Flax is a potential winter crop for Egypt that can be grown for both seed and fiber. The study was conducted during the two successive winter seasons of 2013/2014 and 2014/2015 at the experimental farm of El-Gemmeiza Agricultural Research Station, Agricultural Research Centre, Egypt. The objective of this work was to evaluate the effect of irrigation intervals (25, 35 and 45 days) on the seed yield and quality of flax cultivars (Sakha1, Giza9 and Giza10). The obtained results indicate highly significant differences among irrigation intervals for all studied traits, except oil percentage, which was not significant in either season. Irrigating flax plants every 35 days gave the maximum values for all characters. In contrast, irrigation every 45 days gave the minimum values for all studied characters. With respect to cultivars, significant differences in most yield and quality characters were found. Furthermore, the performance of the Sakha1 cultivar was superior in total plant height, main stem diameter, seed index, seed, oil, biological and straw yields/ha, as well as fiber length and fiber fineness, while the Giza9 and Giza10 cultivars were superior in fiber yield/ha and fiber percentage, respectively. The interactions between irrigation intervals and flax cultivars were highly significant for total plant height, main stem diameter, and seed, oil, biological and straw yields/ha. Based on the results, all flax cultivars recorded the maximum values for the major measured traits when irrigated every 35 days.

Keywords: flax, fiber, irrigation intervals, oil, seed yield

Procedia PDF Downloads 229
679 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

The piecewise polynomial regression model is a very flexible model for modeling data. When a piecewise polynomial regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem of the piecewise polynomial regression model. The method used to estimate the parameters is the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically, so the reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.

Keywords: piecewise regression, bayesian, reversible jump MCMC, segmentation

Procedia PDF Downloads 342
678 Intensity Analysis to Link Changes in Land-Use Pattern in the Abuakwa North and South Municipalities, Ghana, from 1986 to 2017

Authors: Isaac Kwaku Adu, Jacob Doku Tetteh, John Joseph Puthenkalam, Kwabena Effah Antwi

Abstract:

The continuous increase in population implies an increase in food demand. There is, therefore, the need to increase agricultural production and other forest products to ensure food security and economic development. This paper employs three-level intensity analysis to assess the total change of land use in two time intervals (1986-2002 and 2002-2017), the net change and swap, as well as gross gains and losses in the two intervals. The results revealed that the overall change in the 31-year period was greater in the second period (2002-2017). The agriculture and forest categories lost in the first period while the other land classes gained. However, in the second period agriculture and built-up increased greatly while forest, water bodies and thick bushes/shrubland experienced losses. The assessment revealed a reduction of forest in both periods, greater in the second period, and an expansion of agricultural land as the population increased. The pixels gaining built-up targeted agricultural land in both intervals; they also targeted thick bushes/shrubland and water bodies in the second period only. Built-up avoided forest, water bodies and thick bushes/shrubland in both intervals. To help develop the best land-use strategies/policies, further validation of the social factors is necessary.
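
A sketch of the interval level of intensity analysis: the annual change intensity in each interval is compared against the uniform intensity across the whole study period to decide which interval changed fast or slow. The two transition matrices below are made-up placeholders standing in for the 1986-2002 and 2002-2017 cross-tabulations, not the study's data; rows are the initial category and columns the final category.

```python
# Interval-level intensity analysis on placeholder cross-tabulation matrices (km^2).
import numpy as np

categories = ["forest", "agriculture", "built-up", "water", "shrubland"]
intervals = {"1986-2002": 16, "2002-2017": 15}          # interval durations in years

T = {
    "1986-2002": np.array([[520, 60, 10, 2, 8],
                           [30, 300, 20, 1, 9],
                           [0, 2, 48, 0, 0],
                           [1, 1, 0, 18, 0],
                           [15, 25, 5, 0, 55]], dtype=float),
    "2002-2017": np.array([[430, 90, 25, 2, 19],
                           [20, 310, 50, 1, 7],
                           [0, 1, 82, 0, 0],
                           [1, 2, 1, 16, 1],
                           [10, 25, 10, 0, 27]], dtype=float),
}

extent = T["1986-2002"].sum()                           # total study area (constant)
total_change = sum(M.sum() - np.trace(M) for M in T.values())
U = total_change / (extent * sum(intervals.values())) * 100   # uniform annual intensity

for name, M in T.items():
    change = M.sum() - np.trace(M)                      # off-diagonal area = area changed
    S = change / (extent * intervals[name]) * 100       # annual change intensity (%/yr)
    print(f"{name}: S = {S:.2f}%/yr ({'fast' if S > U else 'slow'} vs U = {U:.2f}%/yr)")
```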

Keywords: agricultural land, forest, Ghana, land-use, intensity analysis, remote sensing

Procedia PDF Downloads 117
677 Quasi–Periodicity of Tonic Intervals in Octave and Innovation of Themes in Music Compositions

Authors: R. C. Tyagi

Abstract:

Quasi-periodicity of the frequency intervals observed in the Shruti-based absolute scale of music has been used to graphically identify the anchor notes 'Vadi' and 'Samvadi', which are nodal points for expansion, elaboration and iteration of the emotional theme represented by the characteristic tonic arrangement in Raga compositions. This analysis leads to defining the tonic parameters in the octave, including the key-note frequency, the anchor notes of the tonic intervals, and the onset and range of quasi-periodicities as exponents of 2. Such uniformity of representation of characteristic data would facilitate computational analysis and synthesis of music compositions and also help develop noise suppression techniques. Criteria for tuning strings for compatibility with the placement of frets on fingerboards are discussed. Natural rhythmic cycles in music compositions are analytically shown to lie between 3 and 126 beats.

Keywords: absolute scale, anchor notes, computational analysis, frets, innovation, noise suppression, Quasi-periodicity, rhythmic cycle, tonic interval, Shruti

Procedia PDF Downloads 280
676 Safety Effect of Smart Right-Turn Design at Intersections

Authors: Upal Barua

Abstract:

The risk of severe crashes at high-speed right turns at intersections is a major safety concern these days, and the application of smart right-turns at intersections is increasing day by day to address this issue. The smart right-turn design consists of a narrower channelization angle of approximately 70°. This design increases the cone of vision of right-turning drivers towards crossing pedestrians as well as traffic on the cross-road. As part of the Safety Improvement Program in the Austin Transportation Department, several smart right-turns were constructed at high-crash intersections where high-speed right turns were found to be a contributing factor. This paper features the state-of-the-art techniques applied in planning, engineering, designing and constructing these smart right-turns, the key factors driving their success, and lessons learned in the process. This paper also presents the significant crash reductions achieved from the application of the smart right-turn design, estimated using the Empirical Bayes method. The results showed that smart right-turns can reduce overall right-turn crashes by 43% and severe right-turn crashes by 70%.
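
A hedged sketch of the Empirical Bayes before-after calculation underlying such crash-reduction estimates: the expected crash count at a treated intersection is a weighted blend of the site's observed history and a safety performance function (SPF) prediction. The SPF values, overdispersion parameter, and crash counts below are placeholders, not the Austin study's data.

```python
# Empirical Bayes before-after sketch with placeholder inputs.
def eb_expected(observed_before, spf_before, spf_after, k):
    """Expected after-period crashes had the right-turn NOT been rebuilt."""
    w = 1.0 / (1.0 + k * spf_before)                 # EB weight toward the SPF prediction
    eb_before = w * spf_before + (1.0 - w) * observed_before
    return eb_before * (spf_after / spf_before)      # scale to after-period conditions

observed_before = 14        # right-turn crashes observed before treatment (placeholder)
observed_after = 5          # crashes observed after the smart right-turn (placeholder)
expected_after = eb_expected(observed_before, spf_before=10.0, spf_after=9.0, k=0.3)

reduction = 1.0 - observed_after / expected_after
print(f"expected without treatment: {expected_after:.1f}, observed: {observed_after}")
print(f"estimated crash reduction: {reduction:.0%}")
```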

Keywords: smart right-turn, intersection, cone of vision, empirical Bayes method

Procedia PDF Downloads 236
675 Sentiment Analysis of Ensemble-Based Classifiers for E-Mail Data

Authors: Muthukumarasamy Govindarajan

Abstract:

Detection of unwanted, unsolicited mail, called spam, from email is an interesting area of research. It is necessary to evaluate the performance of any new spam classifier using standard data sets. Recently, ensemble-based classifiers have gained popularity in this domain. In this research work, an efficient email filtering approach based on ensemble methods is addressed for developing an accurate and sensitive spam classifier. The proposed approach employs Naive Bayes (NB), Support Vector Machine (SVM) and Genetic Algorithm (GA) as base classifiers along with different ensemble methods. The experimental results show that the ensemble classifier performed with greater accuracy than the individual classifiers, and the hybrid model results were found to be better than those of the combined models for the e-mail dataset. The proposed ensemble-based classifiers turn out to be good in terms of classification accuracy, which is considered an important criterion for building a robust spam classifier.

Keywords: accuracy, arcing, bagging, genetic algorithm, Naive Bayes, sentiment mining, support vector machine

Procedia PDF Downloads 114
674 Human Skin Identification Using a Specific mRNA Marker at Different Storage Durations

Authors: Abla A. Ali, Heba A. Abd El Razik, Nadia A. Kotb, Amany A. Bayoumi, Laila A. Rashed

Abstract:

The detection of human skin through mRNA-based profiling is a very useful tool for forensic investigations. The aim of this study was the definitive identification of human skin at different time intervals using the mRNA marker late cornified envelope gene 1C. Ten middle-aged healthy volunteers of both sexes were recruited for this study. Skin samples, with blood samples as controls, were taken from the volunteers to test for the presence of the targeted mRNA marker. Samples were kept in dry, dark conditions to be tested at different time intervals (24 hours, one week, three weeks and four weeks) for detection and relative quantification of the targeted marker by RT-PCR. The targeted marker could not be detected in the blood samples. The targeted marker showed the highest mean value after 24 hours (11.90 ± 2.42) and the lowest mean value (7.56 ± 2.56) after three weeks; no marker could be detected at four weeks. This study verified the high specificity and sensitivity of the mRNA marker in skin at different storage times of up to three weeks under the study conditions.

Keywords: human skin, late cornified envelope gene 1C, mRNA marker, time intervals

Procedia PDF Downloads 139
673 A Probabilistic Theory of the Buy-Low and Sell-High for Algorithmic Trading

Authors: Peter Shi

Abstract:

Algorithmic trading is a rapidly expanding domain within quantitative finance, constituting a substantial portion of trading volumes in the US financial market. The demand for rigorous and robust mathematical theories underpinning these trading algorithms is ever-growing. In this study, the author establishes a new stock market model that integrates the Efficient Market Hypothesis and statistical arbitrage. The model, for the first time, finds probabilistic relations between the rational price and the market price in terms of the conditional expectation. The theory consequently leads to a mathematical justification of the old market adage: buy low and sell high. The thresholds for 'low' and 'high' are precisely derived using a max-min operation on Bayes' error. This explicit connection harmonizes the Efficient Market Hypothesis and statistical arbitrage, demonstrating their compatibility in explaining market dynamics. The amalgamation represents a pioneering contribution to quantitative finance. The study culminates in comprehensive numerical tests using historical market data, affirming that the buy-low and sell-high algorithm derived from this theory significantly outperforms the general market over the long term in four out of six distinct market environments.

Keywords: efficient market hypothesis, behavioral finance, Bayes' decision, algorithmic trading, risk control, stock market

Procedia PDF Downloads 46
672 Applied Complement of Probability and Information Entropy for Prediction in Student Learning

Authors: Kennedy Efosa Ehimwenma, Sujatha Krishnamoorthy, Safiya Al‑Sharji

Abstract:

The probability of an event lies in the interval [0, 1], a value determined by the number of outcomes of the event in a sample space S. The probability Pr(A) that an event A will never occur is 0, and the probability Pr(B) that an event B will certainly occur is 1, which makes the non-occurrence of A and the occurrence of B certainties. Furthermore, the sum of probabilities Pr(E₁) + Pr(E₂) + … + Pr(Eₙ) of a finite set of events in a given sample space S equals 1. Conversely, the difference between the probabilities of two events that will certainly occur is 0. This paper first discusses Bayes' rule, the complement of probability, and the difference of probabilities for occurrences of learning events before applying them in the prediction of learning objects in student learning. Given that the probabilities sum to 1, to make a recommendation for student learning, this paper proposes that the difference between argmax Pr(S) and the probability of student performance quantifies the weight of learning objects for students. Using a skill-set dataset, the computational procedure demonstrates i) the probability of skill-set events that have occurred that would lead to higher-level learning; ii) the probability of the events that have not occurred that require subject-matter relearning; iii) the accuracy of the decision tree in the prediction of student performance into class labels; and iv) information entropy about the skill-set data and its implication for student cognitive performance and recommendation of learning.
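
A small sketch of the quantities discussed above: the complements of skill-set event probabilities, the gap between the largest probability and each skill's probability as a learning-object weight, and the Shannon entropy of the skill-set distribution. The probabilities themselves are invented placeholders.

```python
# Complement of probability, argmax-gap weights, and information entropy (placeholders).
import math

pr = {"skill_A": 0.55, "skill_B": 0.30, "skill_C": 0.15}     # placeholder Pr(Ei), sums to 1

complements = {s: 1.0 - p for s, p in pr.items()}            # Pr(not Ei): relearning need
p_max = max(pr.values())                                     # argmax Pr(S)
weights = {s: p_max - p for s, p in pr.items()}              # proposed learning-object weight

entropy = -sum(p * math.log2(p) for p in pr.values() if p > 0)

print("complements:", complements)
print("weights (argmax Pr(S) - Pr(skill)):", weights)
print(f"information entropy of the skill-set distribution: {entropy:.3f} bits")
```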

Keywords: complement of probability, Bayes’ rule, prediction, pre-assessments, computational education, information theory

Procedia PDF Downloads 129
671 Predictability of Pupil Mydriasis as a Biomarker for Diabetes

Authors: Naveen Kumar Challa, Pavan Verıkıcherla, Madhubalan, Ashısh Sharma

Abstract:

Aim: The aim of the study was to find whether any difference exists in pupil mydriasis, measured with Orbscan, between non-diabetic and type 2 diabetic patients at various intervals after instillation of Tropicamide 0.8% and Phenylephrine 5%. Methods: An observational study was conducted at a tertiary care eye hospital from September 2014 to March 2015. 240 eyes from 120 patients (40 non-diabetic, 80 diabetic) were dilated with Tropicamide 0.8% and Phenylephrine 5%. One drop of the drug was instilled twice; the second drop was instilled 20 minutes after the first. In both groups, pupil diameter was measured with Orbscan before instillation of the drops and at 15, 30, 45 and 60 minutes after instillation of the first drop. Results: The mean age of the non-diabetic group was 48.67 ± 7.93 years and that of the diabetic group was 59.97 ± 8.77 years. The mean duration of diabetes was 7.01 ± 5.05 years. The mean pupil diameter measured with Orbscan before instillation of the drops and at 15, 30, 45 and 60 minutes after instillation of the first drop was 4.18 ± 0.64 mm, 6.15 ± 0.41 mm, 7.76 ± 0.34 mm, 9.59 ± 0.30 mm and 9.97 ± 0.10 mm, respectively, in the non-diabetic group, and 4.00 ± 0.56 mm, 5.53 ± 0.52 mm, 7.018 ± 0.58 mm, 8.25 ± 0.51 mm and 9.18 ± 0.46 mm, respectively, in the diabetic group. The difference between the mean pupil diameters of the non-diabetic and diabetic groups was significant (P < 0.01) at all intervals except before dilatation. There was a significant negative correlation (r = 0.78 – 0.92) between the duration of diabetes and pupil dilatation at all intervals after instillation of the drops. There was also a significant difference (P < 0.005) in mean pupil diameter between diabetic subjects without retinopathy and those with diabetic retinopathy at all intervals after instillation of the drops. Conclusion: People attending an eye clinic whose pupil mydriasis values fall below normal may be referred for diabetic evaluation. If normative data are established for pupil size in the Indian population using Orbscan, then values falling below the normative data could be a predictor of diabetes. This would in turn help ophthalmologists detect diabetes at an early stage and prevent the complications resulting from it.

Keywords: diabetes mellitus, pupil diameter, orbscan, tropicamide

Procedia PDF Downloads 494
670 Effect of Different Irrigation Intervals on Protein and Gel Production of Aloe Vera (Aloe Barbadensis M.) in Iran

Authors: Seyed Mohammad Hosein Al Omrani Nejad, Ali Rezvani Aghdam

Abstract:

This study was done in order to evaluate the effect of different irrigation intervals on the amount of protein and gel production in Aloe vera, a traditional medicinal plant. Plants were planted in a greenhouse and irrigated according to accumulative pan evaporation (APE). The treatments were 20, 40, 60, 80, 100, 120, 140, 160, 180 and 200 mm APE, denoted W1, W2, W3, W4, W5, W6, W7, W8, W9 and W10, respectively. The amounts of protein and gel produced were measured separately. Results showed that the highest protein content and gel fresh weight were obtained from plants irrigated at W6 and W7, respectively. According to these results, it can be recommended that if plants are irrigated when APE reaches 120 and 140 mm, as determined by the Class A evaporation pan method, gel and protein production would be suitable in the north of Khuzestan province under limited irrigation conditions.

Keywords: irrigation, protein, gel, aloe vera, Iran

Procedia PDF Downloads 359
669 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It defines whether companies are successful or in decline. A thorough understanding of it is important - many companies have whole divisions dedicated to analysis of both their own stock and that of rivaling companies. Linking the world of finance and artificial intelligence (AI), especially the stock market, has been a relatively recent development. Predicting how stocks will do considering all external factors and previous data has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task a lot easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural network-based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. For the accuracy of the testing set, looking at four different models - linear regression, neural network, decision tree, and naïve Bayes - on the different stocks, Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan & Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best. For the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set had similar results, except that the decision tree model was perfect, with complete accuracy in its predictions, which indicates that the decision tree model likely overfitted the training set.

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 61
668 Analysis of Sweat Evaporation and Heat Transfer on Skin Surface: A Pointwise Numerical Study

Authors: Utsav Swarnkar, Rabi Pathak, Rina Maiti

Abstract:

This study aims to investigate the thermoregulatory role of sweating by comprehensively analyzing the evaporation process and its thermal cooling impact on local skin temperature at various time intervals. Traditional experimental methods struggle to fully capture these intricate phenomena; therefore, numerical simulations play a crucial role in assessing sweat production rates and the associated thermal cooling. This research utilizes transient computational fluid dynamics (CFD) to enhance our understanding of the evaporative cooling process on human skin. We conducted a simulation employing the k-ω SST turbulence model, in which sweat evaporates over the skin surface and temperatures at different locations are observed at particular time intervals and their effects explained. Sweat evaporation was monitored on the skin surface after the commencement of the simulation, and temperature fluctuations at specific points were recorded over successive time intervals. It was noted that points situated closer to the periphery of the droplets exhibited higher levels of heat transfer and lower temperatures, whereas points within the droplets displayed contrasting trends.

Keywords: CFD, sweat, evaporation, multiphase flow, local heat loss

Procedia PDF Downloads 32
667 Study on the Pavement Structural Performance of Highways in the North China Region Based on Pavement Distress and Ground Penetrating Radar

Authors: Mingwei Yi, Liujie Guo, Zongjun Pan, Xiang Lin, Xiaoming Yi

Abstract:

With the rapid expansion of road construction mileage in China, the scale of road maintenance needs has concurrently escalated. As the service life of roads extends, the design of pavement repair and maintenance emerges as a crucial component in preserving the excellent performance of the pavement. The remaining service life of asphalt pavement structure is a vital parameter in the lifecycle maintenance design of asphalt pavements. Based on an analysis of pavement structural integrity, this study introduces a characterization and assessment of the remaining life of existing asphalt pavement structures. It proposes indicators such as the transverse crack spacing and the length of longitudinal cracks. The transverse crack spacing decreases with an increase in maintenance intervals and with the extended use of semi-rigid base layer structures, although this trend becomes less pronounced after maintenance intervals exceed 4 years. The length of longitudinal cracks increases with longer maintenance intervals, but this trend weakens after five years. This system can support the enhancement of standardization and scientific design in highway maintenance decision-making processes.

Keywords: structural integrity, highways, pavement evaluation, asphalt concrete pavement

Procedia PDF Downloads 32
666 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Instead of plain information, classifying different aspects of browsing such as bookmarks, history, and downloads into useful categories would improve and enhance the user's experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources, has security constraints, and may miss contextual data during classification. On-device classification solves many such problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and better privacy and security. This approach provides more relevant results compared to current standalone solutions because it uses content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual and secure data that cannot be replicated. The proposal extracts different features of the webpage and runs them through an algorithm to classify the page into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms such as Support Vector Machines and Neural Networks: Naive Bayes classification requires a small memory footprint and little computation, suitable for the smartphone environment. This solution can also partition the model into multiple chunks, which reduces memory usage compared to loading a complete model. Classification of webpages through the integrated engine is faster, more relevant and more energy efficient than standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware; the engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with the standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to use the classification for different use cases to enhance the browsing experience.
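
A minimal sketch of the kind of multinomial naive Bayes webpage classifier described above, trained on webpage text with the eight listed categories and a 70/30 split. The CSV file is a placeholder; the real solution extracts DOM-tree features inside the browser rather than reading a file, and the model-partitioning logic is not shown here.

```python
# Multinomial naive Bayes webpage categorization sketch (placeholder data file).
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

CATEGORIES = ["education", "games", "health", "entertainment",
              "news", "shopping", "sports", "travel"]

df = pd.read_csv("webpages.csv")                 # placeholder: columns "text", "category"
df = df[df["category"].isin(CATEGORIES)]
X = CountVectorizer(max_features=20000).fit_transform(df["text"])
y = df["category"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
nb = MultinomialNB().fit(X_tr, y_tr)             # small memory footprint, cheap to evaluate
print("accuracy:", accuracy_score(y_te, nb.predict(X_te)))
```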

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 136
665 Detecting Venomous Files in IDS Using an Approach Based on Data Mining Algorithm

Authors: Sukhleen Kaur

Abstract:

In the security landscape, the Intrusion Detection System (IDS) has become an important component and has received increasing attention in recent years. An IDS is one of the effective ways to detect different kinds of attacks and malicious code in a network and helps us to secure the network. Data mining techniques can be applied to IDS; they analyse large amounts of data, give better results, and can contribute to improving intrusion detection by adding a level of focus to anomaly detection. So far, most studies have been carried out on finding attacks, but this paper detects malicious files. Some intruders do not attack directly but hide harmful code inside files, or may corrupt those files and attack the system. These files are detected according to some defined parameters, which form two lists of files: normal files and harmful files. After that, data mining is performed. In this paper a hybrid classifier combining the Naive Bayes and RIPPER classification methods has been used. The results show how an uploaded file in the database is tested against the parameters and then characterised as either a normal or a harmful file, after which mining is performed. Moreover, when a user tries to mine a harmful file, an exception is generated indicating that mining cannot be performed on corrupted or harmful files.

Keywords: data mining, association, classification, clustering, decision tree, intrusion detection system, misuse detection, anomaly detection, naive Bayes, ripper

Procedia PDF Downloads 392
664 Effects of Irrigation Scheduling and Soil Management on Maize (Zea mays L.) Yield in Guinea Savannah Zone of Nigeria

Authors: I. Alhassan, A. M. Saddiq, A. G. Gashua, K. K. Gwio-Kura

Abstract:

The main objective of any irrigation program is the development of an efficient water management system to sustain crop growth and development and avoid physiological water stress in the growing plants. A field experiment to evaluate the effects of some soil moisture conservation practices on the yield and water use efficiency (WUE) of maize was carried out in three locations (Mubi and Yola in the northern Guinea savannah and Ganye in the southern Guinea savannah of Adamawa State, Nigeria) during the dry seasons of 2013 and 2014. The experiment consisted of three irrigation levels (7-, 10- and 12-day irrigation intervals), two levels of mulch (mulched and un-mulched) and two tillage practices (no tillage and minimum tillage), arranged in a randomized complete block design with a split-split plot arrangement and replicated three times. The Blaney-Criddle method was used for estimating crop evapotranspiration. The results indicated that the seven-day irrigation interval and the mulched treatment had a significant effect (P < 0.05) on grain yield and water use efficiency in all locations, whereas the main effect of tillage on grain yield and WUE was not significant (P > 0.05). The interaction effects of irrigation and mulch were significant (P < 0.05) on grain yield and WUE at Mubi and Yola. Generally, higher grain yield and WUE were recorded with mulching and seven-day irrigation intervals, whereas lower values were recorded without mulch at 12-day irrigation intervals; tillage exerted little influence on yield and WUE. Yields at Ganye were generally higher than those recorded at Mubi and Yola, and the results showed that an irrigation interval of 10 days with mulching could be adopted for the Ganye area, while a seven-day interval is more appropriate for Mubi and Yola.
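
A sketch of the simplified (FAO) Blaney-Criddle estimate of reference crop evapotranspiration used for this kind of irrigation scheduling: ETo = p(0.46·T_mean + 8) mm/day, where p is the mean daily percentage of annual daytime hours. The temperature and p values below are illustrative placeholders, not measurements from the study sites.

```python
# Simplified Blaney-Criddle reference evapotranspiration and per-interval water demand.
def blaney_criddle_eto(t_mean_c: float, p: float) -> float:
    """Reference evapotranspiration (mm/day) from mean temperature and daytime-hours %."""
    return p * (0.46 * t_mean_c + 8.0)

# illustrative dry-season month in the Guinea savannah (placeholder values)
t_mean, p = 30.0, 0.29
eto = blaney_criddle_eto(t_mean, p)
for interval_days in (7, 10, 12):
    print(f"{interval_days}-day interval: ~{eto * interval_days:.0f} mm ETo to replace")
```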

Keywords: irrigation, maize, mulching, tillage, savanna

Procedia PDF Downloads 185
663 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms

Authors: Rikson Gultom

Abstract:

Hate speech and abusive language on social media are difficult to detect; usually, they are detected only after becoming viral in cyberspace, which is of course too late for prevention. An early detection system with fairly good accuracy is needed to reduce conflicts in society caused by social media posts that attack individuals, groups, and the government in Indonesia. The purpose of this study is to find an early detection model for the Twitter social media platform, using the machine learning method with the highest accuracy among several methods studied. In this study, the support vector machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the support vector machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table of the accuracy of the hate speech and abusive language detection models, presented as a graph of the accuracy of the six algorithms on the Indonesian-language Twitter dataset, and identified the best model with the highest accuracy.
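
A hedged sketch of genetic-algorithm optimisation of an SVM, in the spirit of the SVM-GA variant: a tiny GA searches over C and gamma, scoring candidates by cross-validated accuracy on TF-IDF features. The tweet file and column names are placeholders, and the population size, generation count, and mutation scale are illustrative rather than the study's settings.

```python
# Tiny genetic algorithm (selection, crossover, mutation) over SVM hyperparameters.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
df = pd.read_csv("tweets.csv")                          # placeholder: columns "text", "label"
X = TfidfVectorizer(max_features=5000).fit_transform(df["text"])
y = df["label"]

def fitness(genome):
    C, gamma = 10 ** genome[0], 10 ** genome[1]         # genome holds log10(C), log10(gamma)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

pop = rng.uniform([-1, -4], [3, 0], size=(8, 2))        # initial population of 8 genomes
for gen in range(5):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)][-4:]              # selection: keep the best half
    idx_a, idx_b = rng.integers(0, 4, (2, 4))
    children = (parents[idx_a] + parents[idx_b]) / 2 + rng.normal(0, 0.3, (4, 2))  # crossover + mutation
    pop = np.vstack([parents, children])
    print(f"generation {gen}: best CV accuracy = {scores.max():.3f}")

best = pop[np.argmax([fitness(g) for g in pop])]
print("best C, gamma:", 10 ** best[0], 10 ** best[1])
```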

Keywords: abusive language, hate speech, machine learning, optimization, social media

Procedia PDF Downloads 102