Search results for: random graph
2135 Stock Price Prediction with 'Earnings' Conference Call Sentiment
Authors: Sungzoon Cho, Hye Jin Lee, Sungwhan Jeon, Dongyoung Min, Sungwon Lyu
Abstract:
Major public corporations worldwide use conference calls to report their quarterly earnings. These 'earnings' conference calls allow for questions from stock analysts. We investigated whether sentiment can be identified from the call script and used to predict stock price movement. We analyzed call scripts from six companies, two each from Korea, China, and Indonesia, over the six years 2011Q1–2017Q2. A random forest using frequency-based sentiment scores from the Loughran-McDonald dictionary did better than a control model with only financial indicators. When stock prices went up 20 days after the earnings release, our model predicted correctly 77% of the time. When the model predicted 'up,' actual stock prices went up 65% of the time. This preliminary result encourages us to investigate advanced sentiment scoring methodologies such as topic modeling, auto-encoders, and word2vec variants.
Keywords: earnings call script, random forest, sentiment analysis, stock price prediction
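A minimal sketch of this setup with scikit-learn on synthetic placeholder data; the feature names, dataset shapes, and split are assumptions, since the abstract does not specify the exact financial indicators or sentiment-score construction. Recall corresponds to the reported 77% (share of actual 'up' moves caught), precision to the 65% (share of 'up' predictions that were right).

```python
# Sketch only: random placeholder data stands in for the paper's features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 150
X_financial = rng.normal(size=(n, 4))      # hypothetical financial indicators
X_sentiment = rng.normal(size=(n, 2))      # frequency-based pos/neg scores
y = (rng.random(n) > 0.5).astype(int)      # 1 = price up 20 days later

X_full = np.hstack([X_financial, X_sentiment])

control = RandomForestClassifier(n_estimators=200, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
control.fit(X_financial[:100], y[:100])    # financial indicators only
model.fit(X_full[:100], y[:100])           # plus sentiment scores

pred = model.predict(X_full[100:])
print("control acc:", control.score(X_financial[100:], y[100:]))
print("model acc:  ", model.score(X_full[100:], y[100:]))
print("recall:   ", recall_score(y[100:], pred, zero_division=0))
print("precision:", precision_score(y[100:], pred, zero_division=0))
```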
2134 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks
Authors: Yen-Luan Chen
Abstract:
Maintenance has a great impact on production capacity and product quality, and therefore it deserves continuous improvement. A maintenance procedure performed before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals with unequal lengths, is one of the commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working time. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with a varied intensity function in each maintenance interval. As a shock occurs, the system suffers one of two types of failure with number-dependent probabilities: a type-I (minor) failure, which is rectified by a minimal repair, or a type-II (catastrophic) failure, which is removed by corrective maintenance (CM). Imperfect maintenance is carried out to improve the system failure characteristic due to the altered shock process. Under the sequential preventive maintenance-last (PML) policy, the system is maintained before any CM occurs, at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last. At the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working time and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically and characterized in terms of its existence and uniqueness. The proposed models provide a general framework for analyzing maintenance policies in reliability theory.
Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability
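The full PML model is involved; as a toy illustration of the cost-rate trade-off it optimizes, the sketch below minimizes the classic single-interval PM cost rate under NHPP minimal repairs. This is a textbook special case, not the paper's model (which adds shocks, random working times, and imperfect maintenance), and all cost and intensity parameters are assumed.

```python
# Toy model: mean cost rate C(T) = (c_pm + c_mr * E[repairs in (0,T]]) / T
# with Weibull-type NHPP intensity lambda(t) = a*b*t**(b-1).
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 0.02, 2.5            # assumed intensity parameters
c_pm, c_mr = 50.0, 12.0     # assumed PM and minimal-repair costs

def mean_cost_rate(T):
    expected_repairs = a * T**b        # integral of lambda(t) over [0, T]
    return (c_pm + c_mr * expected_repairs) / T

res = minimize_scalar(mean_cost_rate, bounds=(0.1, 100.0), method="bounded")
T_star = (c_pm / (c_mr * a * (b - 1)))**(1 / b)   # closed-form optimum check
print(f"numerical T* = {res.x:.3f}, analytical T* = {T_star:.3f}")
```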
2133 Classification for Obstructive Sleep Apnea Syndrome Based on Random Forest
Authors: Cheng-Yu Tsai, Wen-Te Liu, Shin-Mei Hsu, Yin-Tzu Lin, Chi Wu
Abstract:
Background: Obstructive sleep apnea syndrome (OSAS) is a common respiratory disorder during sleep, and body parameters have been identified as highly predictive of OSAS severity. However, the effects of body parameters on OSAS severity remain unclear. Objective: The objective of this study is to establish a prediction model for OSAS using body parameters and to investigate their effects on OSAS. Methodologies: Severity was quantified by polysomnography as the mean hourly number of dips in oxygen saturation greater than 3% during examination at a hospital in New Taipei City (Taiwan). Four levels of OSAS severity were classified by the apnea and hypopnea index (AHI) according to the American Academy of Sleep Medicine (AASM) guideline. Body parameters, including neck circumference, waist size, and body mass index (BMI), were obtained from a questionnaire. The subjects were then divided into two groups: a training group used to build the random forest (RF) predictor and a test group used to evaluate classification accuracy. Results: 3,330 subjects who had undergone polysomnography for evaluating OSAS severity were recruited in this study. An RF of 1,000 trees correctly classified 79.94% of test cases. When further evaluated on the test cohort, the RF identified waist size and BMI as the most important factors in OSAS. Conclusion: It is possible to prescreen patients by body parameters, which can pre-evaluate health risks.
Keywords: apnea and hypopnea index, body parameters, obstructive sleep apnea syndrome, random forest
2132 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research
Authors: Christoph Opperer
Abstract:
Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, texture, etc., is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience within physical space – with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as a visual representation of areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view. This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.
Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images
2131 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data
Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang
Abstract:
Background and Significance of the Study: Congenital heart defects (CHDs) are the most common malformation at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications, such as DNA methylation, are thought to contribute to the pathogenesis of congenital heart defects. At present, no DNA methylation biomarkers are used for early detection of CHDs. Existing CHD diagnostic techniques are time-consuming and costly and can only be used to diagnose CHDs after an infant is born. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs, with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome-wide DNA methylation profiles of 24 infants diagnosed with congenital heart defects and 24 healthy infants without congenital heart defects. Primary pre-processing was conducted using the RnBeads and limma packages. The top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functions of the selected genes with high sensitivity and specificity were then assessed in molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both training and validation data by random forests, with an average sensitivity and specificity of 85% and 95%. GO analyses of the top 600 genes showed that these putative differentially methylated genes were primarily associated with regulation of lipid metabolic processes, protein-containing complex localization, and the Notch signalling pathway. The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects.
Keywords: biomarker, congenital heart defects, DNA methylation, random forest
2130 Nonlinear Vibration of FGM Plates Subjected to Acoustic Load in Thermal Environment Using Finite Element Modal Reduction Method
Authors: Hassan Parandvar, Mehrdad Farid
Abstract:
In this paper, a finite element model is presented for large-amplitude vibration of functionally graded material (FGM) plates subjected to combined random pressure and thermal load. The material properties of the plates are assumed to vary continuously in the thickness direction by a simple power-law distribution in terms of the volume fractions of the constituents. The material properties depend on the temperature, whose distribution along the thickness can be expressed explicitly. The von Karman large-deflection strain-displacement relations and the extended Hamilton's principle are used to obtain the governing equations of motion in structural node degrees of freedom (DOF) using the finite element method. A three-node triangular Mindlin plate element with shear correction factor is used. The nonlinear equations of motion in structural degrees of freedom are reduced by the modal reduction method, and the reduced equations of motion are solved numerically by a fourth-order Runge-Kutta scheme. In this study, the random pressure is generated using a Monte Carlo method. The modeling is verified, and the nonlinear dynamic response of the FGM plates is studied for various values of volume fraction and sound pressure level under different thermal loads. Snap-through behavior of the FGM plates is also studied.
Keywords: nonlinear vibration, finite element method, functionally graded material (FGM) plates, snap-through, random vibration, thermal effect
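As a sketch of the solution step, the snippet below integrates a single reduced modal equation with a fourth-order Runge-Kutta scheme. A Duffing-type oscillator with a negative linear stiffness is assumed here as a common single-mode stand-in for a thermally buckled, snap-through-capable plate; the Monte Carlo pressure and all numerical parameters are illustrative, not the paper's values.

```python
# RK4 integration of q'' + 2*zeta*w0*q' - w0^2*q + beta*q^3 = p(t)
# (negative linear stiffness -> two stable wells -> snap-through).
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 1e-3, 20000
zeta, w0, beta = 0.02, 2 * np.pi * 10, 1e4   # assumed damping, freq, cubic term
p = rng.normal(scale=50.0, size=steps)       # Monte Carlo random pressure

def f(t, y):
    q, qd = y
    i = min(int(t / dt), steps - 1)
    qdd = -2 * zeta * w0 * qd + w0**2 * q - beta * q**3 + p[i]
    return np.array([qd, qdd])

y = np.array([0.01, 0.0])                    # start near one buckled well
for k in range(steps - 1):
    t = k * dt
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print("final modal amplitude:", y[0])
```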
2129 Fast and Robust Long-term Tracking with Effective Searching Model
Authors: Thang V. Kieu, Long P. Nguyen
Abstract:
Kernelized correlation filter (KCF) based trackers have gained a lot of attention recently because of their accuracy and fast calculation speed. However, the algorithm is not robust in cases where the object is lost through a sudden change of direction, occlusion, or leaving the field of view. In order to improve KCF performance in long-term tracking, this paper proposes an anomaly detection method that warns of target loss by analyzing the response map of each frame, and a classification algorithm based on random ferns for a reliable target re-locating mechanism. Tested on the Visual Tracker Benchmark and Visual Object Tracking datasets, the proposed algorithm achieved precision and success rates 2.92 and 2.61 times higher, respectively, than the original KCF algorithm. Moreover, the proposed tracker handles occlusion better than many state-of-the-art long-term tracking methods while running at 60 frames per second.
Keywords: correlation filter, long-term tracking, random fern, real-time tracking
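The abstract does not state the exact anomaly statistic used on the response map; a common confidence measure for correlation filter trackers is the peak-to-sidelobe ratio (PSR), sketched below as one plausible target-loss test.

```python
# PSR of a correlation response map: a sharp, isolated peak gives a high
# PSR (confident track); a flat/noisy map gives a low PSR (likely lost).
import numpy as np

def psr(response, exclude=5):
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

rng = np.random.default_rng(2)
sharp = rng.normal(0, 0.01, (64, 64)); sharp[32, 32] = 1.0  # confident track
flat = rng.normal(0, 0.2, (64, 64))                         # likely lost

for name, r in [("tracking", sharp), ("lost?", flat)]:
    print(name, "PSR =", round(psr(r), 1))  # low PSR -> trigger re-detection
```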
2128 Aerodynamic Investigation of Baseline-IV Bird-Inspired BWB Aircraft Design: Improvements over Baseline-III BWB
Authors: C. M. Nur Syazwani, M. K. Ahmad Imran, Rizal E. M. Nasir
Abstract:
The study of BWB UVs at UiTM began in 2005, and three designs have been studied and published. The most recent, Baseline-III, is inspired by birds: it has the features and aerodynamic behaviour of cruising birds, without flapping capability, and its planform and configuration are similar to a bird's. Baseline-III has major flaws, particularly its low lift-to-drag ratio, its stability, and its limited controllability. The new design, known as Baseline-IV, replaces the straight, swept wing with a delta wing and has a broader tail than Baseline-III's. The objective of the study is to investigate the aerodynamics of the Baseline-IV bird-inspired BWB aircraft, which is achieved by theoretical calculation and wind tunnel experiments. The results show that the theoretical and wind-tunnel curves of CL and CD versus alpha for Baseline-IV are quite similar to each other in both slope and values. Baseline-IV has higher lift coefficients over a wide range of angles of attack than Baseline-III. Baseline-IV also has a higher maximum lift coefficient, a higher maximum lift-to-drag ratio, and lower parasite drag. It has a stable pitch-moment-versus-lift slope, but a negative moment at zero lift for a zero angle-of-attack tail setting. At high angles of attack, Baseline-IV does not show the stability reversal seen in Baseline-III. Baseline-IV is thus proven to improve on Baseline-III in terms of lift, lift-to-drag ratio, and pitch-moment stability at high angles of attack.
Keywords: blended wing-body, bird-inspired blended wing-body, aerodynamic, stability
2127 The Effect of Spatial Variability on Axial Pile Design of Closed Ended Piles in Sand
Authors: Cormac Reale, Luke J. Prendergast, Kenneth Gavin
Abstract:
While significant improvements have been made in axial pile design methods over recent years, the influence of the soil's natural variability has not been adequately accounted for within them. Soil variability is a crucial parameter to consider, as it can account for large variations in pile capacity across the same site. This paper seeks to address this knowledge deficit by demonstrating how soil spatial variability can be accommodated into existing cone penetration test (CPT) based pile design methods, in the form of layered non-homogeneous random fields. These random fields model the scope of a given property's variance and define how it varies spatially. A Monte Carlo analysis of the pile will be performed taking into account parameter uncertainty and spatial variability, described using the measured scales of fluctuation. The results will be discussed in light of Eurocode 7, and the effect of spatial averaging on design capacities will be analysed.
Keywords: pile axial design, reliability, spatial variability, CPT
2126 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict future, or otherwise unknown, events or outcomes on a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big-data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge numbers of transactions can be represented and processed efficiently. For demonstration, a total of 13,254 metabolic syndrome training observations were fed into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors associated with, for example, sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, were intentionally included to gain predictive-analytics insights on variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules were then validated with an external testing dataset of 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a part of the observations, such as bootstrap resampling with an appropriate sample size.
Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
2125 Voxel Models as Input for Heat Transfer Simulations with Siemens NX Based on X-Ray Microtomography Images of Random Fibre Reinforced Composites
Authors: Steven Latré, Frederik Desplentere, Ilya Straumit, Stepan V. Lomov
Abstract:
A method is proposed to create three-dimensional finite element models representing fibre reinforced insulation materials for the simulation software Siemens NX. VoxTex software, a tool for quantification of µCT images of fibrous materials, is used to transform microtomography images of random fibre reinforced composites into finite element models. An automatic tool was developed to import the models into the thermal solver module of Siemens NX. The paper describes the numerical tools used for the image quantification and the transformation, and illustrates them on several thermal simulations of fibre reinforced insulation blankets filled with fillers of low thermal conductivity. The calculated thermal conductivity is validated by comparison with experimental data.
Keywords: analysis, modelling, thermal, voxel
2124 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations of this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, no quantum circuit implementation of these algorithms has been created, to the best of our knowledge. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobabilistic superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, a node index calculator, a uniqueness checker, and a comparator, all created using only quantum Toffoli gates, including its special forms, the Feynman and Pauli-X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover's algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
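The paper's implementation is a Q# circuit; as a language-agnostic illustration of the threshold-lowering Grover loop it describes, here is a small classical NumPy simulation of amplitude amplification used for minimum finding. Random integers stand in for the edge-encoded tour costs, so this shows only the control flow, not the oracle circuitry.

```python
# Classical simulation of Grover minimum finding: the "oracle" phase-flips
# basis states cheaper than the current best; diffusion reflects about the
# uniform state; sampling then yields a cheaper state with high probability.
import numpy as np

rng = np.random.default_rng(3)
n_qubits = 6
N = 2**n_qubits
cost = rng.integers(10, 100, size=N).astype(float)  # toy cost per basis state

def grover_sample(marked):
    state = np.full(N, 1 / np.sqrt(N))               # Hadamard on every qubit
    k = max(1, int(round(np.pi / 4 * np.sqrt(N / marked.sum()))))
    for _ in range(k):
        state[marked] *= -1                          # oracle: phase flip
        state = 2 * state.mean() - state             # diffusion about the mean
    probs = state**2
    return rng.choice(N, p=probs / probs.sum())

best = rng.integers(N)                               # random initial threshold
while True:
    marked = cost < cost[best]
    if not marked.any():
        break                                        # no cheaper state remains
    cand = grover_sample(marked)
    if cost[cand] < cost[best]:
        best = cand                                  # lower the threshold
print("minimum found:", cost[best], "true minimum:", cost.min())
```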
2123 Reduced Power Consumption by Randomization for DSI3
Authors: David Levy
Abstract:
The newly released Distributed System Interface 3 (DSI3) bus standard specification defines 3 modulation levels from which 16 valid symbols are coded. This structure creates data-dependent power consumption variations of a factor of more than 2 between minimum and maximum. The power generation unit therefore has to be designed for the worst-case maximum consumption at all times. This paper proposes a method to reduce both the average and the worst-case current consumption. The transmitter randomizes the data using several pseudo-random sequences, estimates the energy consumption of the generated frames, and selects for transmission the one that consumes the least. The transmitter also prepends the index of the pseudo-random sequence, which is not randomized, to allow the receiver to recover the original data using the correct sequence. We show that in the case where the frame occupies most of the DSI3 synchronization period, we achieve an average power consumption reduction of up to 13%, while the worst-case power consumption is reduced by 17.7%.
Keywords: DSI3, energy, power consumption, randomization
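A behavioral sketch of the selection mechanism in Python, with assumed per-symbol energy weights and a simplified 4-level symbol alphabet (the real DSI3 coding maps 16 symbols onto 3 modulation levels); the sequence count and frame length are also assumptions.

```python
# XOR the payload with each candidate PRNG sequence, estimate each frame's
# energy, transmit the cheapest one with its (unscrambled) sequence index
# prepended so the receiver can de-randomize.
import numpy as np

rng = np.random.default_rng(4)
N_SEQ = 8
SYMBOL_ENERGY = np.array([1.0, 1.5, 2.0, 2.5])    # assumed per-symbol costs

def frame_energy(bits):
    symbols = bits.reshape(-1, 2) @ np.array([2, 1])   # bit pairs -> 0..3
    return SYMBOL_ENERGY[symbols].sum()

payload = rng.integers(0, 2, size=64)
sequences = rng.integers(0, 2, size=(N_SEQ, 64))  # fixed, known to both sides

candidates = payload ^ sequences                  # one candidate per sequence
energies = [frame_energy(c) for c in candidates]
best = int(np.argmin(energies))

header = np.array([best >> i & 1 for i in (2, 1, 0)])   # 3-bit index, clear
tx_frame = np.concatenate([header, candidates[best]])
print("energies:", np.round(energies, 1), "-> chose sequence", best)

# Receiver side: read the index, XOR with the same stored sequence.
idx = int(tx_frame[0]) * 4 + int(tx_frame[1]) * 2 + int(tx_frame[2])
recovered = tx_frame[3:] ^ sequences[idx]
assert np.array_equal(recovered, payload)
```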
2122 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images
Authors: Mehrnoosh Omati, Mahmod Reza Sahebi
Abstract:
Information on land use/land cover change plays an essential role in environmental assessment, planning, and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented using an integration of the marker-controlled watershed algorithm and a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object-based level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. The proposed method can also correctly distinguish homogeneous image parcels.
Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images
2121 Structural Reliability Analysis Using Extreme Learning Machine
Authors: Mehul Srivastava, Sharma Tushar Ravikant, Mridul Krishn Mishra
Abstract:
In structural design, the evaluation of safety and of the probability of failure of a structure is of significant importance, especially when the variables are random. For real structures, structural reliability can be evaluated by obtaining an implicit limit state function, which depends on the statistically independent variables. In this reliability analysis, the statistically independent random variables considered are the applied load intensity and the depth (or height) of the beam member. There are many approaches to structural reliability problems. In this paper, the extreme learning machine (ELM) technique and the first-order second-moment (FOSM) method are used to determine the reliability indices for the same set of variables. The reliability index obtained using ELM is compared with that obtained using FOSM. The higher the reliability index, the more feasible the method for determining the reliability.
Keywords: reliability, reliability index, statistically independent, extreme learning machine
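A minimal ELM sketch, assuming the standard formulation (random, untrained hidden weights; output weights solved in one shot by least squares). The limit state, the distributions of load and depth, and the Monte Carlo step on the surrogate are illustrative assumptions, not the paper's exact procedure.

```python
# ELM surrogate for a toy limit state g(load, depth) = capacity - demand,
# then Monte Carlo on the surrogate -> failure probability and beta index.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def g(load, depth):
    return 4.0 * depth - load                      # toy linear limit state

X = np.column_stack([rng.uniform(50, 150, 500),    # load intensity
                     rng.uniform(20, 40, 500)])    # beam depth
y = g(X[:, 0], X[:, 1])

H_UNITS = 40
W = rng.normal(size=(2, H_UNITS))                  # random, never trained
b = rng.normal(size=H_UNITS)
mu, sd = X.mean(0), X.std(0)

def hidden(A):                                     # sigmoid hidden layer
    return 1 / (1 + np.exp(-(((A - mu) / sd) @ W + b)))

beta_out, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)   # one-shot fit

Xs = np.column_stack([rng.normal(100, 15, 100_000),
                      rng.normal(30, 3, 100_000)])
pf = float((hidden(Xs) @ beta_out < 0).mean())
print(f"pf = {pf:.4f}, reliability index beta = {-norm.ppf(pf):.3f}")
```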
2120 Breast Cancer Detection Using Machine Learning Algorithms
Authors: Jiwan Kumar, Pooja, Sandeep Negi, Anjum Rouf, Amit Kumar, Naveen Lakra
Abstract:
In modern times, health issues are increasing day by day, and breast cancer is one of them; it is crucial to detect it in its early stages. Doctors can use this model to tell their patients whether a cancer is not harmful (benign) or harmful (malignant). We used machine learning to produce the model, with algorithms including logistic regression, random forest, support vector classifier, Bayesian network, and radial basis function. We used data on the crucial features and present the results as pictures to make them easier for doctors to interpret. By doing this, we are making ML better at finding breast cancer, which can lead to saving more lives and better health care.
Keywords: Bayesian network, radial basis function, ensemble learning, understandable, data making better, random forest, logistic regression, breast cancer
2119 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization
Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın
Abstract:
There are many analytical methods to estimate the crack growth life of a component, and soft computing methods show an increasing trend in fatigue life prediction. Their ability to build complex relationships and their capability to handle huge amounts of data are motivating researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and random grid search, to estimate the crack growth life of an aircraft wing splice joint under variable-amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models for this study. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred as a structural element in the aerospace industry, and regarding the crack type, a corner crack is used. A finite element model is built for the joint to calculate fastener loads and stresses on the structure. After the finite element model results are validated with analytical calculations, its findings are fed to the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then used as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between analytical and predicted crack growth lives.
Keywords: aircraft, fatigue, joint, life, optimization, prediction
2118 Performance and Emission Prediction in a Biodiesel Engine Fuelled with Honge Methyl Ester Using RBF Neural Networks
Authors: Shiva Kumar, G. S. Vijay, Srinivas Pai P., Shrinivasa Rao B. R.
Abstract:
In the present study, RBF neural networks were used for predicting the performance and emission parameters of a biodiesel engine. Engine experiments were carried out on a four-stroke diesel engine using blends of diesel and Honge methyl ester as the fuel. Performance parameters such as BTE, BSEC, and Tech, along with emissions from the engine, were measured. These experimental results were used for ANN modeling. RBF center initialization was done both by random selection and by clustering techniques, and the network was trained using fixed and varying widths for the RBF units. The RBF results showed good agreement with the experimental results. Networks trained using the clustering technique gave better results than those using random selection of centers, in terms of reduced MRE and increased prediction accuracy. The average MRE for the performance parameters was 3.25% with a prediction accuracy of 98%, and for emissions it was 10.4% with a prediction accuracy of 80%.
Keywords: radial basis function networks, emissions, performance parameters, fuzzy c means
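A minimal RBF-network sketch of this setup: centers from clustering (plain k-means here as a stand-in; the keywords suggest fuzzy c-means was actually used), Gaussian units with a fixed width, and a least-squares output layer. The data are synthetic stand-ins for the engine inputs and outputs.

```python
# RBF network: cluster-initialized centers + linear readout by least squares.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(200, 2))              # e.g., blend ratio, load
y = 30 + 5 * X[:, 0] - 8 * (X[:, 1] - 0.5)**2 + rng.normal(0, 0.2, 200)

centers = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X).cluster_centers_
width = 0.3                                       # fixed RBF width

def design(A):
    d2 = ((A[:, None, :] - centers[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)

y_hat = design(X) @ w
mre = np.mean(np.abs((y - y_hat) / y)) * 100
print(f"training MRE = {mre:.2f}%")               # cf. ~3.25% in the paper
```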
2117 Bi-Criteria Objective Network Design Model for Multi Period Multi Product Green Supply Chain
Authors: Shahul Hamid Khan, S. Santhosh, Abhinav Kumar Sharma
Abstract:
Environmental performance, along with social performance, is becoming a vital factor for industries seeking to achieve global standards, and global industries with good environmental policies are differentiating themselves from their competitors. This paper concentrates on a multi-stage, multi-product, and multi-period manufacturing network. Bi-objective mathematical models for the total cost and total emissions of the entire forward supply chain are considered. Five different problems, varying the number of suppliers, manufacturers, and environmental levels, are considered to illustrate the mathematical model. A genetic algorithm (GA) and random search are used to find the optimal solution. The input parameters of the optimal solution are used to find the trade-off between the initial investment by the industry and the long-term benefit to the environment.
Keywords: closed loop supply chain, genetic algorithm, random search, green supply chain
2116 Heart Ailment Prediction Using Machine Learning Methods
Authors: Abhigyan Hedau, Priya Shelke, Riddhi Mirajkar, Shreyash Chaple, Mrunali Gadekar, Himanshu Akula
Abstract:
The heart is the coordinating centre of the major endocrine glandular structure of the body, producing hormones that profoundly affect the operations of the body, and diagnosing cardiovascular disease is a difficult but critical task. By extracting knowledge and information about the disease from patient data, data mining is a practical technique to help doctors detect disorders. We use a variety of machine learning methods here, including logistic regression, support vector classifiers (SVC), k-nearest neighbour classifiers (KNN), decision tree classifiers, random forest classifiers, and gradient boosting classifiers. These algorithms are applied to patient data containing 13 different factors to build a system that predicts heart disease in less time and with more accuracy.
Keywords: logistic regression, support vector classifier, k-nearest neighbour, decision tree, random forest and gradient boosting
2115 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms—linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks—for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9,981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores, and the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in predictive accuracy, achieving an accuracy score of 0.93. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
Keywords: random forest, machine learning, comparison, feature extraction
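A sketch of the feature extraction step using the RDKit API: MACCS keys plus a handful of computed descriptors per SMILES string. The exact 20 RDKit properties and 3 structural properties used in the study are not listed in the abstract, so a few common descriptors are chosen here purely for illustration.

```python
# Featurize SMILES strings: MACCS bits + illustrative RDKit descriptors.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors, MACCSkeys

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    maccs = np.array(list(MACCSkeys.GenMACCSKeys(mol)))   # 167 bits (bit 0 unused)
    props = np.array([Descriptors.MolWt(mol),
                      Descriptors.MolLogP(mol),
                      Descriptors.TPSA(mol),
                      Descriptors.NumHDonors(mol),
                      Descriptors.NumHAcceptors(mol)])
    return np.concatenate([maccs, props])

X = np.vstack([featurize(s) for s in ["CCO", "c1ccccc1", "CC(=O)O"]])
print(X.shape)   # (3, 172) with this illustrative set; the study used 189
```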
2114 Kinetic Study of Municipal Plastic Waste
Authors: Laura Salvia Diaz Silvarrey, Anh Phan
Abstract:
Municipal plastic waste (MPW) comprises a mixture of thermoplastics such as high- and low-density polyethylene (HDPE and LDPE), polypropylene (PP), polystyrene (PS), and polyethylene terephthalate (PET). The recycling rate of these plastics is low, e.g., only 27% in 2013; the remainder was incinerated or disposed of in landfills. As MPW generation increases by approximately 5% per annum, MPW management technologies have to be developed to comply with legislation. Pyrolysis, i.e., thermochemical decomposition, provides an excellent alternative for converting MPW into valuable resources like fuels and chemicals. Most studies on waste plastic kinetics have focused only on HDPE and LDPE with a simple assumption of first-order decomposition, which is not the real reaction mechanism. The aim of this study was to develop a kinetic study for each of the polymers in the MPW mixture using thermogravimetric analysis (TGA) over a range of heating rates (5, 10, 20 and 40°C/min) in a N2 atmosphere with a sample size of 1–4 mm. A model-free kinetic method was applied to quantify the activation energy at each level of conversion. The Kissinger-Akahira-Sunose (KAS) and Flynn-Wall-Ozawa (FWO) equations, together with master plots, confirmed that the activation energy was not constant over the whole reaction for any of the five plastics studied, showing that MPW decomposes through a complex mechanism and not by first-order kinetics. Master plots confirmed that MPW decomposes by a random scission mechanism at conversions above 40%. According to the random scission mechanism, different radicals form along the backbone, producing the cleavage of bonds by chain scission into molecules of different lengths. The cleavage of bonds during random scission follows first-order kinetics and is related to the conversion. When a bond is broken, one part of the initial molecule becomes unsaturated and the other a terminal free radical. The latter can react with hydrogen from an adjacent carbon, releasing another free radical and a saturated molecule, or react with another free radical to form an alkane. Not every bond cleavage evaporates a molecule: at early stages of the reaction (conversion below 40% and temperature below 300°C), most products are not short enough to evaporate, and only at higher degrees of conversion does the cleavage of bonds release molecules small enough to evaporate.
Keywords: kinetic, municipal plastic waste, pyrolysis, random scission
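At a fixed conversion level, both isoconversional methods reduce to straight-line fits across heating rates: KAS fits ln(β/T²) versus 1/T, and FWO fits ln β versus 1/T with Doyle's 1.052 factor, the slope giving −Eα/R in each case. The sketch below uses synthetic temperatures generated from an assumed activation energy, not the study's TGA data.

```python
# Isoconversional Ea at one conversion level from slopes across heating rates.
import numpy as np

R = 8.314                                  # J/(mol K)
betas = np.array([5.0, 10, 20, 40])        # heating rates, K/min
Ea_true, C = 180e3, 30.0                   # assumed Ea (J/mol) and intercept

# Synthetic T(alpha) per heating rate, generated from the FWO straight line
T_alpha = 1.052 * Ea_true / (R * (C - np.log(betas)))

x = 1.0 / T_alpha
slope_kas, _ = np.polyfit(x, np.log(betas / T_alpha**2), 1)
slope_fwo, _ = np.polyfit(x, np.log(betas), 1)

print(f"KAS Ea ~ {-slope_kas * R / 1e3:.0f} kJ/mol")          # approximately 180
print(f"FWO Ea ~ {-slope_fwo * R / 1.052 / 1e3:.0f} kJ/mol")  # recovers 180
```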
2113 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete
Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml
Abstract:
Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs, or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most finite element analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack cannot be seen at the element level. Crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and methods of simulation, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. A parameter study was carried out to investigate (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending and (II) crack initiation as a function of the diameter and spacing of the transverse reinforcement bars. The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading with given boundary conditions. To model the uncertainty in the tensile strength of concrete in the finite element analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g., the covariance matrix decomposition method. For all computations, a plastic constitutive law with softening was used to model crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions are validated against experimental studies on R/C panels carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation for the random field parameters for realistic modelling of the uncertainty of the tensile strength is also given. The aim of this research was to show a method by which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be seen in finite element analysis.
Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic
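A minimal sketch of the covariance matrix decomposition method named above, for a 1D lognormal tensile-strength field: build the covariance from an assumed exponential autocorrelation, take its Cholesky factor, and colour i.i.d. standard normals. All statistics are placeholders, not the paper's recommended parameters.

```python
# Covariance matrix decomposition: ft(x) = exp(mu_ln + sigma_ln * (L @ z)).
import numpy as np

rng = np.random.default_rng(7)

x = np.linspace(0.0, 2.0, 200)             # coordinates along the member, m
corr_len = 0.25                            # assumed correlation length, m
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))   # jitter for stability

mean_ft, cov_ft = 3.0, 0.15                # MPa, coefficient of variation
sigma_ln = np.sqrt(np.log(1 + cov_ft**2))  # lognormal parameters from
mu_ln = np.log(mean_ft) - sigma_ln**2 / 2  # mean and CoV

g = L @ rng.standard_normal(len(x))        # correlated Gaussian field
ft = np.exp(mu_ln + sigma_ln * g)          # lognormal tensile strength field
print("realization mean/min/max:",
      ft.mean().round(2), ft.min().round(2), ft.max().round(2))
```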
2112 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil
Authors: M. Seguini, D. Nedjar
Abstract:
An accurate nonlinear analysis of a deep beam resting on elastic perfectly plastic soil is carried out in this study. A nonlinear finite element model for large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on the theory of von Kármán, where the Newton-Raphson incremental iteration method is implemented in a Matlab code to solve the nonlinear equation of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the beam length, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, and the coefficient of variation and correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on the linear and the nonlinear soil model is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlights the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability
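A one-degree-of-freedom sketch of the Newton-Raphson incremental iteration: a cubic hardening term stands in for the von Kármán geometric nonlinearity, and the load is applied in increments with Newton iterations at each level. Stiffness values are illustrative, not taken from the paper.

```python
# Solve k*u + k3*u^3 = F incrementally with Newton-Raphson at each load step.
k_lin, k_cub = 1.0e4, 5.0e6      # linear and cubic stiffness, N/m and N/m^3
F_total, n_inc = 800.0, 8        # total load, number of increments

def residual(u, F):
    return k_lin * u + k_cub * u**3 - F

def tangent(u):                  # consistent tangent stiffness
    return k_lin + 3 * k_cub * u**2

u = 0.0
for inc in range(1, n_inc + 1):
    F = F_total * inc / n_inc
    for _ in range(50):          # Newton-Raphson at this load level
        r = residual(u, F)
        if abs(r) < 1e-8 * F:
            break
        u -= r / tangent(u)
    print(f"F = {F:6.1f} N -> u = {u * 1000:.2f} mm")
```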
2111 On Four Models of a Three Server Queue with Optional Server Vacations
Authors: Kailash C. Madan
Abstract:
We study four models of a three-server queueing system with Bernoulli-schedule optional server vacations. Customers arriving at the system one by one in a Poisson process are provided identical exponential service by three parallel servers according to a first-come, first-served queue discipline. In model A, all three servers may be allowed a vacation at one time; in model B, at most two of the three servers may be allowed a vacation at one time; in model C, at most one server is allowed a vacation; and in model D, no server is allowed a vacation. We study the steady state behavior of the four models and obtain steady state probability generating functions for the queue size at a random point of time for all states of the system. In model D, a known result for a three-server queueing system without server vacations is derived.
Keywords: a three server queue, Bernoulli schedule server vacations, queue size distribution at a random epoch, steady state
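For model D, the no-vacation baseline, the steady state is the textbook M/M/3 result and is easy to compute numerically: the sketch below uses the standard birth-death recursion with arbitrary illustrative rates.

```python
# M/M/c steady state via the balance recursion p_n = p_{n-1} * lam/(mu*min(n,c)).
import numpy as np

lam, mu, c = 2.4, 1.0, 3          # arrival rate, service rate, servers
rho = lam / (c * mu)
assert rho < 1, "queue must be stable"

N = 200                           # truncation point (tail is geometric)
p = np.empty(N)
p[0] = 1.0
for n in range(1, N):
    p[n] = p[n - 1] * lam / (mu * min(n, c))
p /= p.sum()                      # normalize

L_mean = (np.arange(N) * p).sum()           # mean number in system
print(f"utilization rho = {rho:.2f}, mean queue size = {L_mean:.3f}")
```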
2110 Regression Model Evaluation on Depth Camera Data for Gaze Estimation
Authors: James Purnama, Riri Fitri Sari
Abstract:
We investigate the machine learning algorithm selection problem for depth image based eye gaze estimation, with respect to its essential difficulty: reducing the number of required training samples and the duration of training. Statistics-based prediction accuracy is increasingly used to assess and evaluate prediction or estimation in gaze estimation. This article evaluates Root Mean Squared Error (RMSE) and R-Squared statistical analysis to assess machine learning methods on depth camera data for gaze estimation. Four machine learning methods have been evaluated: random forest regression, regression tree, support vector machine (SVM), and linear regression. The experimental results show that random forest regression has the lowest RMSE and the highest R-Squared, which makes it the best among the evaluated methods.
Keywords: gaze estimation, gaze tracking, eye tracking, kinect, regression model, orange python
2109 Characterization of the GntR Family Transcriptional Regulator Rv0792c: A Potential Drug Target for Mycobacterium tuberculosis
Authors: Thanusha D. Abeywickrama, Inoka C. Perera, Genji Kurisu
Abstract:
Tuberculosis, considered the ninth leading cause of death worldwide, is caused by the single infectious agent M. tuberculosis, and the drug-resistant nature of this bacterium is a continuing threat to the world. TB-preventive treatment is therefore expanding, and this study was designed to analyze the regulatory mechanism of the GntR transcriptional regulator gene Rv0792c, which lies between several genes coding for hypothetical proteins, a monooxygenase, and an oxidoreductase. The gene encoding Rv0792c was cloned into pET28a, and the expressed protein was purified to near homogeneity by nickel affinity chromatography. It was previously reported that the protein binds within the intergenic region (BS region) between the Rv0792c gene and the monooxygenase gene (Rv0793). This resulted in the binding of three protein molecules to the BS region, suggesting tight control of the monooxygenase as well as of its own gene. Since the monooxygenase plays a key role in metabolism, this gene may have a global regulatory role. The natural ligand for this regulator is still under investigation. Regarding the Rv0792 protein structure, a circular dichroism (CD) spectrum was recorded to determine its secondary structure elements: 17.4% helix, 21.8% antiparallel, 5.1% parallel, 12.3% turn, and 43.5% other at room temperature. Differential scanning calorimetry (DSC) was conducted to assess the thermal stability of Rv0792; the melting temperature of the protein is 57.2 ± 0.6 °C. The graph of heat capacity (Cp) versus temperature was best fitted by a non-two-state model, from which it is concluded that folding of the Rv0792 protein occurs through stable intermediates. The peak area (ΔH_cal) and peak shape (ΔH_vant) were calculated from the graph, and ΔH_cal/ΔH_vant was close to 0.5, suggesting the dimeric nature of the protein.
Keywords: CD spectrum, DSC analysis, GntR transcriptional regulator, protein structure
2108 Text Based Shuffling Algorithm on Graphics Processing Unit for Digital Watermarking
Authors: Zayar Phyo, Ei Chaw Htoon
Abstract:
In a new-LSB-based steganography method, the Fisher-Yates algorithm is used to permute an existing array randomly. However, that algorithm becomes slower and runs into memory overflow problems when processing images of large dimensions. The text-based shuffling algorithm therefore aims to select only the necessary pixels as hiding positions for characters, at specific positions of an image, according to the length of the input text. In this paper, an enhanced text-based shuffling algorithm is presented that uses the power of the GPU to achieve better performance. The proposed algorithm employs the OpenCL Aparapi framework along with an XORShift kernel serving as the pseudo-random number generator (PRNG); the PRNG produces the random numbers inside the OpenCL kernel. Experiments with the proposed algorithm on the GPU show faster processing speed and better efficiency, without disruption by unnecessary operating system tasks.
Keywords: LSB based steganography, Fisher-Yates algorithm, text-based shuffling algorithm, OpenCL, XORShiftKernel
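A CPU-side Python sketch of the core idea (the paper's implementation is Aparapi/OpenCL on the GPU): a 32-bit xorshift PRNG drives a partial Fisher-Yates shuffle that touches only as many indices as there are message characters, instead of permuting the whole image.

```python
# Partial Fisher-Yates driven by a xorshift32 generator: only len(message)
# swaps are performed, so cost is O(len(message)) instead of O(n_pixels).
def xorshift32(seed):
    state = seed & 0xFFFFFFFF
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

def select_positions(n_pixels, n_needed, seed=123456789):
    rng = xorshift32(seed)
    idx = list(range(n_pixels))
    for i in range(n_needed):            # partial Fisher-Yates shuffle
        j = i + next(rng) % (n_pixels - i)
        idx[i], idx[j] = idx[j], idx[i]
    return idx[:n_needed]

message = "hello"
positions = select_positions(n_pixels=512 * 512, n_needed=len(message))
print(positions)   # pixel indices that will carry the LSB-hidden characters
```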
2107 Building a Stochastic Simulation Model for Blue Crab Population Evolution in Antinioti Lagoon
Authors: Nikolaos Simantiris, Markos Avlonitis
Abstract:
This work builds a simulation platform modeling the spatial diffusion of the invasive species Callinectes sapidus (blue crab) as a random walk, incorporating generation, fatality, and fishing rates to model the time evolution of its population. The Antinioti lagoon in West Greece was used as a testbed for the simulation model. Field measurements from June 2020 to June 2021 of the lagoon's setting, bathymetry, and blue crab juveniles provided the initial population for the simulation, and biological parameters from the current literature were used to calibrate the simulation parameters. The scope of this study is to enable prediction of the evolution of the blue crab population in confined environments of the Ionian Islands region in West Greece. The first results of the simulation experiments show the possibility of a robust prediction of blue crab population evolution in the Antinioti lagoon.
Keywords: antinioti lagoon, blue crab, stochastic simulation, random walk
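A minimal sketch of such a model: agents random-walk on a grid standing in for the lagoon, and each time step applies per-crab birth, death, and fishing probabilities. All rates, grid size, and the initial population are placeholders, not the paper's calibrated values.

```python
# Agent-based random walk with birth, death, and fishing rates.
import numpy as np

rng = np.random.default_rng(8)
GRID, STEPS = 50, 365
BIRTH, DEATH, FISHING = 0.004, 0.002, 0.001   # assumed daily probabilities

pos = rng.integers(0, GRID, size=(200, 2))    # initial juvenile population

for day in range(STEPS):
    # Random walk: each crab moves one cell, clipped to the lagoon boundary.
    pos = np.clip(pos + rng.integers(-1, 2, size=pos.shape), 0, GRID - 1)
    # Fatality and fishing remove crabs...
    pos = pos[rng.random(len(pos)) > (DEATH + FISHING)]
    # ...while generation adds offspring at the parents' cells.
    n_births = rng.binomial(len(pos), BIRTH)
    if n_births:
        parents = pos[rng.integers(0, len(pos), n_births)]
        pos = np.vstack([pos, parents])

print("population after one year:", len(pos))
```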
2106 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model, based on already existing probabilistic graphs, with all the aforesaid characteristics. This work consists of studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
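The keywords point to preferential attachment; a minimal Barabási-Albert-style generator, the classic mechanism producing the power-law degree distribution mentioned above, is sketched below with illustrative parameters (this is the standard textbook model, not necessarily the authors' exact construction).

```python
# Preferential attachment: new nodes link to m existing nodes with
# probability proportional to degree, yielding heavy-tailed degrees.
import random
from collections import Counter

def preferential_attachment(n_nodes, m=2, seed=9):
    rng = random.Random(seed)
    edges = [(0, 1)]                  # seed graph: a single edge
    targets_pool = [0, 1]             # node repeated once per incident edge
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < min(m, len(set(targets_pool))):
            chosen.add(rng.choice(targets_pool))   # degree-proportional pick
        for t in chosen:
            edges.append((new, t))
            targets_pool += [new, t]
    return edges

edges = preferential_attachment(5000)
deg = Counter(v for e in edges for v in e)
print("hubs (node, degree):", deg.most_common(5))  # a few very large hubs
```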