Search results for: random LSB
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2081

1961 Real-Time Path Planning for Unmanned Air Vehicles Using Improved Rapidly-Exploring Random Tree and Iterative Trajectory Optimization

Authors: A. Ramalho, L. Romeiro, R. Ventura, A. Suleman

Abstract:

A real-time path planning framework for Unmanned Air Vehicles, and in particular multi-rotors, is proposed. The framework is designed to provide feasible trajectories from the current UAV position to a goal state, taking into account constraints such as obstacle avoidance, problem kinematics, and vehicle limitations such as maximum speed and maximum acceleration. The framework computes feasible paths online, allowing new, unknown, dynamic obstacles to be avoided without fully re-computing the trajectory. These features are achieved using an iterative process in which the robot computes and optimizes the trajectory while performing the mission objectives. A first trajectory is computed using a modified Rapidly-Exploring Random Tree (RRT) algorithm that provides trajectories respecting a maximum curvature constraint. The trajectory optimization is accomplished using the Interior Point Optimizer (IPOPT) as a solver. The framework has proven able to compute a trajectory and optimize it to a local optimum with a computational efficiency that makes it feasible for real-time operations.
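
A minimal sketch of the kind of curvature-constrained RRT extension step described above; the step size, curvature limit, and rejection rule are illustrative assumptions, and obstacle checking, goal biasing, and the IPOPT smoothing stage are omitted:

```python
import numpy as np

STEP, MAX_CURVATURE = 0.5, 1.0  # hypothetical tuning values

def curvature_ok(p_prev, p_cur, p_new, kappa_max=MAX_CURVATURE):
    """Approximate the turning curvature at p_cur and compare it with the limit."""
    v1, v2 = p_cur - p_prev, p_new - p_cur
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
    # curvature ~ turn angle divided by travelled arc length
    return angle / (np.linalg.norm(v2) + 1e-12) <= kappa_max

def rrt(start, goal, n_iter=2000, goal_tol=0.5, bounds=(0.0, 20.0)):
    nodes, parents = [np.asarray(start, float)], [0]
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        sample = rng.uniform(*bounds, size=2)
        i_near = int(np.argmin([np.linalg.norm(sample - n) for n in nodes]))
        direction = sample - nodes[i_near]
        new = nodes[i_near] + STEP * direction / (np.linalg.norm(direction) + 1e-12)
        if i_near != 0 and not curvature_ok(nodes[parents[i_near]], nodes[i_near], new):
            continue  # reject extensions that would violate the curvature limit
        nodes.append(new)
        parents.append(i_near)
        if np.linalg.norm(new - goal) < goal_tol:
            break
    # walk back from the last node added to recover a path
    path, i = [], len(nodes) - 1
    while i != 0:
        path.append(nodes[i])
        i = parents[i]
    return [nodes[0]] + path[::-1]

print(len(rrt(start=(0, 0), goal=(15, 15))))
```

In the framework described above, a path like this would only be a first guess that the nonlinear optimizer then refines.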

Keywords: interior point optimization, multi-rotors, online path planning, rapidly exploring random trees, trajectory optimization

Procedia PDF Downloads 134
1960 Loan Repayment Prediction Using Machine Learning: Model Development, Django Web Integration and Cloud Deployment

Authors: Seun Mayowa Sunday

Abstract:

Loan prediction is one of the most significant and recognised fields of research in the banking, insurance, and financial security industries. Some prediction systems on the market are built as static software. However, because static software operates only on strictly defined rules, it cannot aid customers beyond these limitations. The application of machine learning (ML) techniques is therefore required for loan prediction. Four separate machine learning models, random forest (RF), decision tree (DT), k-nearest neighbour (KNN), and logistic regression, are used to create the loan prediction model. Using the Anaconda Navigator and the required ML libraries, the models are created and evaluated using appropriate measurement metrics. From the findings, the random forest performs best, with an accuracy of 80.17%, and was therefore implemented into the Django framework. For real-time testing, the web application is deployed on Alibaba Cloud, which is among the four largest cloud computing providers. Hence, to the best of our knowledge, this research serves as the first academic paper that combines model development and the Django framework with deployment to the Alibaba cloud computing platform.
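
A hedged scikit-learn sketch of the model-comparison step described above; the file name, column names, and split are illustrative assumptions rather than the authors' actual dataset or pipeline:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("loan_data.csv")                     # hypothetical file
X, y = df.drop(columns=["loan_repaid"]), df["loan_repaid"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```

The best-scoring model (the random forest in the abstract) would then typically be serialized and served behind a Django view before deployment to the cloud.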

Keywords: k-nearest neighbor, random forest, logistic regression, decision tree, django, cloud computing, alibaba cloud

Procedia PDF Downloads 132
1959 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning

Authors: Kaushik Sathupadi, Sandesh Achar

Abstract:

Human action recognition modeling is a critical task in machine learning. These systems require better techniques for recognizing body parts and selecting optimal features based on vision sensors to identify complex action patterns efficiently. Still, there is a considerable gap and many challenges between images and videos, such as brightness, motion variation, and random clutter. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing along with human and outer-shape detection techniques. Next, we extract valuable information in the form of cues. We extract two distinct features: fuzzy local binary patterns and a sequence representation. Then, we apply a greedy randomized adaptive search procedure for data optimization and dimension reduction, and we use a random forest for classification. We tested our model on two benchmark datasets, AAMAZ and the KTH Multi-view Football dataset. Our HMR framework significantly outperforms the other state-of-the-art approaches and achieves better recognition rates of 91% and 89.6% on the AAMAZ and KTH Multi-view Football datasets, respectively.

Keywords: computer vision, human motion analysis, random forest, machine learning

Procedia PDF Downloads 35
1958 Similarities and Differences in Values of Young Women and Their Parents: The Effect of Value Transmission and Value Change

Authors: J. Fryt, K. Pietras, T. Smolen

Abstract:

Intergenerational similarities in values may be an effect of value transmission within families or of socio-cultural trends prevailing at a specific point in time. According to the salience hypothesis, salient family values may be transmitted more frequently. On the other hand, many value studies reveal a generational shift from social values (conservation and self-transcendence) to more individualistic values (openness to change and self-enhancement), suggesting that value transmission and value change are two different processes. The first aim of our study was to describe similarities and differences in the values of young women and their parents. The second aim was to determine which value similarities may be due to transmission within families. Ninety-seven Polish women aged 19-25 and both their mothers and fathers filled in the Portrait Values Questionnaire. Intergenerational similarities in values between women were found in a strong preference for benevolence, universalism, and self-direction as well as a low preference for power. Similarities between younger women and older men were found in a strong preference for universalism and hedonism as well as a lower preference for security and tradition. Young women differed from the older generation in a strong preference for stimulation and achievement as well as a low preference for conformity. To identify the origin of intergenerational similarities (whether or not they are the effect of value transmission within families), we compared correlations of values in family dyads (mother-daughter, father-daughter) with the distribution of correlations in random intergenerational dyads (random mother-daughter, random father-daughter) as well as peer dyads (random daughter-daughter). Values representing conservation (security, tradition, and conformity) as well as benevolence and power were transmitted in families between women. Achievement, power, and security were transmitted between fathers and daughters. Similarities in openness to change (self-direction, stimulation, and hedonism) and universalism were not stronger within families than in random intergenerational and peer dyads. Taken together, our findings suggest that despite a noticeable generational shift from social to more individualistic values, we can observe transmission of parents' salient values such as security, tradition, benevolence, and achievement.

Keywords: value transmission, value change, intergenerational similarities, differences in values

Procedia PDF Downloads 428
1957 A Study of Classification Models to Predict Drill-Bit Breakage Using Degradation Signals

Authors: Bharatendra Rai

Abstract:

Cutting tools are widely used in manufacturing processes, and drilling is the most commonly used machining process. Although the drill-bits used in drilling may not be expensive, their breakage can cause damage to the expensive workpiece being drilled and, at the same time, has a major impact on productivity. Predicting drill-bit breakage, therefore, is important in reducing cost and improving productivity. This study uses twenty features extracted from two degradation signals, viz., thrust force and torque. The methodology involves developing and comparing decision tree, random forest, and multinomial logistic regression models for classifying and predicting drill-bit breakage using degradation signals.
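
A hedged sketch of one such classifier built from summary features of the thrust-force and torque signals; the specific features, signals, and labels below are illustrative stand-ins (the study uses twenty features), and the decision tree or random forest would be fitted the same way:

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.linear_model import LogisticRegression

def signal_features(thrust, torque):
    """Summary statistics per signal; stand-ins for the paper's twenty features."""
    feats = []
    for s in (thrust, torque):
        feats += [s.mean(), s.std(), s.max(), np.ptp(s), skew(s), kurtosis(s)]
    return feats

# hypothetical recorded signals and breakage labels (0 = healthy, 1 = broken)
signals = [(np.random.rand(500), np.random.rand(500)) for _ in range(40)]
labels = np.random.randint(0, 2, size=40)

X = np.array([signal_features(t, q) for t, q in signals])
clf = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, labels)
print(clf.predict(X[:5]))
```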

Keywords: degradation signal, drill-bit breakage, random forest, multinomial logistic regression

Procedia PDF Downloads 350
1956 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning archetype that could forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to Coronavirus is collected from the World Health Organisation (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data is split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms are chosen to study the model performance in the prediction of new COVID-19 cases. The statistical performance of the models in predicting new COVID cases is evaluated using metrics such as the r-squared value and mean squared error. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
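
A hedged sketch of the forecasting experiment: an 8:2 split and a Random Forest regressor scored with R-squared and mean squared error. The lag-feature construction and file name are illustrative assumptions, not the authors' exact preprocessing:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

def make_lag_features(series, n_lags=30):
    """Turn a daily new-cases series into (previous n_lags days, next-day target) pairs."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

new_cases = np.loadtxt("uk_daily_new_cases.csv")    # hypothetical WHO-derived file
X, y = make_lag_features(new_cases, n_lags=30)      # n=30 as in the abstract
split = int(0.8 * len(X))                           # 8:2 train/test ratio
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
pred = rf.predict(X_test)
print("R^2:", r2_score(y_test, pred), "MSE:", mean_squared_error(y_test, pred))
```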

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest

Procedia PDF Downloads 119
1955 Different Sampling Schemes for Semi-Parametric Frailty Model

Authors: Nursel Koyuncu, Nihal Ata Tutkun

Abstract:

The frailty model is a survival model that takes into account unobserved heterogeneity when exploring the relationship between the survival of an individual and several covariates. In recent years, proposed survival models have become more complex, and this feature causes convergence problems, especially in large data sets. Therefore, the selection of a sample from these big data sets is very important for the estimation of parameters. In the sampling literature, some authors have defined new sampling schemes to predict the parameters correctly. To this end, we examine the effect of sampling design on the semi-parametric frailty model. We conducted a simulation study in the R programme to estimate the parameters of the semi-parametric frailty model for different sample sizes and censoring rates under classical simple random sampling and ranked set sampling schemes. In the simulation study, we used as the population a data set recording 17260 male civil servants aged 40–64 years with complete 10-year follow-up. Time to death from coronary heart disease is treated as the survival time, and age and systolic blood pressure are used as covariates. We select 1000 samples from the population using the different sampling schemes and estimate the parameters. From the simulation study, we conclude that the ranked set sampling design performs better than simple random sampling for each scenario.
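
For readers unfamiliar with ranked set sampling, the following Python sketch (the study itself used R) contrasts one cycle of ranked set sampling with simple random sampling when estimating a mean; the population, set size, and perfect ranking are illustrative assumptions, and the frailty-model estimation step is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.lognormal(mean=0.0, sigma=1.0, size=17260)   # stand-in for the cohort

def srs_mean(pop, n):
    return rng.choice(pop, size=n, replace=False).mean()

def rss_mean(pop, set_size, n_cycles):
    """Ranked set sampling: in each cycle draw set_size sets of set_size units,
    rank each set, and keep the i-th order statistic from the i-th set."""
    kept = []
    for _ in range(n_cycles):
        for i in range(set_size):
            candidates = rng.choice(pop, size=set_size, replace=False)
            kept.append(np.sort(candidates)[i])
    return np.mean(kept)

n, m = 30, 5
srs_est = [srs_mean(population, n) for _ in range(2000)]
rss_est = [rss_mean(population, m, n // m) for _ in range(2000)]
print("SRS variance:", np.var(srs_est), "RSS variance:", np.var(rss_est))
```

The smaller Monte Carlo variance of the RSS estimator mirrors the efficiency gain the abstract reports for the frailty-model parameters.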

Keywords: frailty model, ranked set sampling, efficiency, simple random sampling

Procedia PDF Downloads 209
1954 Integrating Process Planning, WMS Dispatching, and WPPW Weighted Due Date Assignment Using a Genetic Algorithm

Authors: Halil Ibrahim Demir, Tarık Cakar, Ibrahim Cil, Muharrem Dugenci, Caner Erden

Abstract:

Conventionally, process planning, scheduling, and due-date assignment functions are performed separately and sequentially. The interdependence of these functions requires integration. Although integrated process planning and scheduling, and scheduling with due date assignment problems are popular research topics, only a few works address the integration of these three functions. This work focuses on the integration of process planning, WMS scheduling, and WPPW due date assignment. Another novelty of this work is the use of a weighted due date assignment. In the literature, due dates are generally assigned without considering the importance of customers. However, in this study, more important customers get closer due dates. Typically, only tardiness is punished, but the JIT philosophy punishes both earliness and tardiness. In this study, all weighted earliness, tardiness, and due date related costs are penalized. As no customer desires distant due dates, such distant due dates should be penalized. In this study, various levels of integration of these three functions are tested and genetic search and random search are compared both with each other and with ordinary solutions. Higher integration levels are superior, while search is always useful. Genetic searches outperformed random searches.

Keywords: process planning, weighted scheduling, weighted due-date assignment, genetic algorithm, random search

Procedia PDF Downloads 392
1953 Fusion Models for Cyber Threat Defense: Integrating Clustering, Random Forests, and Support Vector Machines to Defend Against Windows Malware

Authors: Azita Ramezani, Atousa Ramezani

Abstract:

In the ever-escalating landscape of Windows malware, the necessity for pioneering defense strategies becomes undeniable. This study introduces an avant-garde approach fusing the capabilities of clustering, random forests, and support vector machines (SVM) to combat the intricate web of cyber threats. Our fusion model achieves a staggering accuracy of 98.67% and an equally formidable F1 score of 98.68%, a testament to its effectiveness in the realm of Windows malware defense. By deciphering the intricate patterns within malicious code, our model not only raises the bar for detection precision but also redefines the paradigm of cybersecurity preparedness. This breakthrough underscores the potential embedded in the fusion of diverse analytical methodologies and signals a paradigm shift in fortifying against the relentless evolution of Windows malware threats. As we traverse the dynamic cybersecurity terrain, this research serves as a beacon illuminating the path toward a resilient future where innovative fusion models stand at the forefront of cyber threat defense.
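
One possible reading of such a fusion, sketched in scikit-learn under stated assumptions: a KMeans cluster label is appended as an extra feature and a random forest and an SVM are soft-voted. The feature files, cluster count, and fusion rule are illustrative, not the authors' actual design:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

X = np.load("pe_features.npy")        # hypothetical static malware features
y = np.load("labels.npy")             # 1 = malware, 0 = benign (hypothetical)

clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
X_fused = np.column_stack([X, clusters])        # clustering output as an extra feature

X_tr, X_te, y_tr, y_te = train_test_split(X_fused, y, test_size=0.2, random_state=0)
fusion = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
                ("svm", SVC(kernel="rbf", probability=True, random_state=0))],
    voting="soft",
).fit(X_tr, y_tr)

pred = fusion.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "F1:", f1_score(y_te, pred))
```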

Keywords: fusion models, cyber threat defense, windows malware, clustering, random forests, support vector machines (SVM), accuracy, f1-score, cybersecurity, malicious code detection

Procedia PDF Downloads 70
1952 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble

Authors: Jaehong Yu, Seoung Bum Kim

Abstract:

Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focus on unsupervised feature ranking methods, which evaluate features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into the ensemble importance scores. Moreover, FRRM does not require the determination of the true number of clusters in advance, thanks to the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrate that the proposed FRRM outperforms the competitors.

Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking

Procedia PDF Downloads 335
1951 Application of Fuzzy Logic to Design and Coordinate Parallel Behaviors for a Humanoid Mobile Robot

Authors: Nguyen Chan Hung, Mai Ngoc Anh, Nguyen Xuan Ha, Tran Xuan Duc, Dang Bao Lam, Nguyen Hoang Viet

Abstract:

This paper presents the design and implementation of a navigation controller for a humanoid mobile robot platform operating in indoor office environments. In order to fulfil the requirement of recognizing and approaching humans to provide service while avoiding random obstacles, a behavior-based fuzzy logic controller was designed to coordinate multiple behaviors simultaneously. Experiments in a real office environment showed that the fuzzy controller deals well with complex scenarios without colliding with random objects or humans.

Keywords: behavior control, fuzzy logic, humanoid robot, mobile robot

Procedia PDF Downloads 419
1950 Investigation of Different Stimulation Patterns to Reduce Muscle Fatigue during Functional Electrical Stimulation

Authors: R. Ruslee, H. Gollee

Abstract:

Functional electrical stimulation (FES) is a commonly used technique in rehabilitation and is often associated with rapid muscle fatigue, which becomes the limiting factor in its applications. The objective of this study is to investigate the effects on the onset of fatigue of conventional synchronous stimulation, as well as of asynchronous stimulation that mimics voluntary muscle activation by targeting different motor units, which are activated sequentially or randomly via multiple pairs of stimulation electrodes. We investigate three different approaches with various electrode configurations, as well as different patterns of stimulation applied to the gastrocnemius muscle: Conventional Synchronous Stimulation (CSS), Asynchronous Sequential Stimulation (ASS), and Asynchronous Random Stimulation (ARS). Stimulation was applied repeatedly for 300 ms followed by 700 ms of no stimulation, with a 40 Hz effective frequency for all protocols. Ten able-bodied volunteers (28±3 years old) participated in this study. As fatigue indicators, we focused on the analysis of the Normalized Fatigue Index (NFI), Fatigue Time Interval (FTI), and pre-post Twitch-Tetanus Ratio (ΔTTR). The results demonstrated that ASS and ARS give a higher NFI and a longer FTI, confirming less fatigue for asynchronous stimulation. In addition, ASS and ARS resulted in a higher ΔTTR than conventional CSS. In this study, we proposed a randomly distributed stimulation method for the application of FES and investigated its suitability for reducing muscle fatigue compared to previously applied methods. The results validate that asynchronous stimulation reduces fatigue and indicate that random stimulation may improve fatigue resistance in some conditions.

Keywords: asynchronous stimulation, electrode configuration, functional electrical stimulation (FES), muscle fatigue, pattern stimulation, random stimulation, sequential stimulation, synchronous stimulation

Procedia PDF Downloads 306
1949 Hyperchaos-Based Video Encryption for Device-To-Device Communications

Authors: Samir Benzegane, Said Sadoudi, Mustapha Djeddou

Abstract:

In this paper, we present a software implementation of video streaming encryption for Device-to-Device (D2D) communications using a Hyperchaos-based Random Number Generator (HRNG) implemented in C#. The software implements and uses the proposed HRNG to generate the key stream for encrypting and decrypting real-time video data. The HRNG consists of a hyperchaotic Lorenz system that produces four signal outputs taken as encryption keys. The generated keys are characterized by high-quality randomness, which is confirmed by passing the standard NIST statistical tests. Security analysis of the proposed encryption scheme confirms its robustness against different attacks.
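
A hedged Python sketch of the general idea: a four-dimensional hyperchaotic Lorenz-type system is integrated and one state variable is quantised into a keystream that is XORed with the frame bytes. The exact equations, parameters, and byte extraction used by the authors (in C#) may well differ, and this toy quantisation would still need to pass the NIST tests:

```python
import numpy as np

def hyperchaotic_lorenz(state, a=10.0, b=8.0 / 3.0, c=28.0, r=-1.0):
    """A commonly cited 4-D hyperchaotic Lorenz system (assumed form, not the paper's)."""
    x, y, z, w = state
    return np.array([a * (y - x) + w,
                     c * x - y - x * z,
                     x * y - b * z,
                     -y * z + r * w])

def keystream(n_bytes, state=(0.1, 0.2, 0.3, 0.4), dt=0.001, burn_in=5000):
    """Integrate with an RK4 step and map one state variable to bytes."""
    s = np.array(state, dtype=float)
    out = bytearray()
    for i in range(burn_in + n_bytes):
        k1 = hyperchaotic_lorenz(s)
        k2 = hyperchaotic_lorenz(s + 0.5 * dt * k1)
        k3 = hyperchaotic_lorenz(s + 0.5 * dt * k2)
        k4 = hyperchaotic_lorenz(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if i >= burn_in:
            out.append(int(abs(s[0]) * 1e6) % 256)   # crude quantisation to one byte
    return bytes(out)

frame = b"example video frame bytes"
ks = keystream(len(frame))
cipher = bytes(f ^ k for f, k in zip(frame, ks))     # XOR stream encryption
print(cipher.hex())
```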

Keywords: hyperchaos Lorenz system, hyperchaos-based random number generator, D2D communications, C#

Procedia PDF Downloads 369
1948 Comparative Study of Three Artificial Intelligence Techniques for Rain Domain in Precipitation Forecast

Authors: Nabilah Filzah Mohd Radzuan, Andi Putra, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan

Abstract:

Precipitation forecasting is important for avoiding natural disaster incidents that can cause losses in the affected area. This paper reviews three techniques used in making precipitation forecasts: logistic regression, decision tree, and random forest. Combining these techniques through the vector auto-regression (VAR) model helps in identifying the advantages and strengths of each technique in the forecasting process. The dataset contains variables from the rain domain. Applying artificial intelligence techniques to the rain domain makes the forecasting process easier and more systematic.

Keywords: logistic regression, decision tree, random forest, VAR model

Procedia PDF Downloads 444
1947 Dynamic Response Analysis of Structure with Random Parameters

Authors: Ahmed Guerine, Ali El Hafidi, Bruno Martin, Philippe Leclaire

Abstract:

In this paper, we propose a method for the dynamic response of multi-storey structures with uncertain-but-bounded parameters. The effectiveness of the proposed method is demonstrated by a numerical example of a three-storey structure. The equation of motion is integrated numerically using Newmark's method, and the numerical results are obtained by the proposed method. The results of the interval analysis method are compared with those of a probabilistic approach. The interval analysis method provides a mean curve that lies between the upper and lower bounds obtained from the probabilistic approach.

Keywords: multi-storey structure, dynamic response, interval analysis method, random parameters

Procedia PDF Downloads 188
1946 Comparison Study of Machine Learning Classifiers for Speech Emotion Recognition

Authors: Aishwarya Ravindra Fursule, Shruti Kshirsagar

Abstract:

In the intersection of artificial intelligence and human-centered computing, this paper delves into speech emotion recognition (SER). It presents a comparative analysis of machine learning models such as K-Nearest Neighbors (KNN), logistic regression, support vector machines (SVM), decision trees, ensemble classifiers, and random forests applied to SER. The research employs four datasets: CREMA-D, SAVEE, TESS, and RAVDESS. It focuses on extracting salient audio signal features such as the Zero Crossing Rate (ZCR), Chroma_stft, Mel Frequency Cepstral Coefficients (MFCC), root mean square (RMS) value, and Mel Spectrogram. These features are used to train and evaluate the models' ability to recognize eight types of emotions from speech: happy, sad, neutral, angry, calm, disgust, fear, and surprise. Among the models, the Random Forest algorithm demonstrated superior performance, achieving approximately 79% accuracy. This suggests its suitability for SER within the parameters of this study. The research contributes to SER by showcasing the effectiveness of various machine learning algorithms and feature extraction techniques. The findings hold promise for the development of more precise emotion recognition systems in the future. This abstract provides a succinct overview of the paper's content, methods, and results.
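
A hedged sketch of the feature-extraction step with librosa followed by a Random Forest; aggregating each feature by its mean and the placeholder file list are illustrative assumptions, not the paper's exact feature vector or dataset handling:

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path):
    y, sr = librosa.load(path, sr=None)
    feats = [
        librosa.feature.zero_crossing_rate(y).mean(),
        librosa.feature.rms(y=y).mean(),
        librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1),
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1),
        librosa.feature.melspectrogram(y=y, sr=sr).mean(axis=1),
    ]
    return np.hstack([np.atleast_1d(f) for f in feats])

# 'files' and 'labels' (eight emotion classes) would come from CREMA-D, SAVEE,
# TESS, and RAVDESS; the names below are placeholders.
files, labels = ["happy_001.wav", "sad_001.wav"], ["happy", "sad"]
X = np.vstack([extract_features(f) for f in files])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
```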

Keywords: comparison, ML classifiers, KNN, decision tree, SVM, random forest, logistic regression, ensemble classifiers

Procedia PDF Downloads 42
1945 The Impact of Access to Microcredit Programme on Women Empowerment: A Case Study of Cowries Microfinance Bank in Lagos State, Nigeria

Authors: Adijat Olubukola Olateju

Abstract:

Women empowerment is an essential development tool in every economy, especially in less developed countries, as it helps to enhance women's socio-economic well-being. Empirical evidence has shown that microcredit is an effective tool for enhancing women empowerment, especially in developing countries. This paper, therefore, investigates the impact of a microcredit programme on women empowerment in Lagos State, Nigeria. The study used Cowries Microfinance Bank (CMB) as a case study bank, and a total of 359 women entrepreneurs were selected by a simple random sampling technique from the list of Cowries Microfinance Bank. Selection bias, which could arise from non-random selection of participants or non-random placement of the programme, was adjusted for by dividing the data into participant and non-participant women entrepreneurs. The data were analyzed with a Propensity Score Matching (PSM) technique. The Average Treatment Effect on the Treated (ATT) obtained from the PSM indicates that the credit programme has a significant effect on the empowerment of women in the study area. It is therefore recommended that microfinance banks be encouraged to give loans to women and that, for the impact of the loan to be felt more strongly by the beneficiaries, the loan programme be complemented with other measures such as training, grants, and periodic monitoring.
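
A hedged sketch of propensity score matching with one-to-one nearest-neighbour matching and an ATT estimate; the covariate names, outcome variable, and survey file are hypothetical, and the study's actual matching specification may differ:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("cowries_survey.csv")      # hypothetical survey file
covariates = ["age", "education", "household_size", "business_age"]
X, treated = df[covariates].values, df["participant"].values   # 1 = borrower
outcome = df["empowerment_score"].values

# 1. Propensity scores from a logistic regression of participation on covariates.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2. One-to-one nearest-neighbour matching (with replacement) on the propensity score.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = [control_idx[np.argmin(np.abs(ps[control_idx] - ps[i]))] for i in treated_idx]

# 3. Average treatment effect on the treated (ATT).
att = np.mean(outcome[treated_idx] - outcome[matches])
print("ATT:", att)
```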

Keywords: empowerment, microcredit, socio-economic wellbeing, development

Procedia PDF Downloads 302
1944 Study of Transport Phenomena in Photonic Crystals with Correlated Disorder

Authors: Samira Cherid, Samir Bentata, Feyza Zahira Meghoufel, Yamina Sefir, Sabria Terkhi, Fatima Bendahma, Bouabdellah Bouadjemi, Ali Zitouni

Abstract:

Using the transfer-matrix technique and the Kronig-Penney model, we numerically and analytically investigate the effect of the short-range correlated disorder of the random dimer model (RDM) on the transmission properties of light in one-dimensional photonic crystals made of three different materials. Such systems consist of two different structures randomly distributed along the growth direction, with the additional constraint that one kind of these layers appears in pairs. It is shown that one-dimensional random dimer photonic crystals support two types of extended modes. By shifting the dimer resonance toward the host fundamental stationary resonance state, we demonstrate the existence of a ballistic response in these systems.

Keywords: photonic crystals, disorder, correlation, transmission

Procedia PDF Downloads 476
1943 A Quantitative Structure-Adsorption Study on Novel and Emerging Adsorbent Materials

Authors: Marc Sader, Michiel Stock, Bernard De Baets

Abstract:

Considering the large amount of adsorption data for adsorbate gases on adsorbent materials in the literature, it is interesting to predict such adsorption data without experimentation. A quantitative structure-activity relationship (QSAR) is developed to correlate molecular characteristics of gases and existing knowledge of materials with their respective adsorption properties. The application of Random Forest, a machine learning method, to a set of adsorption isotherms at a wide range of partial pressures and concentrations is studied. The predicted adsorption isotherms are fitted to several adsorption equations to estimate the adsorption properties. To impute the adsorption properties of desired gases on desired materials, leave-one-out cross-validation is employed. Extensive experimental results for a range of settings are reported.
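
A hedged sketch of the Random Forest plus leave-one-out setup: each row pairs gas descriptors with material descriptors and the target is a fitted adsorption parameter. The descriptor names, target, and file are illustrative assumptions:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

data = pd.read_csv("adsorption_descriptors.csv")      # hypothetical table
X = data[["molar_mass", "polarizability", "kinetic_diameter",
          "surface_area", "pore_volume"]].values
y = data["langmuir_capacity"].values                   # hypothetical fitted parameter

model = RandomForestRegressor(n_estimators=500, random_state=0)
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
print("LOOCV R^2:", r2_score(y, pred))
```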

Keywords: adsorption, predictive modeling, QSAR, random forest

Procedia PDF Downloads 225
1942 Solving Process Planning, Weighted Earliest Due Date Scheduling and Weighted Due Date Assignment Using Simulated Annealing and Evolutionary Strategies

Authors: Halil Ibrahim Demir, Abdullah Hulusi Kokcam, Fuat Simsir, Özer Uygun

Abstract:

Traditionally, the three important manufacturing functions of process planning, scheduling, and due-date assignment are performed sequentially and separately. Although there are numerous works on the integration of process planning and scheduling and plenty of works focusing on scheduling with due date assignment, there are only a few works on the integration of process planning, scheduling, and due-date assignment. Although due dates are generally determined in the literature without taking the weights of customers into account, weighted due-date assignment is employed here to get better performance. Jobs are scheduled according to the weighted earliest due date dispatching rule, and due dates are determined according to some popular due date assignment methods by taking the weight of each job into account. Simulated Annealing, Evolutionary Strategies, Random Search, a hybrid of Random Search and Simulated Annealing, and a hybrid of Random Search and Evolutionary Strategies are applied as solution techniques. The three important manufacturing functions are integrated step by step, and higher integration levels are found to be better. Search meta-heuristics are found to be very useful for improving the performance measure.
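
A minimal sketch of the simulated annealing ingredient on a deliberately simplified version of the problem: one machine, a single due-date tightness factor, and a weighted earliness-plus-tardiness cost. The full study integrates process plan selection, WEDD dispatching, and several due-date rules, none of which is reproduced here:

```python
import math
import random

random.seed(0)
jobs = [{"p": random.randint(1, 10), "w": random.randint(1, 5)} for _ in range(12)]
K = 1.5                                     # hypothetical due-date tightness factor

def cost(sequence):
    """Weighted earliness + tardiness with due dates set as K * processing time."""
    t, total = 0, 0.0
    for j in sequence:
        t += jobs[j]["p"]
        due = K * jobs[j]["p"]
        total += jobs[j]["w"] * abs(t - due)   # penalise earliness and tardiness alike
    return total

current = list(range(len(jobs)))
best, temperature = list(current), 100.0
while temperature > 0.1:
    i, j = random.sample(range(len(jobs)), 2)
    candidate = list(current)
    candidate[i], candidate[j] = candidate[j], candidate[i]     # swap move
    delta = cost(candidate) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = candidate
        if cost(current) < cost(best):
            best = list(current)
    temperature *= 0.995                      # geometric cooling schedule
print(cost(best), best)
```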

Keywords: process planning, weighted scheduling, weighted due-date assignment, simulated annealing, evolutionary strategies, hybrid searches

Procedia PDF Downloads 461
1941 Texture-Based Image Forensics from Video Frame

Authors: Li Zhou, Yanmei Fang

Abstract:

With current technology, images and videos can be obtained more easily than ever. It is easy to manipulate such digital multimedia information once it is obtained, so the content or source of an image or video can easily be tampered with. In this paper, we propose to identify the image and video frame by a texture-based approach, e.g. the Markov Transition Probability (MTP), computed in the spatial domain, DCT domain, and DWT domain, respectively. In the experiment, an image and video frame database is constructed and used to train and test a Support Vector Machine (SVM) classifier. Experimental results show that the texture-based approach has good performance. In order to verify the experimental results and test the universality and robustness of the algorithm, we built a random testing dataset; the random testing results are in keeping with the above experiment.

Keywords: multimedia forensics, video frame, LBP, MTP, SVM

Procedia PDF Downloads 424
1940 Investigating the Efficiency of Stratified Double Median Ranked Set Sample for Estimating the Population Mean

Authors: Mahmoud I. Syam

Abstract:

A stratified double median ranked set sampling (SDMRSS) method is suggested for estimating the population mean. SDMRSS is compared with simple random sampling (SRS), stratified simple random sampling (SSRS), and stratified ranked set sampling (SRSS). It is shown that the SDMRSS estimator is an unbiased estimator of the population mean and more efficient than SRS, SSRS, and SRSS. Also, with SDMRSS, we can increase the efficiency of the mean estimator for a specific value of the sample size. SDMRSS is applied to real-life examples, and the results of the examples agree with the theoretical results.

Keywords: efficiency, double ranked set sampling, median ranked set sampling, ranked set sampling, stratified

Procedia PDF Downloads 246
1939 Leveraging SHAP Values for Effective Feature Selection in Peptide Identification

Authors: Sharon Li, Zhonghang Xia

Abstract:

Post-database search is an essential phase in peptide identification using tandem mass spectrometry (MS/MS) to refine peptide-spectrum matches (PSMs) produced by database search engines. These engines frequently face difficulty differentiating between correct and incorrect peptide assignments. Despite advances in statistical and machine learning methods aimed at improving the accuracy of peptide identification, challenges remain in selecting critical features for these models. In this study, two machine learning models, a random forest and a support vector machine, were applied to three datasets to enhance PSMs. SHAP values were utilized to determine the significance of each feature within the models. The experimental results indicate that the random forest model consistently outperformed the SVM across all datasets. Further analysis of SHAP values revealed that the importance of features varies depending on the dataset, indicating that a feature's role in model predictions can differ significantly. This variability in feature selection can lead to substantial differences in model performance, with false discovery rate (FDR) differences exceeding 50% between different feature combinations. Through SHAP value analysis, the most effective feature combinations were identified, significantly enhancing model performance.
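
A hedged sketch of ranking rescoring features by mean absolute SHAP value for a random forest using the shap package; the feature names are placeholders for typical PSM scores and the data files are hypothetical:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

feature_names = ["xcorr", "delta_cn", "mass_error_ppm", "charge", "peptide_length"]
X = np.load("psm_features.npy")          # hypothetical PSM feature matrix
y = np.load("psm_labels.npy")            # 1 = correct match, 0 = decoy/incorrect

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X)
# older shap versions return a list of per-class arrays; newer ones a 3-D array
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
importance = np.abs(sv_pos).mean(axis=0)          # mean |SHAP| per feature
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```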

Keywords: peptide identification, SHAP value, feature selection, random forest tree, support vector machine

Procedia PDF Downloads 21
1938 Hydrodynamic Characteristics of Single and Twin Offshore Rubble Mound Breakwaters under Regular and Random Waves

Authors: M. Alkhalidi, S. Neelamani, Z. Al-Zaqah

Abstract:

This paper investigates the interaction of single and twin offshore rubble mound breakwaters with regular and random water waves through physical modeling to assess their reflection, transmission, and energy dissipation characteristics. Various combinations of wave heights and wave periods were utilized in a series of experiments, along with three different water depths. The single and twin permeable breakwater models were both constructed with one layer of rubble. Both models had the same total volume; however, the single breakwater was of the trapezoidal type, while the twin breakwaters were of the triangular type. Physical modeling experiments were carried out in the wave flume of the coastal engineering laboratory of the Kuwait Institute for Scientific Research (KISR). Measurements from the six wave probes fixed in the two-dimensional wave flume were collected and used to determine the generated incident wave heights, as well as the reflected and transmitted wave heights resulting from the wave-breakwater interaction. The possible factors affecting the wave attenuation efficiency of the breakwater models are the relative water depth (d/L), wave steepness (H/L), relative wave height ((h-d)/Hi), relative height of the breakwater (h/d), and relative clear spacing between the twin breakwaters (S/h). The results indicated that the single and twin breakwaters respond differently to changes in their relative height as well as the relative wave height, which demonstrates that the effect of the relative water depth on wave reflection, transmission, and energy dissipation is highly influenced by changes in the relative breakwater height, the relative wave height, and the relative breakwater spacing. In general, within the range of relative water depths tested in this study, and under both regular and random waves, it is found that the single breakwater allows lower wave transmission and shows a higher energy dissipation effect than both of the tested twin breakwaters, and hence has the best overall performance.

Keywords: random waves, regular waves, relative water depth, relative wave height, single breakwater, twin breakwater, wave steepness

Procedia PDF Downloads 325
1937 Facial Recognition on the Basis of Facial Fragments

Authors: Tetyana Baydyk, Ernst Kussul, Sandra Bonilla Meza

Abstract:

There are many articles that attempt to establish the role of different facial fragments in face recognition, and various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment; this approach can only give an approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and to experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as the method for face recognition and parts of the LFW (Labeled Faces in the Wild) face database as the training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.

Keywords: face recognition, labeled faces in the wild (LFW) database, random local descriptor (RLD), random features

Procedia PDF Downloads 359
1936 On the Use of Analytical Performance Models to Design a High-Performance Active Queue Management Scheme

Authors: Shahram Jamali, Samira Hamed

Abstract:

One of the open issues in the Random Early Detection (RED) algorithm is how to set its parameters to reach high performance under the dynamic conditions of the network. Although the original RED uses fixed values for its parameters, this paper follows a model-based approach to upgrade the performance of the RED algorithm. It models the router's queue behavior using a Markov model and uses this model to predict future conditions of the queue. This prediction helps the proposed algorithm tune RED's parameters and provide better efficiency and performance. Extensive packet-level simulations confirm that the proposed algorithm, called Markov-RED, outperforms RED and FARED in terms of queue stability, bottleneck utilization, and dropped packet count.
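
For context, a minimal sketch of the classic RED drop-probability computation that such a scheme would adapt: an EWMA of the queue length and a linear drop-probability ramp between two thresholds. The Markov prediction and parameter-tuning layer described in the abstract is not reproduced here, and the threshold values are illustrative:

```python
import random

class REDQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p, self.weight = min_th, max_th, max_p, weight
        self.avg = 0.0          # EWMA of the instantaneous queue length
        self.queue = []

    def enqueue(self, packet, capacity=30):
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop = False
        elif self.avg >= self.max_th:
            drop = True
        else:                   # linear ramp of the drop probability
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if drop or len(self.queue) >= capacity:
            return False        # packet dropped
        self.queue.append(packet)
        return True

q, dropped = REDQueue(), 0
for i in range(1000):
    if random.random() < 0.6 and q.queue:   # random departures from the queue
        q.queue.pop(0)
    if not q.enqueue(i):
        dropped += 1
print("dropped:", dropped)
```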

Keywords: active queue management, RED, Markov model, random early detection algorithm

Procedia PDF Downloads 539
1935 Sensitivity Analysis of Principal Stresses in Concrete Slab of Rigid Pavement Made From Recycled Materials

Authors: Aleš Florian, Lenka Ševelová

Abstract:

A comprehensive sensitivity analysis of stresses in the concrete slab of a real rigid pavement made from recycled materials is performed. The computational model of the pavement is designed as a spatial (3D) model based on a nonlinear variant of the finite element method that respects the structural nonlinearity and enables the modeling of different arrangements of joints, and the entire model can be subjected to thermal loading. The interaction of adjacent slabs in joints and the contact of the slab with the subsequent layer are modeled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints, together with the additional structural layers and soil to a depth of about 3 m, are modeled. The thicknesses of the individual layers, the physical and mechanical properties of the materials, the characteristics of the joints, and the temperatures of the upper and lower surfaces of the slabs are assumed to be random variables. The modern simulation technique Updated Latin Hypercube Sampling with 20 simulations is used. For the sensitivity analysis, a sensitivity coefficient based on the Spearman rank correlation coefficient is utilized. As a result, estimates of the influence of the random variability of individual input variables on the random variability of the principal stresses s1 and s3 at 53 points on the upper and lower surfaces of the concrete slabs are obtained.
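
A hedged sketch of the sampling-plus-rank-correlation idea: Latin Hypercube samples of the uncertain inputs are pushed through a response function and screened with Spearman rank correlations. The input names, bounds, and the placeholder response stand in for the study's actual random variables and 3D nonlinear FEM evaluation:

```python
import numpy as np
from scipy.stats import qmc, spearmanr

input_names = ["slab_thickness", "E_concrete", "subgrade_modulus", "temp_gradient"]
lower = np.array([0.20, 25e9, 40e6, -10.0])       # illustrative bounds
upper = np.array([0.30, 40e9, 120e6, 10.0])

sampler = qmc.LatinHypercube(d=len(input_names), seed=0)
X = qmc.scale(sampler.random(n=20), lower, upper)  # 20 simulations as in the text

def principal_stress(x):
    """Placeholder response standing in for the FEM evaluation of s1."""
    thickness, e_mod, k_sub, dT = x
    return 1e-3 * e_mod * abs(dT) / (k_sub * thickness ** 2)

s1 = np.array([principal_stress(x) for x in X])
for i, name in enumerate(input_names):
    rho, _ = spearmanr(X[:, i], s1)
    print(f"{name}: Spearman rho = {rho:+.2f}")
```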

Keywords: concrete, FEM, pavement, sensitivity, simulation

Procedia PDF Downloads 328
1934 Low Cost Inertial Sensors Modeling Using Allan Variance

Authors: A. A. Hussen, I. N. Jleta

Abstract:

Micro-electromechanical system (MEMS) accelerometers and gyroscopes are suitable for the inertial navigation systems (INS) of many applications due to their low price, small dimensions, and light weight. The main disadvantage, in comparison with classic sensors, is worse long-term stability. The estimation accuracy is mostly affected by the time-dependent growth of inertial sensor errors, especially the stochastic errors. In order to eliminate the negative effect of these random errors, they must be accurately modeled, and the key to a successful implementation is how well the noise statistics of the inertial sensors are characterized. In this paper, the Allan variance technique is used to model the stochastic errors of the inertial sensors. By performing a simple operation on the entire length of data, a characteristic curve is obtained whose inspection provides a systematic characterization of the various random errors contained in the inertial-sensor output data.
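
A hedged sketch of a non-overlapping Allan variance computation for a static gyro (or accelerometer) log; the sample rate and file are placeholders, and the noise terms (angle random walk, bias instability, rate random walk) would then be read off the slopes of the log-log Allan deviation curve:

```python
import numpy as np

def allan_variance(rate, fs, m_list):
    """rate: sensor output (e.g. deg/s); fs: sample rate in Hz; m_list: cluster sizes."""
    taus, avars = [], []
    for m in m_list:
        n_clusters = len(rate) // m
        if n_clusters < 2:
            break
        # average the signal over consecutive clusters of m samples
        cluster_means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avars.append(0.5 * np.mean(np.diff(cluster_means) ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)

fs = 100.0                                        # hypothetical 100 Hz log
gyro = np.loadtxt("mems_gyro_static.txt")         # long static recording (placeholder)
m_list = np.unique(np.logspace(0, np.log10(len(gyro) // 9), 50).astype(int))
taus, avars = allan_variance(gyro, fs, m_list)
print(np.column_stack([taus, np.sqrt(avars)]))    # Allan deviation vs averaging time
```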

Keywords: Allan variance, accelerometer, gyroscope, stochastic errors

Procedia PDF Downloads 440
1933 Coupling Random Demand and Route Selection in the Transportation Network Design Problem

Authors: Shabnam Najafi, Metin Turkay

Abstract:

The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as the capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while taking the route choice behaviors of network users into account at the same time. NDP studies have mostly investigated the random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the road capacities and origin zones simultaneously. Minimizing the travel and expansion costs of routes and origin zones on the one hand and minimizing CO emissions on the other are considered simultaneously in this analysis. In this work, the demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically the logit route choice model. Considering both demand and route choice as random makes the model more applicable for designing urban network programs. The epsilon-constraint method is one way to solve both linear and nonlinear multi-objective problems, and it is used to solve the problem in this work. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and treating the second objective as a constraint that should be less than an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved with GAMS, and the set of Pareto points is obtained; there are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
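
A hedged sketch of the two building blocks named above, the BPR link travel-time function and logit route-choice probabilities; the parameter values (alpha, beta, theta) are the usual textbook defaults and the toy routes are illustrative, not the paper's GAMS model:

```python
import numpy as np

def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """Bureau of Public Roads impedance: t = t0 * (1 + alpha * (v/c)^beta)."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

def logit_route_probabilities(route_times, theta=0.5):
    """Logit route choice for stochastic user equilibrium: P_k proportional to exp(-theta * t_k)."""
    utilities = -theta * np.asarray(route_times)
    expu = np.exp(utilities - utilities.max())     # numerically stable softmax
    return expu / expu.sum()

# Two routes between one O-D pair, each a list of links given as (t0, volume, capacity).
routes = [[(10.0, 800.0, 1000.0), (5.0, 400.0, 600.0)],
          [(12.0, 300.0, 900.0)]]
times = [sum(bpr_travel_time(*link) for link in route) for route in routes]
print("route times:", times)
print("choice probabilities:", logit_route_probabilities(times))
```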

Keywords: epsilon-constraint, multi-objective, network design, stochastic

Procedia PDF Downloads 647
1932 Classification of Contexts for Mentioning Love in Interviews with Victims of the Holocaust

Authors: Marina Yurievna Aleksandrova

Abstract:

Research on the Holocaust retains value not only for history but also for sociology and psychology. One of the most important fields of study is how people coped during and after this traumatic event. The aim of this paper is to identify the main contexts in which love is mentioned and to determine which contexts are more characteristic of different groups of Holocaust victims (by gender, nationality, and age). In this research, transcripts of interviews with Holocaust victims collected during 1946 for the "Voices of the Holocaust" project were used as data. The main contexts were analyzed with methods of network analysis and latent semantic analysis and classified by gender, age, and nationality with a random forest. The results show that love is articulated and described significantly differently by male and female informants, whereas classification by nationality, as well as by age, yields lower values of the quality metrics.
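
A hedged sketch of the classification strand of such a pipeline: TF-IDF weighting, latent semantic analysis via truncated SVD, and a random forest predicting the speaker group. The snippets, labels, and TF-IDF step are illustrative assumptions, and the network-analysis strand is omitted:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# placeholder interview passages mentioning love, with placeholder group labels
love_passages = [
    "passage about love for her mother during deportation",
    "passage about love of a friend in the camp",
    "passage about love for children after liberation",
    "passage about love of home and family before the war",
]
speaker_gender = ["female", "male", "female", "male"]

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=2, random_state=0),  # latent semantic space
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(love_passages, speaker_gender)
print(model.predict(["a passage about love of family"]))
```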

Keywords: Holocaust, latent semantic analysis, network analysis, text-mining, random forest

Procedia PDF Downloads 179