Search results for: random parameters
10422 Texture-Based Image Forensics from Video Frame
Authors: Li Zhou, Yanmei Fang
Abstract:
With current technology, images and videos can be obtained more easily than ever. Once obtained, this digital multimedia information is easy to manipulate, and the content or source of an image or video can be readily tampered with. In this paper, we propose to identify images and video frames by a texture-based approach, namely Markov Transition Probability (MTP), computed in the spatial domain, DCT domain and DWT domain, respectively. In the experiment, an image and video frame database is constructed and used to train and test a Support Vector Machine (SVM) classifier. Experimental results show that the texture-based approach performs well. To verify the experimental results and to test the universality and robustness of the algorithm, we build a random testing dataset; the random testing results are consistent with the above experiments.
Keywords: multimedia forensics, video frame, LBP, MTP, SVM
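As an illustration of the texture features named in the abstract, the sketch below builds a spatial-domain Markov transition probability matrix from truncated horizontal pixel differences and feeds it to an SVM. It is a minimal sketch of the general idea only, not the authors' exact feature definition; the threshold T, the use of horizontal differences, and the data variables are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def mtp_features(img, T=3):
    """Markov transition probabilities of truncated horizontal pixel differences."""
    d = np.diff(img.astype(int), axis=1)            # horizontal difference array
    d = np.clip(d, -T, T)                           # truncate differences to [-T, T]
    cur, nxt = d[:, :-1].ravel() + T, d[:, 1:].ravel() + T
    m = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(m, (cur, nxt), 1)                     # count first-order transitions
    m /= np.maximum(m.sum(axis=1, keepdims=True), 1)
    return m.ravel()                                # (2T+1)^2 texture feature vector

# Hypothetical data: imgs is a list of grayscale arrays, labels marks video frames vs. images
# X = np.array([mtp_features(img) for img in imgs])
# clf = SVC(kernel="rbf").fit(X, labels)
```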
Procedia PDF Downloads 428
10421 Investigating the Efficiency of Stratified Double Median Ranked Set Sample for Estimating the Population Mean
Authors: Mahmoud I. Syam
Abstract:
The stratified double median ranked set sampling (SDMRSS) method is suggested for estimating the population mean. SDMRSS is compared with simple random sampling (SRS), stratified simple random sampling (SSRS), and stratified ranked set sampling (SRSS). It is shown that the SDMRSS estimator is an unbiased estimator of the population mean and is more efficient than SRS, SSRS, and SRSS. Also, by SDMRSS, we can increase the efficiency of the mean estimator for a specific value of the sample size. SDMRSS is applied to real-life examples, and the results of the examples agree with the theoretical results.
Keywords: efficiency, double ranked set sampling, median ranked set sampling, ranked set sampling, stratified
Procedia PDF Downloads 247
10420 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis (PCA) of various world indices, and an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying PCA. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets -in our paper, a pool of international stock indices- and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
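Since the abstract describes extracting uncorrelated factors from a matrix of world index returns by PCA, a brief sketch of that step is given below. It assumes hypothetical return data and the scikit-learn PCA implementation; it is not the authors' calibration code.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for the T x N matrix of daily returns of world indices
returns = np.random.default_rng(0).normal(size=(1000, 12))

pca = PCA(n_components=3)                  # PCA centres the data internally
factors = pca.fit_transform(returns)       # three exogenous GARCHX factors
print(pca.explained_variance_ratio_)       # relative importance of each component
# np.corrcoef(factors.T) is (numerically) diagonal: the extracted factors are uncorrelated
```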
Procedia PDF Downloads 297
10419 Optimization of Friction Stir Spot Welding Process Parameters for Joining 6061 Aluminum Alloy Using Taguchi Method
Authors: Mohammed A. Tashkandi, Jawdat A. Al-Jarrah, Masoud Ibrahim
Abstract:
This paper investigates the shear strength of joints produced by the friction stir spot welding (FSSW) process. FSSW parameters such as tool rotational speed, plunge depth, shoulder diameter of the welding tool and dwell time play the major role in determining the shear strength of the joints. The effect of these four parameters on the FSSW process as well as on the shear strength of the welded joints was studied via five levels of each parameter. The Taguchi method was used to minimize the number of experiments required to determine the fracture load of the friction stir spot-welded joints by incorporating independently controllable FSSW parameters. Taguchi analysis was applied to optimize the FSSW parameters to attain the maximum shear strength of the spot weld for this type of aluminum alloy.
Keywords: friction stir spot welding, Al6061 alloy, shear strength, FSSW process parameters
Procedia PDF Downloads 434
10418 Leveraging SHAP Values for Effective Feature Selection in Peptide Identification
Authors: Sharon Li, Zhonghang Xia
Abstract:
Post-database search is an essential phase in peptide identification using tandem mass spectrometry (MS/MS) to refine peptide-spectrum matches (PSMs) produced by database search engines. These engines frequently face difficulty differentiating between correct and incorrect peptide assignments. Despite advances in statistical and machine learning methods aimed at improving the accuracy of peptide identification, challenges remain in selecting critical features for these models. In this study, two machine learning models—a random forest tree and a support vector machine—were applied to three datasets to enhance PSMs. SHAP values were utilized to determine the significance of each feature within the models. The experimental results indicate that the random forest model consistently outperformed the SVM across all datasets. Further analysis of SHAP values revealed that the importance of features varies depending on the dataset, indicating that a feature's role in model predictions can differ significantly. This variability in feature selection can lead to substantial differences in model performance, with false discovery rate (FDR) differences exceeding 50% between different feature combinations. Through SHAP value analysis, the most effective feature combinations were identified, significantly enhancing model performance.
Keywords: peptide identification, SHAP value, feature selection, random forest tree, support vector machine
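A minimal sketch of the SHAP-based ranking described above is given below, assuming the open-source shap package with a tree explainer, a random forest classifier, and hypothetical PSM feature data; the mean absolute SHAP value per feature is used as the importance score.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical PSM data: X holds search-engine scores, y marks correct (1) vs. decoy (0) matches
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 8)), rng.integers(0, 2, 500)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)     # per-sample, per-feature contributions

sv = np.array(shap_values[1] if isinstance(shap_values, list) else shap_values)
importance = np.abs(sv).mean(axis=0)       # mean |SHAP| per feature
if importance.ndim > 1:                    # newer shap versions: (samples, features, classes)
    importance = importance.mean(axis=-1)
ranking = np.argsort(importance)[::-1]     # candidate order for feature selection
```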
Procedia PDF Downloads 30
10417 Hydrodynamic Characteristics of Single and Twin Offshore Rubble Mound Breakwaters under Regular and Random Waves
Authors: M. Alkhalidi, S. Neelamani, Z. Al-Zaqah
Abstract:
This paper investigates the interaction of single and twin offshore rubble mound breakwaters with regular and random water waves through physical modeling to assess their reflection, transmission and energy dissipation characteristics. Various combinations of wave heights and wave periods were utilized in a series of experiments, along with three different water depths. The single and twin permeable breakwater models were both constructed with one layer of rubble. Both models had the same total volume; however, the single breakwater was of trapezoidal type while the twin breakwaters were of triangular type. The physical modeling experiments were carried out in the wave flume of the coastal engineering laboratory of the Kuwait Institute for Scientific Research (KISR). Measurements from the six wave probes fixed in the two-dimensional wave flume were collected and used to determine the generated incident wave heights, as well as the reflected and transmitted wave heights resulting from the wave-breakwater interaction. The possible factors affecting the wave attenuation efficiency of the breakwater models are the relative water depth (d/L), wave steepness (H/L), relative wave height ((h-d)/Hi), relative height of the breakwater (h/d), and relative clear spacing between the twin breakwaters (S/h). The results indicated that the single and double breakwaters show different responses to changes in their relative height as well as the relative wave height, which demonstrates that the effect of the relative water depth on wave reflection, transmission, and energy dissipation is highly influenced by changes in the relative breakwater height, the relative wave height and the relative breakwater spacing. In general, within the range of relative water depths tested in this study, and under both regular and random waves, it is found that the single breakwater allows lower wave transmission and shows a higher energy dissipation effect than both of the tested twin breakwaters, and hence has the best overall performance.
Keywords: random waves, regular waves, relative water depth, relative wave height, single breakwater, twin breakwater, wave steepness
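For reference, the coefficients named in the abstract are commonly computed from the incident, reflected and transmitted wave heights as sketched below; the energy-balance form of the dissipation coefficient and the illustrative wave heights are assumptions, not values from the study.

```python
import numpy as np

def breakwater_coefficients(Hi, Hr, Ht):
    """Reflection, transmission and energy-dissipation coefficients from wave heights."""
    Kr, Kt = Hr / Hi, Ht / Hi
    Kd = np.sqrt(max(1.0 - Kr**2 - Kt**2, 0.0))   # energy balance: Kr^2 + Kt^2 + Kd^2 = 1
    return Kr, Kt, Kd

print(breakwater_coefficients(Hi=0.12, Hr=0.04, Ht=0.05))   # illustrative heights in metres
```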
Procedia PDF Downloads 328
10416 Facial Recognition on the Basis of Facial Fragments
Authors: Tetyana Baydyk, Ernst Kussul, Sandra Bonilla Meza
Abstract:
There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give an approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and to experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as the method for face recognition and parts of the LFW (Labeled Faces in the Wild) face database as the training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.
Keywords: face recognition, labeled faces in the wild (LFW) database, random local descriptor (RLD), random features
Procedia PDF Downloads 361
10415 Code Mixing and Code-Switching Patterns in Kannada-English Bilingual Children and Adults Who Stutter
Authors: Vasupradaa Manivannan, Santosh Maruthy
Abstract:
Background/Aims: Preliminary evidence suggests that code-switching and code-mixing may act as voluntary coping behaviors to avoid stuttering characteristics in children and adults; however, less is known about the types and patterns of code-mixing (CM) and code-switching (CS). Further, it is not known how these differ between children and adults who stutter. This study aimed to identify and compare the CM and CS patterns of Kannada-English bilingual children and adults who stutter. Method: A standard group comparison was made between five children who stutter (CWS) in the age range of 9-13 years and five adults who stutter (AWS) in the age range of 20-25 years. Participants who are proficient in Kannada (first language, L1) and English (second language, L2) were considered for the study. Two tasks were given to both groups: a) a general conversation (GC) task with 10 random questions, and b) a narration (NAR) task (story / general topic, for example, a memorable life event) in three different conditions: Mono Kannada (MK), Mono English (ME), and Bilingual (BIL). The children and adults were assessed online (via a Zoom session) with a high-quality internet connection. The audio and video samples of the full assessment session were auto-recorded and manually transcribed. The recorded samples were analyzed for the percentage of dysfluencies using SSI-4, and the CM and CS exhibited by each participant were analyzed using the Matrix Language Frame (MLF) model parameters. The obtained data were analyzed using the Statistical Package for the Social Sciences (SPSS) software package (Version 20.0). Results: The mean, median, and standard deviation values were obtained for the percentage of dysfluencies (%SS) and the frequency of CM and CS in Kannada-English bilingual children and adults who stutter for the various parameters obtained through the MLF model. The inferential results indicated that %SS varied significantly between populations (AWS vs CWS), languages (L1 vs L2), and tasks (GC vs NAR), but not across free (BIL) and bound (MK, ME) conditions. It was also found that the frequency of CM and CS patterns varies between CWS and AWS. The AWS had a lower %SS but greater use of CS patterns than CWS, which is due to their excessive coping skills. Language mixing patterns were observed more in L1 than in L2, and this was significant for most of the MLF parameters. However, there was a significantly higher (P<0.05) %SS in L2 than in L1. The CM and CS patterns were more frequent in conditions 1 and 3 than in condition 2, which may be due to the higher proficiency in L2 than in L1. Conclusion: The findings highlight the importance of assessing CM and CS behaviors, their patterns, and the frequency of CM and CS in CWS and AWS on the MLF parameters in two different tasks across three conditions. The results help us to understand CM and CS strategies in bilingual persons who stutter.
Keywords: bilinguals, code mixing, code switching, stuttering
Procedia PDF Downloads 78
10414 Security of Database Using Chaotic Systems
Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem
Abstract:
Database (DB) security demands permitting authorized users and prohibiting non-authorized users' and intruders' actions on the DB and the objects inside it. Organizations that are running successfully demand the confidentiality of their DBs. They do not allow unauthorized access to their data/information. They also demand the assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are the security concerns. There are four types of controls to obtain DB protection: access control, information flow control, inference control, and cryptographic control. The cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communications. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rossler, Lorenz, etc.) or discrete (Logistic, Henon, etc.) algorithms. The important characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. The Pseudo Random Number Generators (PRNGs) from the different chaotic algorithms are implemented using Matlab, and their statistical properties are evaluated using NIST and other statistical test suites. Then, these algorithms are used to secure a conventional DB (plaintext), where the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic Logistic maps and another based on two chaotic Henon maps, where each chaotic algorithm runs side by side, starting from random independent initial conditions and parameters (encryption keys). The resulting hybrid PRNGs passed the NIST statistical test suite.
Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST
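The sketch below illustrates the idea of a hybrid PRNG built from two logistic maps running side by side from independent initial conditions (the keys), with the two streams combined into one keystream. It is a simplified Python illustration rather than the authors' Matlab implementation, and the byte quantisation and XOR combination are assumptions.

```python
def hybrid_logistic_prng(x1, x2, r1=3.99, r2=3.97, n_bytes=1024):
    """Two logistic maps run side by side; their byte streams are XOR-combined."""
    out = bytearray()
    for _ in range(n_bytes):
        x1 = r1 * x1 * (1.0 - x1)      # logistic map 1
        x2 = r2 * x2 * (1.0 - x2)      # logistic map 2
        b1 = int(x1 * 256) & 0xFF      # crude quantisation of each state to one byte
        b2 = int(x2 * 256) & 0xFF
        out.append(b1 ^ b2)            # hybrid output byte
    return bytes(out)

keystream = hybrid_logistic_prng(x1=0.4123, x2=0.7031)   # initial conditions act as the key
cipher = bytes(p ^ k for p, k in zip(b"plaintext DB record", keystream))
```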
Procedia PDF Downloads 265
10413 Reliability Modeling on Drivers’ Decision during Yellow Phase
Authors: Sabyasachi Biswas, Indrajit Ghosh
Abstract:
The random and heterogeneous behavior of vehicles in India poses a great challenge for researchers. Stop-and-go modeling at signalized intersections under heterogeneous traffic conditions has remained one of the most sought-after fields. Vehicles are often caught up in the dilemma zone and are unable to make quick decisions on whether to stop or cross the intersection. This hampers traffic movement and may lead to accidents. The purpose of this work is to develop a stop-and-go prediction model that depicts the drivers' decision during the yellow time at signalized intersections. To accomplish this, certain traffic parameters were taken into account to develop a surrogate model. This research investigated the stop-and-go behavior of drivers by collecting data from four signalized intersections located in two major Indian cities. A model was developed to predict the drivers' decision-making during the yellow phase of the traffic signal. The parameters used for modeling included distance to stop line, time to stop line, speed, and length of the vehicle. A Kriging-based surrogate model has been developed to investigate the drivers' decision-making behavior in the amber phase. It is observed that the proposed approach yields a highly accurate result (97.4 percent) with the Gaussian function. The accuracy for the crossing probability was 95.45, 90.9 and 86.36 percent, respectively, as predicted by the Kriging models with Gaussian, exponential and linear functions.
Keywords: decision-making, dilemma zone, surrogate model, Kriging
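Because Kriging with a Gaussian correlation function is closely related to Gaussian process modeling with an RBF kernel, the stop/go prediction step can be sketched as below. The scikit-learn classes are used as documented; the data values are hypothetical, and the model is only an illustration of the approach, not the study's calibrated surrogate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Hypothetical observations: distance to stop line (m), time to stop line (s),
# speed (m/s), vehicle length (m); 1 = crossed on yellow, 0 = stopped
X = np.array([[12.0, 1.1, 11.0, 4.2], [45.0, 4.0, 11.2, 4.5],
              [20.0, 1.8, 11.1, 9.8], [60.0, 6.1,  9.8, 4.0]])
y = np.array([1, 0, 1, 0])

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=10.0)).fit(X, y)
print(gpc.predict_proba([[30.0, 2.5, 12.0, 4.3]]))   # crossing probability for a new vehicle
```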
Procedia PDF Downloads 309
10412 Sensitivity Analysis of Principal Stresses in Concrete Slab of Rigid Pavement Made From Recycled Materials
Authors: Aleš Florian, Lenka Ševelová
Abstract:
A complex sensitivity analysis of stresses in a concrete slab of a real type of rigid pavement made from recycled materials is performed. The computational model of the pavement is designed as a spatial (3D) model, is based on a nonlinear variant of the finite element method that respects the structural nonlinearity, enables modeling of different arrangements of joints, and the entire model can be loaded by thermal load. Interaction of adjacent slabs in joints and contact of the slab and the subsequent layer are modeled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints and the additional structural layers and soil to a depth of about 3 m are modeled. The thickness of individual layers, the physical and mechanical properties of materials, the characteristics of joints, and the temperature of the upper and lower surfaces of the slabs are assumed to be random variables. The modern simulation technique Updated Latin Hypercube Sampling with 20 simulations is used. For the sensitivity analysis, the sensitivity coefficient based on the Spearman rank correlation coefficient is utilized. As a result, estimates of the influence of the random variability of individual input variables on the random variability of the principal stresses s1 and s3 at 53 points on the upper and lower surfaces of the concrete slabs are obtained.
Keywords: concrete, FEM, pavement, sensitivity, simulation
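A compact sketch of the sampling-and-sensitivity step is given below: Latin Hypercube samples of the random inputs are propagated through a response function, and Spearman rank correlation coefficients serve as sensitivity coefficients. The stand-in response function replaces the nonlinear FEM model, which is an assumption for illustration only.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

def latin_hypercube(n, k):
    """n samples in k dimensions, one sample per equal-probability stratum per dimension."""
    cut = (np.arange(n)[:, None] + rng.random((n, k))) / n
    for j in range(k):
        cut[:, j] = rng.permutation(cut[:, j])
    return cut

X = latin_hypercube(20, 3)       # e.g. layer thickness, elastic modulus, surface temperature
# y = np.array([fem_principal_stress(x) for x in X])   # FEM response (placeholder)
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 20)   # stand-in response
sens = [spearmanr(X[:, j], y).correlation for j in range(X.shape[1])]
print(sens)                      # Spearman-based sensitivity coefficients
```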
Procedia PDF Downloads 330
10411 Suitability Number of Coarse-Grained Soils and Relationships among Fineness Modulus, Density and Strength Parameters
Authors: Khandaker Fariha Ahmed, Md. Noman Munshi, Tarin Sultana, Md. Zoynul Abedin
Abstract:
Suitability number (SN) is perhaps one of the most important parameters of coarse-grained soil in assessing its appropriateness for use as a backfill in retaining structures, sand compaction piles, vibro compaction, and other similar foundation and ground improvement works. Though determined in an empirical manner, it is imperative to study SN to understand its relation with other aggregate properties like fineness modulus (FM) and the strength and density properties of sandy soil. The present paper reports the findings of a study examining these properties of sandy soil. Random numbers were generated to obtain the percent fineness on various sieve sizes, and fineness modulus and suitability numbers were predicted. Sand samples were collected from the field, and test samples were prepared to determine the maximum density, minimum density and shear strength parameter φ against a particular fineness modulus and the corresponding suitability number. Five samples with SN values rated excellent (0-10) and three samples with SN values rated fair (20-30) were taken, and the relevant tests were done. The data obtained from the laboratory tests were statistically analyzed. Results show that with the increase of SN, the value of FM decreases. Within the SN range rated as excellent (0-10), there is a decreasing trend of φ for higher values of SN. It is found that SN depends on various combinations of grain size properties like D10, D30, and D20, D50. Strong linear relationships were obtained between SN and FM (R² = 0.93) and between SN and φ (R² = 0.94). Correlation equations are proposed to define relationships among SN, φ, and FM.
Keywords: density, fineness modulus, shear strength parameter, suitability number
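For context, a commonly used empirical form of the suitability number (Brown's formula, with grain sizes in mm) and the usual definition of the fineness modulus are sketched below; the exact formula and sieve set used in the study are not stated in the abstract, so these are assumptions.

```python
import math

def suitability_number(d50, d20, d10):
    """Brown's suitability number; characteristic grain sizes D50, D20, D10 in mm."""
    return 1.7 * math.sqrt(3.0 / d50**2 + 1.0 / d20**2 + 1.0 / d10**2)

def fineness_modulus(cumulative_retained):
    """Sum of cumulative % retained on the standard sieve set divided by 100."""
    return sum(cumulative_retained) / 100.0

print(suitability_number(d50=0.60, d20=0.30, d10=0.18))   # illustrative sieve data
```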
Procedia PDF Downloads 105
10410 Room Level Indoor Localization Using Relevant Channel Impulse Response Parameters
Authors: Raida Zouari, Iness Ahriz, Rafik Zayani, Ali Dziri, Ridha Bouallegue
Abstract:
This paper proposes a room-level indoor localization algorithm based on the use of Multi-Layer Neural Network (MLNN) classifiers and a one-versus-one strategy. Seven parameters of the Channel Impulse Response (CIR) were used, and Gram-Schmidt orthogonalization was performed to study the relevance of the extracted parameters. Simulation results show that when relevant CIR parameters are used as the position fingerprint and when the optimal MLNN architecture is selected, a good room-level localization score can be achieved. The current study also showed that some of the CIR parameters are not correlated with the location and can decrease the localization performance of the system.
Keywords: mobile indoor localization, multi-layer neural network (MLNN), channel impulse response (CIR), Gram-Schmidt orthogonalization
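The relevance-screening step can be illustrated with a classical Gram-Schmidt orthogonalization of the CIR parameter columns, as sketched below with hypothetical fingerprint data; this is a generic implementation, not the authors' exact procedure.

```python
import numpy as np

def gram_schmidt(X, tol=1e-10):
    """Orthogonalise CIR parameter columns; near-zero residual norms flag redundant parameters."""
    Q = []
    for col in X.T:
        v = col.astype(float)
        for q in Q:
            v = v - (q @ v) * q            # remove the part already explained by earlier columns
        n = np.linalg.norm(v)
        Q.append(v / n if n > tol else np.zeros_like(v))
    return np.array(Q).T

# Hypothetical fingerprint data: n measurements x 7 CIR parameters
X = np.random.default_rng(0).normal(size=(50, 7))
Q = gram_schmidt(X)
```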
Procedia PDF Downloads 362
10409 Parameters Tuning of a PID Controller on a DC Motor Using Honey Bee and Genetic Algorithms
Authors: Saeid Jalilzadeh
Abstract:
PID controllers are widely used to control industrial plants because of their robustness and simple structure. Tuning the controller's parameters to obtain a desired response is difficult and time-consuming. With the development of computer technology and artificial intelligence in the automatic control field, all kinds of PID parameter tuning methods have emerged endlessly, bringing much energy to the study of PID controllers, but many advanced tuning methods do not perform as well as expected. The Honey Bee Algorithm (HBA) and the genetic algorithm (GA) are extensively used for real-parameter optimization in diverse fields of study. This paper describes an application of HBA and GA to the problem of designing a PID controller whose parameters comprise the proportionality constant, integral constant and derivative constant. The presence of three parameters to optimize makes the task of designing a PID controller more challenging than designing conventional P, PI, and PD controllers. The suitability of the proposed approach has been demonstrated through computer simulation using MATLAB/SIMULINK.
Keywords: controller, GA, optimization, PID, PSO
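A minimal sketch of GA-based PID tuning is shown below: a population of (Kp, Ki, Kd) candidates is evolved to minimize the integral of absolute error of a step response on a simple illustrative second-order plant. The plant, the fitness measure and the GA settings are assumptions standing in for the paper's MATLAB/SIMULINK setup, and the HBA variant is not shown.

```python
import numpy as np
rng = np.random.default_rng(0)

def step_iae(kp, ki, kd, dt=0.01, t_end=5.0):
    """Integral of absolute error for a unit step on an illustrative second-order plant."""
    x = v = integ = prev_e = 0.0
    iae = 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - x
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        a = u - 2.0 * v - x            # plant: x'' + 2 x' + x = u (Euler integration)
        v += a * dt
        x += v * dt
        iae += abs(e) * dt
    return iae

pop = rng.uniform(0.0, 20.0, size=(30, 3))                    # candidate (Kp, Ki, Kd) triples
for gen in range(40):
    fitness = np.array([step_iae(*ind) for ind in pop])
    best = pop[np.argsort(fitness)[:10]]                      # selection of the fittest
    parents = best[rng.integers(0, 10, size=(30, 2))]
    w = rng.random((30, 1))
    pop = w * parents[:, 0] + (1 - w) * parents[:, 1]         # arithmetic crossover
    pop += rng.normal(0.0, 0.5, pop.shape) * (rng.random(pop.shape) < 0.2)   # mutation
    pop = np.clip(pop, 0.0, 20.0)
print(best[0])   # tuned gains from the last evaluated generation
```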
Procedia PDF Downloads 544
10408 Low Cost Inertial Sensors Modeling Using Allan Variance
Authors: A. A. Hussen, I. N. Jleta
Abstract:
Micro-electromechanical system (MEMS) accelerometers and gyroscopes are suitable for the inertial navigation systems (INS) of many applications due to their low price, small dimensions and light weight. The main disadvantage in comparison with classic sensors is worse long-term stability. The estimation accuracy is mostly affected by the time-dependent growth of inertial sensor errors, especially the stochastic errors. In order to eliminate the negative effect of these random errors, they must be accurately modeled, and the key to successful implementation is how well the noise statistics of the inertial sensors are characterized. In this paper, the Allan variance technique is used to model the stochastic errors of the inertial sensors. By performing a simple operation on the entire length of data, a characteristic curve is obtained whose inspection provides a systematic characterization of the various random errors contained in the inertial-sensor output data.
Keywords: Allan variance, accelerometer, gyroscope, stochastic errors
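The "simple operation on the entire length of data" can be sketched as a non-overlapping Allan variance computed over increasing cluster sizes, as below; the sampling rate, cluster sizes and simulated gyro record are assumptions for illustration.

```python
import numpy as np

def allan_variance(rate, fs, m_list):
    """Non-overlapping Allan variance of a rate signal sampled at fs, for cluster sizes m."""
    results = []
    for m in m_list:
        n = len(rate) // m
        means = rate[: n * m].reshape(n, m).mean(axis=1)   # cluster averages over tau = m / fs
        results.append((m / fs, 0.5 * np.mean(np.diff(means) ** 2)))
    return results

# Hypothetical static gyroscope record (deg/s) sampled at 100 Hz
gyro = np.random.default_rng(0).normal(0.0, 0.05, 360000)
for tau, avar in allan_variance(gyro, fs=100.0, m_list=[1, 10, 100, 1000, 10000]):
    print(tau, np.sqrt(avar))   # Allan deviation; slope regions identify the noise terms
```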
Procedia PDF Downloads 442
10407 Effect of Electron Beam Irradiated Cottonseed Meal on Carcass and Blood Parameters of Broiler Chickens
Authors: Somayyeh Salari, Marziyeh Nayefi, Mohsen Sari, Mehdi Behgar
Abstract:
This study was conducted to evaluate the effect of electron beam-irradiated cottonseed meal at a dose of 30 kGy on the carcass characteristics and some blood parameters of broiler chicks. Various levels of cottonseed meal (CSM) (0, 12, and 24%, irradiated and non-irradiated) were used, giving 5 dietary treatments with 4 replicates and 10 birds each, for 42 days in a completely randomized design. At 42 d of age, two birds per pen were randomly selected for determination of carcass characteristics and blood parameters. Relative weights of the liver, gastrointestinal tract (GI), pancreas, gizzard and abdominal fat increased with increasing levels of CSM in the diet (p<0.05). Glucose, cholesterol, HDL, triglyceride, and phosphorus concentrations increased and LDL concentration decreased as the dietary CSM levels increased (p<0.05), but irradiation had no significant effect on blood parameters. Electron irradiation seems to be a good procedure to improve the nutritional quality of CSM, but it seems a higher dose is needed to improve the blood parameters of chickens.
Keywords: blood parameters, carcass characteristics, cottonseed meal, electron beam
Procedia PDF Downloads 484
10406 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: Sahand Golmohammadi, Sana Hosseini Shirazi
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required, namely the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and stress reduction factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the effective parameters for the stability of the mentioned structures is one of the most important goals and most necessary actions in rock engineering. Therefore, it is necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, the whole system. In this research, an attempt has been made to determine the most effective parameters (key parameters) among the six rock mass parameters of the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is, in fact, a method by which one can determine the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of using conventional methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is actually a statistical analysis of the data, determining the correlation coefficients between them. Thus, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the formed interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, the SRF and Jr parameters have the maximum and minimum impact on the system, respectively, while the RQD and Jw parameters are the most and least affected by the system, respectively. Therefore, by developing this method, we can obtain a more accurate relation for rock mass classification by weighting the required parameters in the Q-system.
Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel
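For reference, the six parameters enter the rock mass quality index through the standard Q-system relation, sketched below with illustrative ratings (not values from the Azad Dam tunnel).

```python
def q_value(rqd, jn, jr, ja, jw, srf):
    """Barton Q-value: block size x inter-block shear strength x active stress terms."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

print(q_value(rqd=65, jn=9, jr=1.5, ja=2.0, jw=1.0, srf=2.5))   # illustrative ratings only
```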
Procedia PDF Downloads 73
10405 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while considering the route choice behavior of network users at the same time. NDP studies have mostly investigated the random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. This model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time function on each link. We consider a city with origin and destination nodes, which can be residential, employment or both. There is a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the amount of roads and origin zones simultaneously. Minimizing the travel and expansion cost of routes and origin zones on one side and minimizing CO emission on the other side are considered in this analysis at the same time. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically the logit route choice model. Considering both demand and route choice as random is more applicable to the design of urban network programs. The epsilon-constraint method is one of the methods for solving both linear and nonlinear multi-objective problems, and it is used to solve the problem in this work. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and treating the second objective as a constraint that should be less than an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon should change from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones and 7 links was solved with GAMS, and the set of Pareto points was obtained. There are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
Keywords: epsilon-constraint, multi-objective, network design, stochastic
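The BPR link impedance function mentioned above is sketched below in its standard form, with the usual default coefficients; the coefficient values and the illustrative link data are assumptions.

```python
def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """BPR link impedance: free-flow time inflated by the volume/capacity ratio."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

print(bpr_travel_time(t0=5.0, volume=1400, capacity=1800))   # minutes on an illustrative link
```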
Procedia PDF Downloads 648
10404 Classification of Contexts for Mentioning Love in Interviews with Victims of the Holocaust
Authors: Marina Yurievna Aleksandrova
Abstract:
Research on the Holocaust retains value not only for history but also for sociology and psychology. One of the most important fields of study is how people coped during and after this traumatic event. The aim of this paper is to identify the main contexts in which love is mentioned and to determine which contexts are more characteristic of different groups of Holocaust victims (by gender, nationality, and age). In this research, transcripts of interviews with Holocaust victims collected during 1946 for the "Voices of the Holocaust" project were used as data. The main contexts were analyzed with methods of network analysis and latent semantic analysis and classified by gender, age, and nationality with a random forest. The results show that love is articulated and described significantly differently by male and female informants, while classification by nationality, as well as by age, showed lower values of the quality metrics.
Keywords: Holocaust, latent semantic analysis, network analysis, text-mining, random forest
Procedia PDF Downloads 181
10403 Analysis of the Engineering Judgement Influence on the Selection of Geotechnical Parameters Characteristic Values
Authors: K. Ivandic, F. Dodigovic, D. Stuhec, S. Strelec
Abstract:
A characteristic value of a certain geotechnical parameter results from an engineering assessment. Its selection has to be based on technical principles and standards of engineering practice. It has been shown that the results of the engineering assessments of different authors for the same problem and input data are significantly dispersed. A survey was conducted in which participants had to estimate the force that causes a 10 cm displacement at the top of an axially compressed in-situ pile. Fifty experts from all over the world took part in it. The lowest estimated force value was 42% and the highest was 133% of the measured force resulting from the mentioned static pile load test. These extreme values result in significantly different technical solutions to the same engineering task. When selecting a characteristic value of a geotechnical parameter, the influence of the engineering assessment can be reduced by using statistical methods. An informative annex of Eurocode 1 prescribes the method of selecting the characteristic values of material properties. This is followed by Eurocode 7 with certain specificities linked to selecting characteristic values of geotechnical parameters. The paper shows the procedure of selecting characteristic values of a geotechnical parameter using a statistical method with different initial conditions. The aim of the paper is to quantify the engineering assessment using the example of determining a characteristic value of a specific geotechnical parameter. It is assumed that this assessment is a random variable and that its statistical features will be determined. For this purpose, a survey was conducted among relevant experts from the field of geotechnical engineering. Finally, the results of the survey and the application of the statistical method are compared.
Keywords: characteristic values, engineering judgement, Eurocode 7, statistical methods
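One common statistical formulation for such a characteristic value is the 5% fractile X_k = m_X(1 - k_n V_X), with k_n derived from the Student t distribution when the coefficient of variation is unknown; a sketch is given below. The sample values are illustrative, and this is not necessarily the exact procedure applied in the paper.

```python
import numpy as np
from scipy.stats import t

def characteristic_value(samples):
    """5% fractile X_k = m_X (1 - k_n V_X), V_X unknown, normality assumed."""
    x = np.asarray(samples, float)
    n = len(x)
    kn = t.ppf(0.95, n - 1) * np.sqrt(1.0 / n + 1.0)   # reproduces the tabulated k_n values
    return x.mean() - kn * x.std(ddof=1)

# Illustrative measured friction angles (degrees) from a small site investigation
print(characteristic_value([31.0, 29.5, 33.2, 30.8, 32.1]))
```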
Procedia PDF Downloads 297
10402 Modeling and Optimization of Performance of Four Stroke Spark Ignition Injector Engine
Authors: A. A. Okafor, C. H. Achebe, J. L. Chukwuneke, C. G. Ozoegwu
Abstract:
The performance of an engine whose basic design parameters are known can be predicted with the assistance of simulation programs in less time, at lower cost, and with values close to the actual ones. This paper presents a comprehensive mathematical model of the performance parameters of a four stroke spark ignition engine. The essence of this research work is to develop a mathematical model for the analysis of the engine performance parameters of a four stroke spark ignition engine before embarking on full scale construction. This will ensure that only optimal parameters enter the design and development of the engine, and it also allows the design of the engine and its operating alternatives to be checked and developed in an inexpensive way and in less time, instead of using the experimental method, which requires costly research test beds. To achieve this, equations were derived which describe the performance parameters (sfc, thermal efficiency, mep and A/F). The equations were used to simulate and optimize the engine performance of the model for various engine speeds. The optimal values obtained for the developed bivariate mathematical models are: sfc = 0.2833 kg/kWh, thermal efficiency = 28.77% and A/F = 20.75.
Keywords: bivariate models, engine performance, injector engine, optimization, performance parameters, simulation, spark ignition
Procedia PDF Downloads 327
10401 Physical Characterization of a Watershed for Correlation with Parameters of Thomas Hydrological Model and Its Application in Iber Hydrodynamic Model
Authors: Carlos Caro, Ernest Blade, Nestor Rojas
Abstract:
This study determined the relationship between basic geotechnical parameters and the parameters of the Thomas hydrological model for the water balance of rural watersheds, as a methodological calibration application applicable to distributed models such as the IBER model, a distributed simulation model for unsteady free-surface flow. An exploration was carried out at 25 points (in 15 sub-basins) of the Rio Piedras basin (Boy.), obtaining soil samples on which geotechnical characterization was performed by laboratory tests. The Thomas model gives a physical characterization of the input area by only four parameters (a, b, c, d). Achieving a measurable relationship between the geotechnical parameters and the four values of the hydrological parameters helps to determine subsurface, underground and surface flow in a more agile manner. It is intended in this way to reach some solutions regarding limits on the initial model parameters on the basis of the geotechnical characterization. In hydrogeological models of rural watersheds, calibration is an important process in the characterization of the study area. This step can require significant computational cost and time, especially if the initial values or parameters before calibration are outside of the geotechnical reality. A better choice of these initial values means optimizing this process through the geotechnical characterization of the area, from which an important starting range of variation for the calibration parameters is obtained for the study.
Keywords: distributed hydrology, hydrological and geotechnical characterization, Iber model
Procedia PDF Downloads 522
10400 Rounding Technique's Application in Schnorr Signature Algorithm: Known Partially Most Significant Bits of Nonce
Authors: Wenjie Qin, Kewei Lv
Abstract:
In 1996, Boneh and Venkatesan proposed the Hidden Number Problem (HNP) and proved that the most significant bits (MSB) of the computational Diffie-Hellman key exchange scheme and related schemes are unpredictable bits. They also gave a method, a lattice rounding technique, to solve HNP in a non-uniform model. In this paper, we put forward a new concept, Schnorr-MSB-HNP. We also reduce the problem of solving the Schnorr signature private key, given a few consecutive most significant bits of the random nonce (used at each signature generation), to Schnorr-MSB-HNP, and then we use the rounding technique to solve the Schnorr-MSB-HNP. We come to the conclusion that if there is a 'miraculous box' which takes the random nonce as input and outputs the 2loglogq (q is a prime number) most significant bits of the nonce, the signature private key can be obtained by choosing 2logq signature messages randomly. Thus we obtain an attack on the Schnorr signature private key.
Keywords: rounding technique, most significant bits, Schnorr signature algorithm, nonce, Schnorr-MSB-HNP
Procedia PDF Downloads 234
10399 Efficient Credit Card Fraud Detection Based on Multiple ML Algorithms
Authors: Neha Ahirwar
Abstract:
In the contemporary digital era, the rise of credit card fraud poses a significant threat to both financial institutions and consumers. As fraudulent activities become more sophisticated, there is an escalating demand for robust and effective fraud detection mechanisms. Advanced machine learning algorithms have become crucial tools in addressing this challenge. This paper conducts a thorough examination of the design and evaluation of a credit card fraud detection system, utilizing four prominent machine learning algorithms: random forest, logistic regression, decision tree, and XGBoost. The surge in digital transactions has opened avenues for fraudsters to exploit vulnerabilities within payment systems. Consequently, there is an urgent need for proactive and adaptable fraud detection systems. This study addresses this imperative by exploring the efficacy of machine learning algorithms in identifying fraudulent credit card transactions. The selection of random forest, logistic regression, decision tree, and XGBoost for scrutiny in this study is based on their documented effectiveness in diverse domains, particularly in credit card fraud detection. These algorithms are renowned for their capability to model intricate patterns and provide accurate predictions. Each algorithm is implemented and evaluated for its performance in a controlled environment, utilizing a diverse dataset comprising both genuine and fraudulent credit card transactions.
Keywords: efficient credit card fraud detection, random forest, logistic regression, XGBoost, decision tree
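A minimal sketch of the four-model comparison described above is given below, using scikit-learn and the xgboost package on a synthetic, imbalanced stand-in dataset; the dataset, hyperparameters and evaluation split are assumptions, not the study's setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

# Synthetic, imbalanced stand-in for a labelled transaction set (1 = fraud)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "xgboost": XGBClassifier(n_estimators=200),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te), digits=3))
```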
Procedia PDF Downloads 68
10398 Sensitivity Analysis of Pile-Founded Fixed Steel Jacket Platforms
Authors: Mohamed Noureldin, Jinkoo Kim
Abstract:
The sensitivity of seismic response parameters to the uncertain modeling variables of pile-founded fixed steel jacket platforms is investigated using the tornado diagram, first-order second-moment, and static pushover analysis techniques. The effects of both aleatory and epistemic uncertainty on seismic response parameters have been investigated for an existing offshore platform. The sources of uncertainty considered in the present study fall into three categories: the uncertainties associated with the soil-pile modeling parameters in clay soil, the platform jacket structure modeling parameters, and the uncertainties related to ground motion excitations. It has been found that the variability in parameters such as yield strength or pile bearing capacity has almost no effect on the seismic response parameters considered, whereas the global structural response is highly affected by the ground motion uncertainty. Also, some uncertain soil-pile properties, such as the soil-pile friction capacity, have a significant impact on the response parameters and should be carefully modeled. Based on the results, it is highlighted which uncertain parameters should be considered carefully and which can be assumed with reasonable engineering judgment during the early structural design stage of fixed steel jacket platforms.
Keywords: fixed jacket offshore platform, pile-soil structure interaction, sensitivity analysis
Procedia PDF Downloads 375
10397 On a Single Server Queue with Arrivals in Batches of Variable Size, Generalized Coxian-2 Service and Compulsory Server Vacations
Authors: Kailash C. Madan
Abstract:
We study the steady state behaviour of a batch arrival single server queue in which the first service with general service times is compulsory and the second service with general service times is optional. We term such a two phase service as generalized Coxian-2 service. Just after completion of a service the server must take a vacation of random length of time with general vacation times. We obtain steady state probability generating functions for the queue size as well as the steady state mean queue size at a random epoch of time in explicit and closed forms. Some particular cases of interest including some known results have been derived.
Keywords: batch arrivals, compound Poisson process, generalized Coxian-2 service, steady state
Procedia PDF Downloads 457
10396 Mean Square Responses of a Cantilever Beam with Various Damping Mechanisms
Authors: Yaping Zhao, Yimin Zhang
Abstract:
In the present paper, the stationary random vibration of a uniform cantilever beam is investigated. Two types of damping mechanism, i.e. external and internal viscous damping, are taken into account simultaneously. The excitation is support motion in the form of ideal white noise. Because the two types of damping mechanism are considered concurrently, the product of the modal damping ratio and the natural frequency is no longer a constant. As a result, the infinite definite integral encountered in the process of computing the mean square response is more complex than that in the existing literature. One significant contribution of this work is the accurate calculation of these definite integrals. The precise solution of the mean square response is thus finally obtained in infinite series form. Numerical examples are supplied, and the numerical outcomes confirm the validity of the theoretical analyses.
Keywords: random vibration, cantilever beam, mean square response, white noise
Procedia PDF Downloads 384
10395 Determination of LS-DYNA MAT162 Material Input Parameters for Low Velocity Impact Analysis of Layered Composites
Authors: Mustafa Albayrak, Mete Onur Kaman, Ilyas Bozkurt
Abstract:
In this study, the material parameters necessary to conduct progressive damage analysis of layered composites under low velocity impact using the MAT162 material module in the LS-DYNA program were determined. The MAT162 material module, based on the Hashin failure criterion, requires 34 parameters in total. Some of these parameters were obtained directly from dynamic and quasi-static mechanical tests, and the remaining ones were calibrated and determined by comparing numerical and experimental results. Woven glass/epoxy was used as the composite material, and it was produced by the vacuum infusion method. In the numerical model, the composites are modeled as three-dimensional and layered. As a result, the acquisition of the MAT162 material module parameters, which enable progressive damage analysis, is presented in detail and step by step, and the methods for selecting the parameters are explained. Numerical data consistent with the experimental results are presented graphically.
Keywords: composite impact, finite element simulation, progressive damage analysis, LS-DYNA, MAT162
Procedia PDF Downloads 108
10394 Evaluation of Reliability Indices Using Monte Carlo Simulation Accounting Time to Switch
Authors: Sajjad Asefi, Hossein Afrakhte
Abstract:
This paper presents the evaluation of the reliability indices of an electrical distribution system using the Monte Carlo simulation technique, accounting for the Time To Switch (TTS) of each section. In this paper, the distribution system has been assumed by accounting for random repair time omission. For simplicity, the reliability analysis is assumed to be based on the exponential law. Each segment has a specified failure rate (λ) and repair time (r), which give the mean up time and mean down time of each section in the distribution system. After calculating the modified mean up time (MUT) in years, the mean down time (MDT) in hours and the unavailability (U) in h/year, the TTS has been added to the time during which the system is not available, i.e. the MDT. In this paper, the TTS is assumed to be a random variable with a Log-Normal distribution.
Keywords: distribution system, Monte Carlo simulation, reliability, repair time, time to switch (TTS)
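A simplified sketch of the simulation idea is given below: section failures are sampled from their failure rates, and each outage duration is the random repair time plus a log-normally distributed TTS. The per-section data, the log-normal parameters and the single-load-point assumption are illustrative only, not the paper's system.

```python
import numpy as np
rng = np.random.default_rng(0)

# Illustrative per-section data: failure rate (1/yr), mean repair time (h), median TTS (h)
lam = np.array([0.20, 0.15, 0.30])
mttr = np.array([4.0, 6.0, 3.0])
tts_mu, tts_sigma = np.log(np.array([0.5, 0.75, 1.0])), 0.4   # log-normal TTS parameters

years, downtime = 10000, 0.0
for s in range(len(lam)):
    n_fail = rng.poisson(lam[s] * years)                  # failures over the simulated period
    repair = rng.exponential(mttr[s], n_fail)             # random repair times (h)
    tts = rng.lognormal(tts_mu[s], tts_sigma, n_fail)     # time to switch (h)
    downtime += np.sum(repair + tts)                      # TTS added to the outage duration

print("unavailability:", downtime / years, "h/yr")        # simple load-point estimate
```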
Procedia PDF Downloads 427
10393 Singular Value Decomposition Based Optimisation of Design Parameters of a Gearbox
Authors: Mehmet Bozca
Abstract:
Singular value decomposition based optimisation of the geometric design parameters of a 5-speed gearbox is studied. During the optimisation, a four-degree-of-freedom torsional vibration model of the pinion gear-wheel gear system is obtained, and the minimum singular value of the transfer matrix is considered as the objective function. The computational cost of the associated singular value problems is quite low for the objective function, because it is only necessary to compute the largest and smallest singular values (µmax and µmin), which can be achieved by using selective eigenvalue solvers; the other singular values are not needed. The design parameters are optimised under several constraints that include bending stress, contact stress and a constant distance between gear centres. Thus, by optimising the geometric parameters of the gearbox such as the module, number of teeth and face width, it is possible to obtain a lightweight gearbox structure. It is concluded that all the optimised geometric design parameters also satisfy all constraints.
Keywords: singular value, optimisation, gearbox, torsional vibration
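Since only the largest and smallest singular values are needed, this step can be sketched as below with an illustrative transfer matrix; the matrix values are assumptions, not the gearbox model's.

```python
import numpy as np

# Illustrative transfer matrix of the 4-DOF torsional model at one design point
H = np.array([[2.0, 0.3, 0.0, 0.0],
              [0.3, 1.5, 0.4, 0.0],
              [0.0, 0.4, 1.8, 0.2],
              [0.0, 0.0, 0.2, 1.2]])

sigma = np.linalg.svd(H, compute_uv=False)   # only the singular values are needed
print(sigma.max(), sigma.min())              # the largest and smallest singular values
```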
Procedia PDF Downloads 360