Search results for: Random Numbers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3142

2752 Vertical Uplift Capacity of a Group of Equally Spaced Helical Screw Anchors in Sand

Authors: Sanjeev Mukherjee, Satyendra Mittal

Abstract:

This paper presents experimental investigations on the behaviour of groups of single, double, and triple helical screw anchors embedded vertically at the same level in sand. The tests were carried out on one, two, three, and four anchors in sand for different depths of embedment, keeping shallow and deep modes of behaviour in mind. The testing program included 48 tests conducted on three model anchors installed in sand whose density was kept constant throughout the tests. It was observed that the ultimate pullout load varied significantly with the installation depth of the anchor and the number of anchors. The apparent coefficient of friction (f*) between anchor and soil was also calculated based on the test results; it was found to vary between 1.02 and 4.76 for groups of one, two, three, and four single, double, and triple helical screw anchors. Plate load tests conducted on the model soil showed that the value of φ increases from 35° for virgin soil to 48° for soil with four double helical screw anchors. Graphs of the ultimate pullout capacity of groups of two, three, and four anchors relative to a single anchor were plotted, and design equations correlating them have been proposed. Based on these findings, it is concluded that the load-displacement relationships for all groups can be reduced to a common curve. A 3-D finite element model built in PLAXIS was used to confirm the results obtained from the laboratory tests, and the agreement is excellent.

Keywords: apparent coefficient of friction, helical screw anchor, installation depth, plate load test

Procedia PDF Downloads 535
2751 Motion Detection Method for Clutter Rejection in the Bio-Radar Signal Processing

Authors: Carolina Gouveia, José Vieira, Pedro Pinho

Abstract:

Cardiopulmonary signal monitoring, without the use of contact electrodes or any type of in-body sensors, has several applications, such as sleep monitoring and continuous monitoring of vital signs in bedridden patients. Such a system also has applications in the vehicular environment to monitor the driver, in order to avoid possible accidents in case of cardiac failure. Thus, the bio-radar system proposed in this paper can measure vital signs accurately by using the Doppler effect principle, which relates the received signal properties to the change in distance between the radar antennas and the person's chest wall. Since the bio-radar aims to monitor subjects in real time and over long periods, patient immobilization cannot be guaranteed, and the subjects' random motion will interfere with the acquired signals. In this paper, a mathematical model of the bio-radar is presented, as well as its simulation in MATLAB. The algorithm used for breathing-rate extraction is explained, and a method for DC offset removal based on a motion detection system is proposed. Furthermore, experimental tests were conducted to show that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully.
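
The core of such a system is the Doppler relation between the demodulated phase and chest-wall displacement, θ(t) = 4πx(t)/λ, with the DC offsets estimated before arctangent demodulation. The following Python sketch illustrates the idea under simplifying assumptions: the I/Q channels are taken as balanced, so a least-squares circle fit (Kåsa method) stands in for the ellipse fitting used in the paper, and all names are illustrative.

```python
import numpy as np

def remove_dc_and_demodulate(i_sig, q_sig, wavelength):
    """Estimate DC offsets by a least-squares circle fit (Kasa method),
    then recover chest-wall displacement by arctangent demodulation.

    Assumes balanced I/Q channels, so the Doppler arc lies on a circle;
    the paper itself uses ellipse fitting, which generalizes this step."""
    # Kasa fit: solve [2i 2q 1] [a b c]^T = i^2 + q^2 for the center (a, b)
    A = np.column_stack([2 * i_sig, 2 * q_sig, np.ones_like(i_sig)])
    rhs = i_sig**2 + q_sig**2
    (a, b, _), *_ = np.linalg.lstsq(A, rhs, rcond=None)

    # Subtract the estimated DC offsets and unwrap the phase
    phase = np.unwrap(np.arctan2(q_sig - b, i_sig - a))

    # Doppler relation: phase = 4*pi*x(t)/wavelength
    return phase * wavelength / (4 * np.pi)
```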

Keywords: bio-signals, DC component, Doppler effect, ellipse fitting, radar, SDR

Procedia PDF Downloads 99
2750 Stock Prediction and Portfolio Optimization Thesis

Authors: Deniz Peksen

Abstract:

This thesis aims to predict the trend movement of stock closing prices and to maximize portfolio return by utilizing the predictions. In this context, the study defines a stock portfolio strategy from models created using logistic regression, gradient boosting, and random forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and in generating returns with investment strategies based on machine learning decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in its target definition: ours is a classification problem focused on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing. Training data span 2002-06-18 to 2016-12-30, validation data span 2017-01-02 to 2019-12-31, and testing data span 2020-01-02 to 2022-03-17. We chose the hold-stock portfolio, the best-stock portfolio, and the USD-TRY exchange rate as benchmarks to outperform, and we compared the return of our machine-learning-based portfolio on the test data with the returns of these benchmarks. We assessed model performance with ROC-AUC scores and lift charts, and used logistic regression, gradient boosting, and random forest with a grid search approach to fine-tune hyperparameters. As a result of the empirical study, the uptrends and downtrends of five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions for a model-based portfolio, the portfolio failed on the test dataset: model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, and our results agree with the random walk theory, which claims that stock prices and price changes are unpredictable. Our model iterations failed on the test dataset: although we built several good models on the validation dataset, they did not carry over to the test dataset. We implemented random forest, gradient boosting, and logistic regression, and discovered that the complex models provided no advantage or additional performance over logistic regression; more complexity did not lead to better performance, so using a complex model is not the answer to the stock prediction problem. Our approach of predicting the trend instead of the price converted the problem into classification; however, this labeling approach did not solve the stock prediction problem, nor does it refute the accuracy of the random walk theory for stock prices.
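
As a rough illustration of the setup described above (not the authors' code), the sketch below trains one of the three model families, a random forest, with a grid search and scores it by ROC-AUC on a chronologically held-out test block. The features and labels are synthetic placeholders for the paper's 20-day trend target.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score

# Hypothetical feature matrix X (e.g., lagged returns, technical
# indicators) and binary label y: 1 if the close is higher 20 trading
# days ahead, 0 otherwise -- mirroring the paper's target definition.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (rng.random(3000) > 0.5).astype(int)

# Chronological split, as in the paper: earlier rows train, later rows test.
X_tr, y_tr = X[:2000], y[:2000]
X_te, y_te = X[2000:], y[2000:]

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [3, 5, None]},
    scoring="roc_auc",
    cv=3,
)
grid.fit(X_tr, y_tr)
print("test ROC-AUC:", roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1]))
```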

Keywords: stock prediction, portfolio optimization, data science, machine learning

Procedia PDF Downloads 54
2749 The Mass Attenuation Coefficients, Effective Atomic Cross Sections, Effective Atomic Numbers and Electron Densities of Some Halides

Authors: Shivalinge Gowda

Abstract:

The total mass attenuation coefficients μ/ρ of some halides, namely NaCl, KCl, CuCl, NaBr, KBr, RbCl, AgCl, NaI, KI, AgBr, CsI, HgCl₂, CdI₂, and HgI₂, were determined at photon energies of 279.2, 320.07, 514.0, 661.6, 1115.5, 1173.2, and 1332.5 keV in a well-collimated, narrow-beam, good-geometry set-up using a high-resolution hyper-pure germanium detector. From these mass attenuation coefficients, the effective atomic cross sections σₐ of the compounds were determined; both the mass attenuation coefficients and the effective atomic cross sections are found to be in good agreement with the XCOM values. The σₐ data so obtained were then used to compute the effective atomic numbers Zeff. For this, the interpolation of the total attenuation cross sections of photons of energy E in elements of atomic number Z was performed using logarithmic regression analysis of the data measured by the authors and reported earlier for the above energies, along with XCOM data for standard energies. The best-fit coefficients in the photon energy ranges of 250-350 keV, 350-500 keV, 500-700 keV, 700-1000 keV, and 1000-1500 keV, obtained by a piecewise interpolation method, were then used to find the Zeff of the compounds from their effective atomic cross sections σₐ. Using these Zeff values, the electron densities Nel of the halides were also determined. The present Zeff and Nel values of the halides are found to be in good agreement with values calculated from XCOM data and other available published values.
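
The interpolation step can be sketched as follows: regress ln Z on ln σ for elements within one energy band and evaluate the fit at the compound's measured σₐ. This is a minimal illustration of the piecewise logarithmic interpolation idea, with toy numbers standing in for the measured and XCOM cross sections.

```python
import numpy as np

def zeff_from_cross_section(z_values, sigma_values, sigma_a):
    """Logarithmic interpolation: fit ln(Z) as a polynomial in
    ln(sigma) for elements bracketing the compound's effective atomic
    cross section, then evaluate at sigma_a to obtain Zeff.

    A sketch of the interpolation idea only; the paper fits separate
    coefficients within each photon-energy band (e.g., 250-350 keV)."""
    lnz, lns = np.log(z_values), np.log(sigma_values)
    coeffs = np.polyfit(lns, lnz, deg=2)   # regress ln(Z) on ln(sigma)
    return float(np.exp(np.polyval(coeffs, np.log(sigma_a))))

# Hypothetical elemental cross sections (barns/atom) at one energy
z = np.array([11, 17, 19, 29, 35, 47, 53, 55, 80])
sigma = 0.1 * z**1.1                       # toy monotonic data, not XCOM values
print(zeff_from_cross_section(z, sigma, sigma_a=5.0))
```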

Keywords: mass attenuation coefficient, atomic cross-section, effective atomic number, electron density

Procedia PDF Downloads 352
2748 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic

Authors: Diogen Babuc

Abstract:

The motivating question is how many devote themselves to discovering something in the world of science, where much is discerned and revealed, but at the same time much remains unknown. Methods: The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between a and b = a+3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. In accordance with the given key, the string is divided into several substrings, each of length k characters. The next step encodes each substring in the list of existing substrings. Encoding is based on the Caesar algorithm, i.e., shifting by k characters; however, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b+1, it returns to its initial value. The algorithm proceeds in the same way until the last substring in the list is traversed. Results: Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The x character is not used when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than the other methods in terms of execution time and storage space.
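
A minimal Python sketch of the scheme as described above: a key k is drawn from [a, a+3], the text is cut into substrings of length k, and each substring is Caesar-shifted by a k that increments per substring and resets once it exceeds b+1. The lowercase alphabet and the pass-through handling of non-letters are assumptions not fixed by the abstract.

```python
import random
import string

ALPHABET = string.ascii_lowercase

def make_key(a):
    """Draw the key k uniformly from [a, a+3], as in the abstract."""
    return random.randint(a, a + 3)

def latin_djokovic(text, k0, b, encrypt=True):
    """Cipher/decipher sketch: substrings of length k0, Caesar shift
    by k, k incremented per substring and reset once it exceeds b+1."""
    out, k = [], k0
    for start in range(0, len(text), k0):    # substrings of length k0
        shift = k if encrypt else -k
        for ch in text[start:start + k0]:
            if ch in ALPHABET:
                out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
            else:
                out.append(ch)               # non-letters pass through
        k = k0 if k + 1 > b + 1 else k + 1   # reset once k exceeds b+1
    return "".join(out)

# Round trip with a = 3, so b = a + 3 = 6
a = 3
key = make_key(a)
cipher = latin_djokovic("attackatdawn", key, b=a + 3, encrypt=True)
plain = latin_djokovic(cipher, key, b=a + 3, encrypt=False)
assert plain == "attackatdawn"
```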

Keywords: ciphering, deciphering, authentic, algorithm, polyalphabetic cipher, random key, methods comparison

Procedia PDF Downloads 79
2747 Evaluation of Developmental Toxicity and Teratogenicity of Perfluoroalkyl Compounds Using FETAX

Authors: Hyun-Kyung Lee, Jehyung Oh, Young Eun Jeong, Hyun-Shik Lee

Abstract:

Perfluoroalkyl compounds (PFCs) are environmental toxicants that persistently accumulate in human blood. Their widespread detection and accumulation in the environment raise concerns about whether these chemicals might be developmental toxicants and teratogens in the ecosystem. We evaluated and compared the toxicity of PFCs containing various numbers of carbon atoms (C8-C11) on vertebrate embryogenesis, assessing the developmental toxicity and teratogenicity of various PFCs. The toxic effects on Xenopus embryos were evaluated using different methods: we measured teratogenic indices (TIs) and investigated the mechanisms underlying developmental toxicity and teratogenicity by measuring the expression of organ-specific biomarkers such as xPTB (liver), Nkx2.5 (heart), and Cyl18 (intestine). All PFCs that we tested were found to be developmental toxicants and teratogens, and their toxic effects strengthened with increasing length of the fluorinated carbon chain. Furthermore, we produced evidence showing that perfluorodecanoic acid (PFDA) and perfluoroundecanoic acid (PFuDA) are more potent developmental toxicants and teratogens in an animal model than the other PFCs we evaluated [perfluorooctanoic acid (PFOA) and perfluorononanoic acid (PFNA)]. In particular, severe defects resulting from PFDA and PFuDA exposure were observed in the liver and heart, respectively, using whole-mount in situ hybridization, real-time PCR, pathologic analysis of the heart, and dissection of the liver. Our studies suggest that most PFCs are developmental toxicants and teratogens; however, compounds with higher numbers of carbons (i.e., PFDA and PFuDA) exert more potent effects.

Keywords: PFC, Xenopus, FETAX, development

Procedia PDF Downloads 326
2746 Parameter Estimation for Contact Tracing in Graph-Based Models

Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar

Abstract:

We adopt a maximum-likelihood framework to estimate the parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited to realistic situations (the contact tracing probability is small, or the probability for the detection of index cases is small). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that a power-law and a negative binomial degree distribution in particular fit the data well and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.
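
As a simplified illustration of one ingredient of this framework (not the authors' full estimator, which also accounts for the tracing probability), the sketch below fits a negative binomial distribution to hypothetical detectee counts by maximum likelihood using scipy.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

# Hypothetical counts of detectees per index case
counts = np.array([0, 1, 0, 3, 2, 0, 5, 1, 0, 2, 8, 1, 0, 0, 4])

def neg_log_lik(params):
    """Negative log-likelihood of a negative binomial (r, p) model."""
    r, p = params
    if r <= 0 or not 0 < p < 1:
        return np.inf
    return -nbinom.logpmf(counts, r, p).sum()

res = minimize(neg_log_lik, x0=[1.0, 0.5], method="Nelder-Mead")
r_hat, p_hat = res.x
print(f"fitted mean detectees per case: {r_hat * (1 - p_hat) / p_hat:.2f}")
```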

Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference

Procedia PDF Downloads 52
2745 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as artificial neural networks, random forests, and support vector machines. The algorithms are adjusted to quantify the event-to-event and site-to-site variability of the ground motions by implementing these as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions from 376 seismic events with magnitudes 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for them. The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, and the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with random forest in particular outperforming the other algorithms. However, the conventional method is a better tool when limited data are available.
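
A bare-bones version of one of the compared approaches might look as follows: a random forest regressor mapping magnitude, hypocentral distance, and site stiffness to ln(PGA). The data here are synthetic stand-ins for the Oklahoma-Kansas-Texas database, and the random-effect treatment of event and site terms is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 4528  # matching the size of the paper's database

# Hypothetical predictors: magnitude, hypocentral distance (km), Vs30 (m/s)
mag = rng.uniform(3.0, 5.8, n)
dist = rng.uniform(4.0, 500.0, n)
vs30 = rng.uniform(200.0, 800.0, n)
X = np.column_stack([mag, dist, vs30])

# Toy attenuation relation standing in for observed ln(PGA)
ln_pga = 1.2 * mag - 1.5 * np.log(dist) - 0.002 * vs30 + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, ln_pga)
print(dict(zip(["magnitude", "distance", "vs30"], model.feature_importances_)))
```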

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 93
2744 Numerical Analysis of Core-Annular Blood Flow in Microvessels at Low Reynolds Numbers

Authors: L. Achab, F. Iachachene

Abstract:

In microvessels, red blood cells (RBCs) exhibit a tendency to migrate towards the vessel center, establishing a core-annular flow pattern. The core region, marked by a high concentration of RBCs, is governed by significantly non-Newtonian viscosity. Conversely, the annular layer, composed of cell-free plasma, is characterized by Newtonian low viscosity. This property enables the plasma layer to act as a lubricant for the vessel walls, efficiently reducing resistance to the movement of blood cells. In this study, we investigate the factors influencing blood flow in microvessels and the thickness of the annular plasma layer using a non-miscible fluids approach in a 2D axisymmetric geometry. The governing equations of incompressible unsteady flow are solved numerically with the Volume of Fluid (VOF) method to track the interface between the two immiscible fluids. To model blood viscosity in the core region, we adopt the Quemada constitutive law, which accurately captures the shear-thinning rheology of blood over a wide range of shear rates. Our results are then compared to an established theoretical approach under identical flow conditions, particularly concerning the radial velocity profile and the thickness of the annular plasma layer. The simulation findings for low Reynolds numbers demonstrate notable agreement with the theoretical solution, emphasizing the pivotal role of the blood's rheological properties in the core region in determining the thickness of the annular plasma layer.
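
One common form of the Quemada law is sketched below in Python, with illustrative parameter values from the literature (not necessarily those used in the paper). It shows how the core viscosity shear-thins with the local shear rate and RBC volume fraction, while setting the volume fraction to zero recovers the Newtonian plasma viscosity of the annular layer.

```python
import numpy as np

def quemada_viscosity(shear_rate, phi, mu_plasma=1.2e-3,
                      k0=4.33, k_inf=2.07, gamma_c=1.88):
    """Quemada shear-thinning viscosity (one common form of the law):

        mu = mu_plasma * (1 - 0.5 * k * phi)**-2,
        k  = (k0 + k_inf * sqrt(g/gc)) / (1 + sqrt(g/gc))

    phi is the RBC volume fraction (hematocrit); the parameter values
    here are illustrative fits from the literature, not the paper's."""
    s = np.sqrt(shear_rate / gamma_c)
    k = (k0 + k_inf * s) / (1.0 + s)
    return mu_plasma * (1.0 - 0.5 * k * phi) ** -2

# Core region (phi ~ 0.45) vs. cell-free plasma layer (phi = 0)
print(quemada_viscosity(10.0, 0.45), quemada_viscosity(10.0, 0.0))
```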

Keywords: core-annular flows, microvessels, Quemada model, plasma layer thickness, volume of fluid method

Procedia PDF Downloads 27
2743 On Paranorm Zweier I-Convergent Sequence Spaces

Authors: Nazneen Khan, Vakeel A. Khan

Abstract:

In this article, we introduce the paranorm Zweier I-convergent sequence spaces for a sequence of positive real numbers. We study some of their topological properties, prove the decomposition theorem, and establish some inclusion relations on these spaces.

Keywords: ideal, filter, I-convergence, I-nullity, paranorm

Procedia PDF Downloads 455
2742 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots across three sites in Germany. These sites represent a north-south gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² of 0.5), predictions of biodiversity indices were poor (r² of 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and of ensuring spatial independence to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 189
2741 Predicting the Diagnosis of Alzheimer’s Disease: Development and Validation of Machine Learning Models

Authors: Jay L. Fu

Abstract:

Patients with Alzheimer's disease progressively lose their memory and thinking skills and, eventually, the ability to carry out simple daily tasks. The disease is irreversible, but early detection and treatment can slow its progression. In this research, publicly available MRI data and demographic data from 373 MRI imaging sessions were utilized to build models to predict dementia. Various machine learning models, including logistic regression, k-nearest neighbors, support vector machine, random forest, and neural network, were developed. The data were divided into training and testing sets, where the training sets were used to build the predictive models and the testing sets were used to assess the accuracy of prediction. Key risk factors were identified, and the various models were compared to arrive at the best prediction model. Among these models, the random forest model performed best, with an accuracy of 90.34%. MMSE, nWBV, and gender were the three most important factors contributing to the detection of Alzheimer's. Across all the models used, the percentage of testing inputs for which at least 4 of the 5 models agreed on the diagnosis was 90.42%. These machine learning models allow early detection of Alzheimer's with good accuracy, which ultimately leads to earlier treatment of these patients.
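
The agreement statistic reported above can be computed as sketched below: train the five model families and count the test inputs on which at least four of them concur. The synthetic data stand in for the MRI and demographic features; none of the numbers reproduce the paper's results.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the MRI/demographic features (373 sessions)
X, y = make_classification(n_samples=373, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [LogisticRegression(max_iter=1000), KNeighborsClassifier(),
          SVC(), RandomForestClassifier(random_state=0),
          MLPClassifier(max_iter=2000, random_state=0)]
preds = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in models])

# Fraction of test inputs where at least 4 of the 5 models agree
votes = preds.sum(axis=0)              # number of models predicting class 1
agree = np.mean((votes >= 4) | (votes <= 1))
print(f"4-of-5 agreement: {agree:.2%}")
```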

Keywords: Alzheimer's disease, clinical diagnosis, magnetic resonance imaging, machine learning prediction

Procedia PDF Downloads 119
2740 The Effects of Three Levels of Contextual Interference among Adult Athletes

Authors: Abdulaziz Almustafa

Abstract:

Considering the critical role permanence has in predictions related to the contextual interference effect in laboratory and field research, this study sought to determine whether the paradigm of the effect depends on the complexity of the skill during the acquisition and transfer phases. The purpose of the present study was to investigate the effects of contextual interference (CI) by extending previous laboratory and field research with adult athletes through the acquisition and transfer phases. Male athletes (n = 60), aged 18-22 years, were chosen randomly from Eastern Province clubs and assigned to complete blocked, serial, or random practice. Repeated-measures multivariate analysis of variance (MANOVA) indicated that the results did not support the notion of CI: there were no significant differences between the blocked, serial, and random practice groups in the acquisition phase, and no major differences between the practice groups during the transfer phase. Apparently, due to the task complexity, participants were probably confused and unable to exploit the advantages of contextual interference. This is another result contradicting contextual interference effects in the acquisition and transfer phases in sport settings. One major factor that can influence the contextual interference effect is task characteristics, such as the level of difficulty of the sport-related skill.

Keywords: contextual interference, acquisition, transfer, task difficulty

Procedia PDF Downloads 434
2739 Enhanced Test Scheme based on Programmable Write Time for Future Computer Memories

Authors: Nor Zaidi Haron, Fauziyah Salehuddin, Norsuhaidah Arshad, Sani Irwan Salim

Abstract:

Resistive random access memories (RRAMs) are one of the main candidates for future computer memories. However, due to their tiny size and immature device technology, the quality of outgoing RRAM chips is seen as a serious issue. Defective RRAM cells might behave differently from existing semiconductor memories (dynamic RAM, static RAM, and Flash), meaning that they are difficult to detect using existing test schemes. This paper presents an enhanced test scheme, referred to as Programmable Short Write Time (PSWT), that is able to improve the detection of faulty RRAM cells. It is developed by applying multiple weak write operations, each with a different time duration. The test circuit embedded in the RRAM chip is made programmable in order to supply different weak write times during testing. The RRAM electrical model is described in the Verilog-AMS language and simulated using HSPICE. Simulation results show that the proposed test scheme offers better open-resistive fault detection than existing test schemes.

Keywords: memory fault, memory test, design-for-testability, resistive random access memory

Procedia PDF Downloads 353
2738 Conscious Intention-based Processes Impact the Neural Activities Prior to Voluntary Action on Reinforcement Learning Schedules

Authors: Xiaosheng Chen, Jingjing Chen, Phil Reed, Dan Zhang

Abstract:

Conscious intention can be a promising entry point for grasping consciousness and orienting voluntary action. The current study adopted random ratio (RR) and yoked random interval (RI) reinforcement learning schedules instead of the highly repeatable, single-decision-point paradigms used previously, aiming to induce voluntary action with a conscious intention that evolves from the interaction between short-range and long-range intentions. Readiness-potential (RP)-like EEG amplitude and inter-trial EEG variability decreased significantly prior to voluntary action compared to cued action, mainly during the earlier stage of neural activity. Notably, RP-like EEG amplitudes decreased significantly prior to responses with higher RI reward rates, in which participants formed a higher plane of conscious intention. The present study suggests a possible contribution of conscious intention-based processes to the neural activities from the earlier stage prior to voluntary action on reinforcement learning schedules.

Keywords: reinforcement learning schedule, voluntary action, EEG, conscious intention, readiness potential

Procedia PDF Downloads 52
2737 Stochastic Modeling and Productivity Analysis of a Flexible Manufacturing System

Authors: Mehmet Savsar, Majid Aldaihani

Abstract:

Flexible manufacturing systems (FMS) are used to produce a variety of parts on the same equipment; therefore, their utilization is higher than that of traditional machining systems. Higher utilization, on the other hand, results in more frequent equipment failures and an additional need for maintenance. It is therefore necessary to carefully analyze the operational characteristics and productivity of an FMS, or of a flexible manufacturing cell (FMC), which is a smaller configuration of an FMS, before installation or during operation. Appropriate models should be developed to determine production rates based on operational conditions, including equipment reliability, availability, and repair capacity. In this paper, a stochastic model is developed for an automated FMC system consisting of two machines served by two robots and a single repairman. The model is used to determine system productivity and equipment utilization under different operational conditions, including random machine failures, random repairs, and limited repair capacity. The results are compared to previous study results for an FMC system with sufficient repair capacity assigned to each machine. The results show that the model will be useful for design engineers and operational managers analyzing the performance of manufacturing systems at the design or operational stage.

Keywords: flexible manufacturing, FMS, FMC, stochastic modeling, production rate, reliability, availability

Procedia PDF Downloads 492
2736 Experimental Study of Flow Characteristics for a Cylinder with an Attached Flexible Strip Body at Various Reynolds Numbers

Authors: S. Teksin, S. Yayla

Abstract:

The aim of the present study was to investigate the details of the flow structure downstream of a circular cylinder base-mounted on a flat surface in a rectangular duct with dimensions of 8000 x 1000 x 750 mm in deep water flow, for Reynolds numbers of 2500, 5000, and 7500. A flexible strip was attached behind the cylinder, and the results were compared with those for the bare body. It was also analyzed how the boundary layer affects the structure of the flow around the cylinder. The diameter of the cylinder was 60 mm, and the length of the flexible splitter plate, which had a certain modulus of elasticity, was 150 mm (L/D = 2.5). Time-averaged velocity vectors, vortex contours, and streamwise and transverse velocity components were investigated via particle image velocimetry (PIV). Velocity vectors and vortex contours were displayed through sections in which the boundary layer effect was not present, while streamwise and transverse velocity components were monitored for both cases, i.e., with and without the boundary layer effect. Experimental results showed that vortex formation occurred over a larger area for L/D = 2.5 and that the point of maximum vorticity shifted relative to the base of the cylinder. Streamwise and transverse velocity component contours were symmetrical with respect to the center of the cylinder for all cases. All Froude numbers based on these Reynolds numbers were well below 1. The velocity component values for the attached-strip cylinder arrangement decreased by approximately twenty-five percent compared to the bare cylinder case.

Keywords: particle image velocimetry, elastic plate, cylinder, flow structure

Procedia PDF Downloads 291
2735 Analysis and Design of Offshore Triceratops under Ultra-Deep Waters

Authors: Srinivasan Chandrasekaran, R. Nagavinothini

Abstract:

Offshore platforms for ultra-deep waters are form-dominant by design; hybrid systems with large flexibility in the horizontal plane and high rigidity in the vertical plane are preferred due to functional complexities. The offshore triceratops is a relatively new generation of offshore platform, whose deck is partially isolated from the supporting buoyant legs by ball joints. These joints allow the transfer of partial displacements of the buoyant legs to the deck but restrain the transfer of rotational response. The buoyant legs are in turn taut-moored to the seabed using pre-tensioned tethers. The present study discusses a detailed dynamic analysis and preliminary design of the chosen geometry, which is necessary as proof of validation for such design applications. A detailed numerical analysis of the triceratops at 2400 m water depth under random waves is presented. The preliminary design confirms member-level design requirements under various modes of failure. The tether configuration proposed in the study shows no tether pull-out, as the stress variation remains comparatively lower than the yield value. The presented study should aid offshore engineers and contractors in understanding the suitability of the triceratops in terms of design and dynamic response behaviour.

Keywords: offshore structures, triceratops, random waves, buoyant legs, preliminary design, dynamic analysis

Procedia PDF Downloads 179
2734 A Study of Algebraic Structure Involving Banach Space through Q-Analogue

Authors: Abdul Hakim Khan

Abstract:

The aim of the present paper is to study the Banach space and combinatorial algebraic structure of R. It further studies the algebraic structure of the set of all q-extensions of classical formulas and functions for 0 < q < 1.
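
For context, the standard q-extension (q-analogue) of a natural number n, the building block of such q-extensions of classical formulas, is

```latex
% The q-analogue of a natural number n, for 0 < q < 1:
[n]_q = \frac{1 - q^{n}}{1 - q} = 1 + q + q^{2} + \cdots + q^{n-1},
\qquad \lim_{q \to 1^{-}} [n]_q = n .
```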

Keywords: integral functions, q-extensions, q numbers of metric space, algebraic structure of r and banach space

Procedia PDF Downloads 550
2733 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (²²²Rn) gas. Indeed, exposure to high ²²²Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine the indoor radon level precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose the artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 76
2732 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan

Authors: Souad Romdhane, Lotfi Belkacem

Abstract:

In actuarial analyses of lifetime, only models accounting for observable risk factors have been developed. Within this context, the Cox proportional hazards model (CPH model) is commonly used to assess the effects of observable covariates, such as gender, age, and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime interval, which may be due to the existence of another random variable (frailty) that is still being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models through a real-life dataset from the Quebec Pension Plan and then through a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real dataset studies showed differences between the classical Cox model and the shared frailty model.
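
The standard shared frailty formulation behind such models multiplies the Cox hazard by an unobserved group-level random effect. It is shown here with a gamma frailty, one common parametric choice; the abstract does not name the two forms actually used.

```latex
% Shared gamma-frailty extension of the Cox model: subjects j in
% group i share an unobserved multiplicative frailty Z_i.
h_{ij}(t \mid Z_i) = Z_i \, h_0(t) \exp\!\big(\mathbf{x}_{ij}^{\top} \boldsymbol{\beta}\big),
\qquad Z_i \sim \mathrm{Gamma}\!\big(\tfrac{1}{\theta}, \tfrac{1}{\theta}\big),
\quad \mathbb{E}[Z_i] = 1, \quad \mathrm{Var}(Z_i) = \theta .
```

Setting θ → 0 recovers the classical Cox model, so the estimated θ measures how much unobserved heterogeneity the data carry.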

Keywords: life insurance-pension plan, survival analysis, risk factors, cox proportional hazards model, multivariate failure-time data, shared frailty, simulations study

Procedia PDF Downloads 334
2731 Using the SMT Solver to Minimize the Latency and to Optimize the Number of Cores in NoC-DSP Architectures

Authors: Imen Amari, Kaouther Gasmi, Asma Rebaya, Salem Hasnaoui

Abstract:

The problem of scheduling and mapping data flow applications on multi-core architectures is notoriously difficult. This difficulty is related to the rapid evolution of telecommunication and multimedia systems, accompanied by a rapid increase in user requirements in terms of latency, execution time, consumption, energy, etc. Achieving an optimal schedule on multi-core DSP (digital signal processor) platforms is a challenging task. In this context, we present a novel technique and algorithm to find a valid schedule that optimizes the key performance metrics, particularly latency. Our contribution is based on satisfiability modulo theories (SMT) solving technologies, which are strongly driven by industrial applications and needs. This paper describes a scheduling module integrated into our proposed workflow, which we put forward as a successful approach for programming applications based on NoC-DSP platforms. The workflow automatically transforms a Simulink model into a synchronous dataflow (SDF) model; the automatic transformation, followed by SMT solver scheduling, aims to minimize the final latency and other software/hardware metrics through an optimal schedule, as well as to find the optimal number of cores to use. Our proposed workflow takes as its entry point a Simulink file (.mdl or .slx) derived from embedded Matlab functions, using an approach based on the synchronous and hierarchical behavior of both Simulink and SDF. Running the scheduler in this workflow with our proposed SMT solver algorithm refinements produces the best possible schedule in terms of latency and number of cores.
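
The flavor of the SMT scheduling step can be conveyed with a toy model in the z3 Python API (an assumption for illustration; the paper does not state which solver it uses): integer start times and core assignments for a handful of SDF actors, precedence and no-overlap constraints, and latency as the minimization objective.

```python
from z3 import Int, Implies, Or, Optimize, sat  # pip install z3-solver

durations = [3, 2, 4, 2]                # hypothetical actor run times
precedence = [(0, 2), (1, 2), (2, 3)]   # (a, b): a must finish before b starts
n_cores = 2

opt = Optimize()
start = [Int(f"s{i}") for i in range(len(durations))]
core = [Int(f"c{i}") for i in range(len(durations))]
latency = Int("latency")

for i, d in enumerate(durations):
    opt.add(start[i] >= 0, core[i] >= 0, core[i] < n_cores)
    opt.add(latency >= start[i] + d)    # latency bounds every finish time
for a, b in precedence:
    opt.add(start[b] >= start[a] + durations[a])
# Actors mapped to the same core must not overlap in time
for i in range(len(durations)):
    for j in range(i + 1, len(durations)):
        opt.add(Implies(core[i] == core[j],
                        Or(start[i] + durations[i] <= start[j],
                           start[j] + durations[j] <= start[i])))

opt.minimize(latency)
if opt.check() == sat:
    print(opt.model())
```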

Keywords: multi-cores DSP, scheduling, SMT solver, workflow

Procedia PDF Downloads 259
2730 Geometry, the language of Manifestation of Tabriz School’s Mystical Thoughts in Architecture (Case Study: Dome of Soltanieh)

Authors: Lida Balilan, Dariush Sattarzadeh, Rana Koorepaz

Abstract:

In the Ilkhanid era, the mystical school of Tabriz manifested itself as an art school in various domains, including miniatures, architecture, and urban planning and design, simultaneously with the expansion of the many sciences of its time. In this era, mysticism reached its peak both in form, in poetry and prose, and in works of art. Mysticism, as an inner belief and thought, brought its audience to an artistic and aesthetic view through allegorical and symbolic expressions of religion, and had a direct impact on the formation of the intellectual and cultural layers of society. At the same time, the mystical school of Tabriz was able to create a symbolic and allegorical language for magnificent works of architecture, drawing on the expansion of science in various fields and using sciences such as mathematics, geometry, the science of numbers, and Abjad letters. In this era, geometry is the middle link between mysticism and architecture, and it is divided into two categories, intellectual and sensory geometry, based on its function. The Soltaniyeh dome is one of the prominent buildings of the Tabriz school, serving as a shrine. In this article, information is collected using a historical-interpretive method, and the results are analyzed using an analytical-comparative method. The results suggest that the designers and builders of the Soltaniyeh dome used shapes, colors, numbers, letters, and words, in the form of motifs, geometric patterns, lines, and writings, at levels and layers ranging from plans to decorations and arrays, for architectural symbolization and encryption to express and transmit mystical ideas.

Keywords: geometry, Tabriz school, mystical thoughts, dome of Soltaniyeh

Procedia PDF Downloads 58
2729 Usability Evaluation of Four Big e-Commerce Websites in Indonesia

Authors: Harry B. Santoso, Lia Sadita, Firlia Sandyta, Musa Alfatih, Nove Spalo, Nu'man Naufal, Nuryahya P. Utomo, Putu A. Paramatha, Rezka Aufar Leonandya, Tommy Anugrah, Aulia Chairunisa, M. Fadly Uzzaki, Riandy D. Banimahendra

Abstract:

The number of active Internet users in Indonesia has reached over 88.1 million, 48% of whom are daily active users. Given these numbers, this is a prime opportunity for IT companies to grow their business, especially in e-Commerce. In fact, the growth of e-Commerce companies in Indonesia is proportional to the number of daily active Internet users. This phenomenon shows that competition among e-Commerce companies is running high, which drives many of them to improve their services. The authors hypothesized that one of the best ways to improve these services is to improve their usability, and therefore conducted a study to evaluate the usability of e-Commerce websites and find ways to improve it. The authors chose four e-Commerce websites, each with a different business focus and profile, labeled A, B, C, and D. Company A is a fashion-based e-Commerce service with two million desktop visits in Indonesia. Company B is an international online shopping mall for everyday appliances with 48.3 million desktop visits in Indonesia. Company C is a localized online shopping mall with 3.2 million desktop visits in Indonesia. Company D is an online shopping mall with one million desktop visits in Indonesia. The authors used a popular web traffic analytics platform to obtain these numbers. There are several approaches to evaluating the usability of e-Commerce websites; in this study, the authors used the usability testing method supported by the User Experience Questionnaire. This method involved users interacting directly with the services provided by each e-Commerce company. The study was conducted within two months, including preparation, data collection, data analysis, and reporting, using a pair of computers, a screen-capture video application named Smartboard, and the User Experience Questionnaire. A team was built to conduct the study, consisting of one supervisor, two assistants, four facilitators, and four observers. For each e-Commerce website, three users aged 17-25 were invited to perform five task scenarios. The data collected in this study included demographic information about the users, usability testing results, and the users' responses to the questionnaire. Several findings emerged from the usability testing and the questionnaire. Compared to the other three companies, Company D had the lowest experience score. One of the most significant issues identified by the authors was that most users reported feeling confused by the user interfaces of these e-Commerce websites. We believe that this study will help e-Commerce companies improve their services and business in the future.

Keywords: e-commerce, evaluation, usability testing, user experience

Procedia PDF Downloads 283
2728 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An upgraded cuckoo search algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the cuckoo search algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions through a suggested Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve local search, Greedy Selection to accelerate convergence to the optimized solution, and a 'Study Nearby Strategy' to improve global search performance by avoiding local optima. It is further proposed to generate better solutions by a crossover operation. The strategy used in the proposed algorithm shows superiority in terms of convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the proposed algorithm's superiority over other established algorithms. The algorithm is also capable of handling larger unit systems.
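
Two of the listed ingredients, the random Gaussian distribution walk for local search and greedy selection, can be sketched in a few lines. The objective and step size here are toy choices of ours; the lambda-iteration initialization, study-nearby strategy, and crossover are omitted.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x**2))   # toy objective to minimize

rng = np.random.default_rng(0)
n_nests, dim, iters = 15, 5, 200
nests = rng.uniform(-5, 5, (n_nests, dim))   # random initialization
fitness = np.array([sphere(x) for x in nests])

for _ in range(iters):
    for i in range(n_nests):
        # Random Gaussian distribution walk for local search
        candidate = nests[i] + rng.normal(0.0, 0.5, dim)
        # Greedy selection: keep the candidate only if it improves
        f = sphere(candidate)
        if f < fitness[i]:
            nests[i], fitness[i] = candidate, f

print("best:", fitness.min())
```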

Keywords: economic dispatch, gaussian selection operator, prohibited operating zones, ramp rate limits

Procedia PDF Downloads 106
2727 Chaos Fuzzy Genetic Algorithm

Authors: Mohammad Jalali Varnamkhasti

Abstract:

Genetic algorithms have been very successful in handling difficult optimization problems, but a fundamental problem in genetic algorithms is premature convergence. This paper presents a new fuzzy genetic algorithm based on chaotic values instead of the random values used in genetic algorithm processes. In this algorithm, chaotic sequences are used for the initial population, and a new sexual selection is then proposed for the selection mechanism. In this technique, the population is divided such that males and females are selected in an alternating way, and the layout of the male and female chromosomes differs in each generation. A female chromosome is selected by tournament selection from the female group. The male chromosome is then selected, in order of preference, based on the maximum Hamming distance between the male chromosome and the female chromosome; the highest fitness value among male chromosomes (if more than one male chromosome has the maximum Hamming distance); or random selection. The crossover and mutation operators are selected by running fuzzy logic controllers, with the crossover and mutation probabilities varied on the basis of the phenotype and genotype characteristics of the chromosome population. Computational experiments are conducted on the proposed techniques, and the results are compared with some other operators and with heuristic and local search algorithms commonly used for solving p-median problems published in the literature.
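
A minimal sketch of the chaotic initialization and the sexual selection step is given below. The logistic map serves as the chaotic sequence generator and the female's fitness is a toy count of ones; both are assumptions, since the abstract specifies neither the map nor the encoding.

```python
import numpy as np

def chaotic_population(pop_size, n_genes, seed=0.7, r=4.0):
    """Initialize a GA population from logistic-map chaotic sequences
    instead of uniform random numbers -- a minimal sketch of the idea."""
    x = seed
    pop = np.empty((pop_size, n_genes))
    for i in range(pop_size):
        for j in range(n_genes):
            x = r * x * (1.0 - x)   # logistic map, chaotic at r = 4
            pop[i, j] = x           # values fall in (0, 1)
    return pop

def hamming(a, b):
    return int(np.sum(a != b))

# Sexual selection sketch: female by tournament, male by max Hamming
rng = np.random.default_rng(0)
pop = (chaotic_population(10, 16) > 0.5).astype(int)   # binary coding
females, males = pop[:5], pop[5:]

# Tournament of size 2 among females (toy fitness: number of ones)
i, j = rng.choice(len(females), 2, replace=False)
female = females[i] if females[i].sum() >= females[j].sum() else females[j]
# Male chosen by maximum Hamming distance to the selected female
male = max(males, key=lambda m: hamming(m, female))
print("parents:", female, male)
```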

Keywords: genetic algorithm, fuzzy system, chaos, sexual selection

Procedia PDF Downloads 361
2726 Thermodynamics of Random Copolymers in Solution

Authors: Maria Bercea, Bernhard A. Wolf

Abstract:

The thermodynamic behavior of solutions of poly(methyl methacrylate-ran-t-butyl methacrylate) of variable composition, as compared with the corresponding homopolymers, was investigated by light scattering measurements carried out on dilute solutions and vapor pressure measurements on concentrated solutions. The complex dependencies of the Flory-Huggins interaction parameter on concentration and copolymer composition in solvents of different polarity (toluene and chloroform) can be understood by taking into account the ability of the polymers to rearrange in response to changes in their molecular surroundings. A recent unified thermodynamic approach was used to model the experimental data; it is able to describe the behavior of the different solutions by means of two adjustable parameters, one representing the effective number of solvent segments and the other accounting for the interactions between the components. It was thus investigated how the solvent quality changes with the composition of the copolymers through the Gibbs energy of mixing as a function of polymer concentration. The largest reduction of the Gibbs energy at a given composition of the system was observed for the best solvent. The present investigation proves that the new unified thermodynamic approach is a general concept applicable to homo- and copolymers, independent of chain conformation or shape, of the molecular and chemical architecture of the components, and of other dissimilarities, such as electrical charges.
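
For reference, the classical Flory-Huggins expression that the unified approach generalizes reads as follows; in the unified treatment, the number of solvent segments becomes an effective adjustable parameter and χ carries the composition and concentration dependence.

```latex
% Gibbs energy of mixing per mole of lattice sites; phi_i are volume
% fractions, N_i numbers of segments, chi the interaction parameter.
\frac{\Delta G_{\mathrm{mix}}}{RT}
  = \frac{\varphi_1}{N_1}\ln\varphi_1
  + \frac{\varphi_2}{N_2}\ln\varphi_2
  + \chi\,\varphi_1\varphi_2 .
```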

Keywords: random copolymers, Flory Huggins interaction parameter, Gibbs energy of mixing, chemical architecture

Procedia PDF Downloads 259
2725 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

Authors: Newroz Nooralddin Abdulrazaq, Thuraya Mahmood Qaradaghi

Abstract:

The McEliece cryptosystem is an asymmetric type of cryptography based on error-correcting codes. The classical McEliece system uses an irreducible binary Goppa code, which has been considered unbreakable until now, especially with the parameters [1024, 524, 101]; however, it suffers from a large public key matrix, which makes it difficult to use practically. In this work, irreducible and separable Goppa codes are introduced, with flexible parameters and dynamic error vectors, and a comparison between separable and irreducible Goppa codes in the McEliece cryptosystem is carried out. For the encryption stage, to obtain a better comparison, two types of testing were chosen: in the first, the random message is constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are constant (m = 8 and t = 10) while the random message is changed. The results show that the time needed to calculate the parity check matrix is higher for the separable than for the irreducible McEliece cryptosystem, which is expected, since an extra parity check matrix for g²(z) must be calculated in the decryption process for the separable type; on the other hand, the time needed to execute the error locator in the decryption stage is lower for the separable type than for the irreducible type. The proposed implementation was done in C# with Visual Studio.
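
The encryption stage being timed here follows the standard McEliece pattern, c = mG' + e, where G' = SGP is the scrambled public generator matrix and e a random error vector of weight t. The sketch below uses a random placeholder matrix over GF(2) rather than a real Goppa code, with tiny parameters for readability.

```python
import numpy as np

# Toy McEliece encryption step over GF(2): c = m G_pub + e, where the
# public generator matrix G_pub = S G P hides the Goppa code structure.
# The matrix below is a random placeholder, not a real Goppa code; the
# classical parameters [1024, 524, 101] correspond to k=524, n=1024, t=50.
rng = np.random.default_rng(0)
k, n, t = 4, 8, 1
G_pub = rng.integers(0, 2, (k, n))       # stands in for S*G*P

m = rng.integers(0, 2, k)                # plaintext message bits
e = np.zeros(n, dtype=int)
e[rng.choice(n, t, replace=False)] = 1   # random error vector of weight t
c = (m @ G_pub + e) % 2                  # ciphertext
print(c)
```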

Keywords: McEliece cryptosystem, Goppa code, separable, irreducible

Procedia PDF Downloads 239
2724 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of the seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used to forecast ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as artificial neural networks, random forests, and support vector machines. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with random forest in particular outperforming the other algorithms; however, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analyses of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
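
The fragility step can be illustrated with the common cloud-analysis form of a PSDM, ln D = ln a + b ln IM with total dispersion β, which yields a lognormal fragility curve. The coefficients below are illustrative, not taken from the study.

```python
import numpy as np
from scipy.stats import norm

def fragility(im, ln_a, b, capacity, beta):
    """P(D >= C | IM) for a lognormal fragility curve derived from a
    power-law PSDM, ln D = ln_a + b * ln IM, with total dispersion beta."""
    ln_demand = ln_a + b * np.log(im)
    return norm.cdf((ln_demand - np.log(capacity)) / beta)

im = np.linspace(0.05, 2.0, 50)   # e.g., PGA in g
print(fragility(im, ln_a=0.3, b=1.1, capacity=1.5, beta=0.5)[:5])
```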

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 82
2723 Human-Computer Interaction: Strategies for Ensuring the Design of User-Centered Web Interfaces for Smartphones

Authors: Byron Joseph A. Hallar, Annjeannette Alain D. Galang, Maria Visitacion N. Gumabay

Abstract:

The widespread adoption and increasing proliferation of smartphones that started during the first decade of the twenty-first century have enabled their users to communicate and access information in ways that were merely thought of as possibilities in the few years before the smartphone revolution. A product of the convergence of the cellular phone and the portable computer, the smartphone provides an additional important function that used to be the exclusive domain of desktop-bound and portable computers: web browsing. For increasing numbers of users, the smartphone and allied devices such as tablet computers have become their first, and often their only, means of accessing the World Wide Web. This has led to the development of websites that cater to the needs of this new breed of smartphone-carrying web users. The smaller size of smartphones compared with conventional computers has presented unique challenges to web interface designers: the smaller screen size and the touchscreen interface have made it much more difficult to read and navigate web pages that were for the most part designed for traditional desktop and portable computers. Although increasing numbers of websites now provide an alternate version formatted for smartphones, problems with ease of use, reliability, and usability still remain. This study focuses on identifying the problems associated with smartphone web interfaces; compliance with accepted standards of user-oriented web interface design; strategies that could be utilized to ensure the design of user-centric web interfaces for smartphones; and current trends and developments related to user-centric web interface design intended for the consumption of smartphone users.

Keywords: human-computer interaction, user-centered design, web interface, mobile, smartphone

Procedia PDF Downloads 324