Search results for: Random Anisotropy Ising Model

17580 Low Field Microwave Absorption and Magnetic Anisotropy in TM Co-Doped ZnO System

Authors: J. Das, T. S. Mahule, V. V. Srinivasu

Abstract:

An electron spin resonance (ESR) study at 9.45 GHz with a field modulation frequency of 100 Hz was performed on bulk polycrystalline Mn:TM (Fe/Ni) and Mn:RE (Gd/Sm) co-doped ZnO samples with composition Zn1-x(Mn:TM/RE)xO, synthesized by the solid-state reaction route and sintered at 500 °C. The room-temperature microwave absorption data, collected by sweeping the DC magnetic field from -500 to 9500 G, show for the Mn:Fe and Mn:Ni co-doped ZnO samples a rarely reported non-resonant low-field absorption (NRLFA) in addition to a strong absorption at around 3350 G, usually associated with ferromagnetic resonance (FMR) satisfying Larmor's relation and arising from absorption in the fully saturated state. The observed low-field absorption is distinct from ferromagnetic resonance even at low temperature and shows hysteresis. Interestingly, it is opposite in phase to the main ESR signal of the samples, indicating that the low-field absorption has a minimum at zero magnetic field whereas the ESR signal has a maximum. Both the main resonance peak and the low-field absorption peak are asymmetric, indicating magnetic anisotropy in the samples, normally associated with intrinsic ferromagnetism. The anisotropy parameter for the Mn:Ni co-doped ZnO sample is noticeably higher. The g values also support the presence of oxygen vacancies and clusters in the samples. These samples have shown room-temperature ferromagnetism in SQUID measurements. However, in the rare earth (RE) co-doped samples (Zn1-x(Mn:Gd/Sm)xO), which show paramagnetic behavior at room temperature, the low-field microwave signals are not observed. As microwave currents due to itinerant electrons can lead to ohmic losses inside the sample, we speculate that the more delocalized 3d electrons contributed by the TM dopants facilitate such microwave currents, leading to loss and hence absorption at low field; this is also supported by the increase in current with increased microwave power. Besides, since Fe and Ni have intrinsic spin polarization of around 45%, doping with Fe and Ni is expected to enhance spin-polarization-related effects in ZnO. We emphasize that in this case Fe and Ni doping contribute a polarized current which interacts with the magnetization (spin) vector and is scattered, giving rise to the absorption loss.

Keywords: co-doping, electron spin resonance, hysteresis, non-resonant microwave absorption

Procedia PDF Downloads 288
17579 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms

Authors: Rikson Gultom

Abstract:

Hate speech and abusive language on social media are difficult to detect; usually they are detected only after a post has gone viral, when it is too late for prevention. An early detection system with fairly good accuracy is needed to reduce conflicts in society caused by social media postings that attack individuals, groups, and the government in Indonesia. The purpose of this study is to find an early detection model for the Twitter social media platform using machine learning that achieves high accuracy among the methods studied. In this study, the support vector machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the Support Vector Machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table of the accuracy of the hate speech and abusive language detection models, presented the accuracy of the six algorithms as a graph based on the Indonesian-language Twitter dataset, and identified the best model with the highest accuracy.
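The sketch below (not the authors' code) illustrates the baseline comparison: SVM, Naïve Bayes, and Random Forest evaluated with cross-validated accuracy on TF-IDF features. The toy texts and labels stand in for the Indonesian-language Twitter dataset, and the genetic-algorithm tuning of each model is omitted; it would wrap this evaluation in a hyperparameter search.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier

# toy stand-in for a labeled Indonesian-tweet corpus (replace with the real dataset)
texts = ["ini contoh kalimat biasa nomor %d" % i for i in range(40)] + \
        ["ini contoh ujaran kasar nomor %d" % i for i in range(40)]
labels = ["neutral"] * 40 + ["abusive"] * 40

models = {
    "SVM": SVC(kernel="linear"),
    "NB": MultinomialNB(),
    "RFDT": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)       # TF-IDF features + classifier
    scores = cross_val_score(pipe, texts, labels, cv=5, scoring="accuracy")
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")
```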

Keywords: abusive language, hate speech, machine learning, optimization, social media

Procedia PDF Downloads 101
17578 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements. Since this is impossible, points at regular intervals are measured to characterize the surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been widely used in DTM construction; at present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications: a 3D point cloud is created with LiDAR technology by obtaining numerous point data. More recently, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets using a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
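A minimal sketch of the random data-reduction step follows, with a synthetic point cloud standing in for the image-based data. The paper interpolates with Kriging; scipy's linear griddata is substituted here to keep the example self-contained.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# synthetic stand-in for an image-based point cloud: (x, y) positions and elevations z
pts = rng.uniform(0, 1000, size=(20_000, 2))
z = 50 + 0.02 * pts[:, 0] + 5 * np.sin(pts[:, 1] / 100) + rng.normal(0, 0.2, len(pts))

gx, gy = np.mgrid[0:1000:200j, 0:1000:200j]        # DTM grid
ref = griddata(pts, z, (gx, gy), method="linear")  # DTM from the full (100%) data set

for frac in (0.75, 0.50, 0.25, 0.05):
    idx = rng.choice(len(z), size=int(frac * len(z)), replace=False)  # random reduction
    dtm = griddata(pts[idx], z[idx], (gx, gy), method="linear")
    rmse = np.sqrt(np.nanmean((dtm - ref) ** 2))
    print(f"{frac:4.0%} of points kept -> RMSE vs full DTM = {rmse:.3f} m")
```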

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 124
17577 Factors Affecting the Ultimate Compressive Strength of the Quaternary Calcarenites, North Western Desert, Egypt

Authors: M. A. Rashed, A. S. Mansour, H. Faris, W. Afify

Abstract:

The calcarenite carbonate rocks of the Quaternary ridges, which extend along the northwestern Mediterranean coastal plain of Egypt, represent an excellent model for the transformation of loose sediments into true sedimentary rocks through the different stages of meteoric diagenesis. The depositional and diagenetic fabrics of the rocks, in addition to the strata orientation, strongly affect their ultimate compressive strength and other geotechnical properties. There is a marked increase in the ultimate compressive strength (UCS) from the first to the fourth ridge rock samples. The lowest values are related to the loosely packed, weakly cemented aragonitic ooid sediments with high porosity, besides the irregular distribution of cement, which reduces the ability of these rocks to withstand crushing under direct pressure. The high UCS values are attributed to the low porosity, the presence of micritic cement, the reduction in grain size, and the occurrence of micritization and calcretization processes. The strata orientation has a notable effect on the measured UCS. The lowest values were recorded for samples cored in the inclined direction, whereas the highest values were observed in most samples cored vertically and parallel to the bedding plane; in the inclined direction, the bedding planes were oriented close to the plane of maximum shear stress. The lowest and highest anisotropy values were recorded for the first and third ridge rock samples, respectively. This may be attributed to the relative homogeneity and well-sorted grainstone of the first ridge rock samples, and to the relative heterogeneity in grain and pore size distribution and degree of cementation of the third ridge rock samples, besides the abundance of shell fragments with intra-particle pore spaces, which may produce lines of weakness within the rock.

Keywords: compressive strength, anisotropy, calcarenites, Egypt

Procedia PDF Downloads 347
17576 Stability Bound of Ruin Probability in a Reduced Two-Dimensional Risk Model

Authors: Zina Benouaret, Djamil Aissani

Abstract:

In this work, we introduce the qualitative and quantitative concepts of the strong stability method for a risk process modeling two lines of business of the same insurance company, or an insurance and a reinsurance company that divide both claims and premiums between them in a certain proportion. The proposed approach is based on identifying the ruin probability associated with the considered model with the stationary distribution of a Markov random process called the reversed process. Our objective, after clarifying the stability condition and the perturbation domain of the parameters, is to obtain a stability inequality for the ruin probability, which is then applied to estimate the approximation error made when a model with perturbed parameters is replaced by the considered model. In the stability bound obtained, all constants are written explicitly.

Keywords: Markov chain, risk models, ruin probabilities, strong stability analysis

Procedia PDF Downloads 226
17575 Programming with Grammars

Authors: Peter M. Maurer

Abstract:

DGL is a context-free grammar-based tool for generating random data. Many types of simulator input data require some computation to be placed in the proper format. For example, it might be necessary to generate ordered triples in which the third element is the sum of the first two elements, or to generate random numbers in some sorted order. Although DGL is universal in computational power, generating these types of data with it is extremely difficult. To overcome this problem, we have enhanced DGL with features that permit direct computation within the structure of a context-free grammar. The features have been implemented as special types of productions, preserving the context-free flavor of DGL specifications.
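The following sketch illustrates the idea of computation embedded in a generator, using plain Python functions as stand-ins for productions; it assumes nothing about DGL's actual syntax.

```python
import random

def digit():                       # ordinary production: a random terminal
    return random.randint(0, 9)

def triple():                      # production with an embedded computation
    a, b = digit(), digit()
    return (a, b, a + b)           # the third element is computed, not sampled

def sorted_numbers(n=5):           # production whose output must appear in sorted order
    return sorted(random.random() for _ in range(n))

print(triple())
print(sorted_numbers())
```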

Keywords: DGL, enhanced context-free grammars, programming constructs, random data generation

Procedia PDF Downloads 119
17574 Assessing Functional Structure in European Marine Ecosystems Using a Vector-Autoregressive Spatio-Temporal Model

Authors: Katyana A. Vert-Pre, James T. Thorson, Thomas Trancart, Eric Feunteun

Abstract:

In marine ecosystems, spatial and temporal species structure is an important component of ecosystems' response to anthropogenic and environmental factors. Although spatial distribution patterns and temporal series of fish abundance have been studied in the past, little research has addressed the joint dynamic spatio-temporal functional patterns of marine ecosystems and their use in multispecies management and conservation. Each species performs a function in the ecosystem, and the distribution of these species might not be random: a heterogeneous functional distribution will lead to an ecosystem that is more resilient to external factors. Applying a Vector-Autoregressive Spatio-Temporal (VAST) model for count data, we estimate the spatio-temporal distribution, shift in time, and abundance of 140 species of the eastern English Channel, Bay of Biscay, and Mediterranean Sea. From the model outputs, we determined spatio-temporal clusters, calculating p-values for hierarchical clustering via multiscale bootstrap resampling. We then designed a functional map based on the defined clusters. We found that the species distribution within the ecosystem was not random: species evolved in space and time in clusters. Moreover, these clusters remained similar over time, because species of the same cluster often shifted in sync, keeping the overall structure of the ecosystem similar over time. Knowing which species co-exist within these clusters could help predict the distribution and abundance of data-poor species. Further analysis is being performed to assess the ecological functions represented in each cluster.
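A minimal sketch of the clustering step is given below, with a synthetic species-by-cell density table standing in for the VAST output. The paper's multiscale bootstrap p-values (pvclust-style) are approximated here by a plain bootstrap of cluster co-assignment frequencies.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
density = rng.gamma(2.0, 1.0, size=(140, 60))     # 140 species x 60 spatial cells (synthetic)

def cluster_labels(mat, k=6):
    Z = linkage(pdist(mat, metric="correlation"), method="average")
    return fcluster(Z, t=k, criterion="maxclust")

labels = cluster_labels(density)

# bootstrap over cells: how often does each species pair stay in the same cluster?
B = 200
co = np.zeros((140, 140))
for _ in range(B):
    cols = rng.choice(60, size=60, replace=True)
    lb = cluster_labels(density[:, cols])
    co += (lb[:, None] == lb[None, :])
support = co / B
print("mean within-cluster support:",
      round(float(support[labels[:, None] == labels[None, :]].mean()), 2))
```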

Keywords: cluster distribution shift, European marine ecosystems, functional distribution, spatio-temporal model

Procedia PDF Downloads 166
17573 A New Concept for Deriving the Expected Value of Fuzzy Random Variables

Authors: Liang-Hsuan Chen, Chia-Jung Chang

Abstract:

Fuzzy random variables (FRVs) have been introduced as an imprecise concept of numeric values for characterizing imprecise knowledge. Descriptive parameters can be used to describe the primary features of a set of fuzzy random observations. In fuzzy environments, expected values are usually represented as fuzzy-valued, interval-valued, or numeric-valued descriptive parameters using various metrics. Instead of the area metric usually adopted in the relevant studies, a numeric expected value is proposed in this study using a distance metric, based on the two characteristics (fuzziness and randomness) of FRVs. Compared with existing measures, the proposed numeric expected value coincides with those obtained using the other metric when only triangular membership functions are used; however, the proposed approach has the advantages of intuitiveness and computational efficiency when the membership functions are not triangular. An example with three datasets is provided to verify the proposed approach.

Keywords: fuzzy random variables, distance measure, expected value, descriptive parameters

Procedia PDF Downloads 315
17572 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by phenomena such as photon emission, absorption, and scattering. Solving the governing integro-differential equation of radiative transfer is a complex process, especially when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method increases the accuracy of estimation on the one hand and the computational cost on the other. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission and absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences; Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The history of some randomly sampled photon bundles was recorded to train a back-propagation Artificial Neural Network (ANN) model; the flux calculated using the standard quasi PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) than the other cases. There is great scope for machine learning models to further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the radiative environment can be fully captured by the ANN model. Better results can be achieved in this unexplored domain.
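A minimal sketch of the PMC-versus-QMC comparison follows, on a toy purely absorbing slab for which the absorbed fraction is known exactly; the slab optical thickness and sample size are illustrative, not the paper's configuration.

```python
import numpy as np
from scipy.stats import qmc

tau, n = 2.0, 2**14                     # slab optical thickness, number of photon bundles
exact = 1.0 - np.exp(-tau)              # exact absorbed fraction

# standard PMC: uniform pseudo-random numbers -> optical path s = -ln(1 - u)
u_pmc = np.random.default_rng(0).random(n)
absorbed_pmc = (-np.log(1.0 - u_pmc) < tau).mean()

# quasi PMC: scrambled Sobol points used in place of the pseudo-random numbers
u_qmc = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()
absorbed_qmc = (-np.log(1.0 - u_qmc) < tau).mean()

print(f"exact : {exact:.5f}")
print(f"PMC   : {absorbed_pmc:.5f}  (error {abs(absorbed_pmc - exact):.1e})")
print(f"QMC   : {absorbed_qmc:.5f}  (error {abs(absorbed_qmc - exact):.1e})")
```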

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 187
17571 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth extremes of the system responses are obtained; the random set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables, and the probability share of each finite element calculation is determined from the probabilities assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models is compared to the in situ measurements, and good agreement is observed. The comparison also shows that the Random Set Finite Element Method is suitable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
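The sketch below illustrates the random-set bookkeeping on two assumed input variables, with a hypothetical surrogate standing in for one finite element run; the intervals, probability assignments, and threshold are illustrative only.

```python
from itertools import product

# focal elements per input variable: (interval, basic probability assignment) - assumed values
friction_angle = [((28.0, 34.0), 0.6), ((25.0, 37.0), 0.4)]   # degrees
youngs_modulus = [((40.0, 70.0), 0.5), ((30.0, 90.0), 0.5)]   # MPa

def surrogate_displacement(phi, E):
    """Hypothetical stand-in for one finite element run (mm of wall-top movement)."""
    return 1200.0 / E + (40.0 - phi) * 0.8

results = []   # (displacement interval, joint probability mass)
for (phi_int, p1), (E_int, p2) in product(friction_angle, youngs_modulus):
    corners = [surrogate_displacement(phi, E) for phi in phi_int for E in E_int]
    results.append(((min(corners), max(corners)), p1 * p2))

threshold = 45.0   # mm, assumed limiting horizontal displacement
belief = sum(p for (lo, hi), p in results if hi <= threshold)        # certainly below the limit
plausibility = sum(p for (lo, hi), p in results if lo <= threshold)  # possibly below the limit
print(f"Belief(displacement <= {threshold} mm) = {belief:.2f}, Plausibility = {plausibility:.2f}")
```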

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 243
17570 Radio Frequency Identification Encryption via Modified Two Dimensional Logistic Map

Authors: Hongmin Deng, Qionghua Wang

Abstract:

A modified two-dimensional (2D) logistic map based on cross feedback control is proposed. This 2D map exhibits more random chaotic dynamical properties than the classic one-dimensional (1D) logistic map in the statistical characteristics analysis, so it is utilized as a pseudo-random (PN) sequence generator: the obtained real-valued PN sequence is first quantized and then applied to a radio frequency identification (RFID) communication system. The system is experimentally validated on a Cortex-M0 development board, which demonstrates its effectiveness in terms of key generation, key-space size, and security. Finally, further cryptanalysis is carried out with the statistical test suite of the National Institute of Standards and Technology (NIST).
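A minimal sketch of a cross-coupled 2D logistic map used as a PN-sequence generator is shown below; the coupling form, parameters, and quantization rule are assumed for illustration and are not taken from the paper.

```python
import numpy as np

def pn_bits(n, x=0.3, y=0.7, r1=3.99, r2=3.98, eps=0.2):
    """Generate n PN bits from a cross-coupled 2D logistic map (illustrative form)."""
    bits = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x_new = r1 * x * (1 - x) + eps * y * (1 - y)   # cross feedback (assumed coupling)
        y_new = r2 * y * (1 - y) + eps * x * (1 - x)
        x, y = x_new % 1.0, y_new % 1.0                # keep the state in [0, 1)
        bits[i] = 1 if (x + y) % 1.0 > 0.5 else 0      # simple quantization rule
    return bits

stream = pn_bits(10_000)
print("bit balance:", stream.mean())                   # should be close to 0.5
print("first 32 bits:", "".join(map(str, stream[:32])))
```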

Keywords: chaos encryption, logistic map, pseudo-random sequence, RFID

Procedia PDF Downloads 376
17569 Effect of the Tidal Charge Parameter on CMBR Temperature Anisotropies

Authors: Evariste Boj, Jan Schee

Abstract:

We present the temperature anisotropy of the cosmic microwave background radiation (CMBR) due to an inhomogeneity region constructed on a 3-brane in the framework of a Randall-Sundrum one-brane model immersed in a 5D bulk AdS₅ spacetime. We employ the Brane-World Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmological model to describe the cosmic expansion on the brane. The inhomogeneity is modeled by a static, spherically symmetric spacetime that replaces the spherically symmetric part of a dust-filled universe and is connected to the FLRW spacetime through the junction conditions. As the vacuum region expands, it induces an additional frequency shift in a CMBR photon passing through this inhomogeneity in comparison with a photon propagating through a pure FLRW spacetime. This frequency shift is associated with an effective temperature change of the CMBR in the corresponding direction. We give an estimate of how the CMBR effective temperature change varies with the value of the tidal charge parameter.

Keywords: CMBR, Randall-Sundrum model, Rees-Sciama effect, Braneworld

Procedia PDF Downloads 189
17568 Solving Process Planning, Weighted Apparent Tardiness Cost Dispatching, and Weighted Processing plus Weight Due-Date Assignment Simultaneously Using a Hybrid Search

Authors: Halil Ibrahim Demir, Caner Erden, Abdullah Hulusi Kokcam, Mumtaz Ipek

Abstract:

Process planning, scheduling, and due date assignment are three important manufacturing functions that are usually studied independently in the literature. There are hundreds of works on the IPPS and SWDDA problems but only a few on the IPPSDDA problem. Integrating these three functions is crucial because of the strong relationships between them. Since the scheduling problem alone is NP-hard, the integrated problem is even harder to solve. This study focuses on the integration of these functions. The sum of weighted tardiness, earliness, and due-date-related costs is used as the penalty function. Random search and hybrid metaheuristics are used to solve the integrated problem. The marginal improvement of random search is very high in the early iterations and diminishes sharply in later iterations; at that point, directed search contributes more to the marginal improvement than random search does. In this study, random and genetic search methods are therefore combined to find better solutions. Results show that the overall performance improves as the integration level increases.
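The sketch below illustrates only the hybrid-search idea on the dispatching part of the problem: a random search seeds a small genetic algorithm that minimizes total weighted tardiness. Job data and algorithm settings are illustrative, not those of the study.

```python
import random

random.seed(0)
n = 12
proc = [random.randint(2, 10) for _ in range(n)]    # processing times
due = [random.randint(10, 60) for _ in range(n)]    # due dates
w = [random.randint(1, 5) for _ in range(n)]        # tardiness weights

def cost(seq):
    t, total = 0, 0
    for j in seq:
        t += proc[j]
        total += w[j] * max(0, t - due[j])           # weighted tardiness of job j
    return total

# phase 1: random search provides a diverse, reasonably good initial population
pool = sorted((random.sample(range(n), n) for _ in range(500)), key=cost)[:20]

# phase 2: directed (genetic) search refines it
for _ in range(200):
    p1, p2 = random.sample(pool[:10], 2)
    cut = random.randrange(1, n - 1)
    child = p1[:cut] + [j for j in p2 if j not in p1[:cut]]   # order crossover
    i, k = random.sample(range(n), 2)
    child[i], child[k] = child[k], child[i]                   # swap mutation
    pool = sorted(pool + [child], key=cost)[:20]

print("best weighted tardiness:", cost(pool[0]), "sequence:", pool[0])
```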

Keywords: process planning, genetic algorithm, hybrid search, random search, weighted due-date assignment, weighted scheduling

Procedia PDF Downloads 337
17567 Design and Validation of Cutting Performance of Ceramic Matrix Composites Using FEM Simulations

Authors: Zohaib Ellahi, Guolong Zhao

Abstract:

Ceramic matrix composite (CMC) materials possess high strength, wear resistance, and anisotropy; machining them is therefore very difficult and costly. In this research, FEM simulations and physical experiments were carried out to assess the machinability of carbon fiber reinforced silicon carbide (C/SiC) with a polycrystalline diamond (PCD) tool in a slot milling process. A finite element model was generated in Abaqus/CAE, and the milling operation was simulated using a user-defined material subroutine. The effect of different milling parameters on cutting forces and stresses was calculated through the FEM simulations and compared with experimental results to validate the finite element model. Cutting forces in the x- and y-directions were obtained from both the experiments and the finite element model, and good agreement was found between them. The resultant cutting forces decrease with increasing cutting speed and increase with increasing feed per tooth and depth of cut. When machining is performed along the fiber direction, the stresses generated near the tool edge are at a minimum; they increase with the fiber cutting angle.

Keywords: experimental & numerical investigation, C/SiC cutting performance analysis, milling of CMCs, CMC composite stress analysis

Procedia PDF Downloads 57
17566 House Price Index Predicts a Larger Impact of Habitat Loss than Primary Productivity on the Biodiversity of North American Avian Communities

Authors: Marlen Acosta Alamo, Lisa Manne, Richard Veit

Abstract:

Habitat loss due to land use change is one of the leading causes of biodiversity loss worldwide. This form of habitat loss is a non-random phenomenon, since the environmental factors that make an area suitable for supporting high local biodiversity overlap with those that make it attractive for urban development. We aimed to compare the effect of two non-random habitat loss predictors on the richness, abundance, and rarity of nature-affiliated and human-affiliated North American breeding birds. For each group of birds, we simulated non-random habitat loss using two predictors: the House Price Index as a measure of the attractiveness of an area for humans, and the Normalized Difference Vegetation Index as a proxy for primary productivity. We compared the results of the two non-random simulation sets and one set of random habitat loss simulations using an analysis of variance, followed by a Tukey-Kramer test when appropriate. The attractiveness of an area for humans predicted greater richness loss and greater increases in rarity than primary productivity and random habitat loss for both nature-affiliated and human-affiliated birds. For example, at 50% habitat loss, the attractiveness of an area for humans produced richness estimates at least 5% lower and rarity estimates at least 40% higher than primary productivity and random habitat loss for both groups of birds. Only for the species abundance of nature-affiliated birds did the attractiveness of an area for humans fail to outperform primary productivity as a predictor of biodiversity following habitat loss. We demonstrate the value of the House Price Index, which can be used in conservation assessments as an index of the risk of habitat loss for natural communities. Our results thus have relevant implications for sustainable urban land-use planning practices and can guide stakeholders and developers in their efforts to conserve local biodiversity.

Keywords: biodiversity loss, bird biodiversity, house price index, non-random habitat loss

Procedia PDF Downloads 55
17565 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in the current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs, or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most Finite Element Analysis studies, the smeared crack approach is used for crack prediction; the disadvantage of this model is that the typical strain localization of a crack cannot be seen at the element level. Crack propagation in concrete is a discontinuous process characterized by factors such as the initially random distribution of defects and the scatter of material properties. Such behavior requires adequate models and simulation methods, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. A parameter study was therefore carried out to investigate (i) the influence of the transverse reinforcement on the stress distribution in concrete under bending and (ii) the dependence of crack initiation on the diameter and spacing of the transverse reinforcement. The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of the concrete, correlated normally and lognormally distributed random fields with different correlation lengths were generated for the Finite Element Analysis. The paper also presents and discusses different methods for generating random fields, e.g. the Covariance Matrix Decomposition Method. For all computations, a plastic constitutive law with softening was used to model crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions are validated against experimental studies on R/C panels carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation for random field parameters that realistically model the uncertainty of the tensile strength is also given. The aim of this research was to present a method in which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be captured in Finite Element Analysis.
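A minimal sketch of the Covariance Matrix Decomposition Method is given below: a 1D lognormal random field of the concrete tensile strength generated from a Cholesky factor of an exponential covariance matrix. The mean, scatter, and correlation length are assumed values.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 5.0, 201)            # positions along the member [m]
mean_ft, cov_ft, l_c = 3.0, 0.15, 0.5     # mean [MPa], coefficient of variation, correlation length [m]

# parameters of the underlying Gaussian field for a lognormal target
sigma_ln = np.sqrt(np.log(1.0 + cov_ft**2))
mu_ln = np.log(mean_ft) - 0.5 * sigma_ln**2

# covariance matrix (exponential kernel) and its Cholesky factor
C = sigma_ln**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))       # small jitter for numerical stability

field = np.exp(mu_ln + Lc @ rng.standard_normal(len(x)))  # one realization of f_t(x)
print("min / mean / max f_t [MPa]:",
      round(field.min(), 2), round(field.mean(), 2), round(field.max(), 2))
```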

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 116
17564 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling

Authors: Aamna Lawrence, Ashutosh Mishra

Abstract:

Tremors occur in 60% of patients with Multiple Sclerosis (MS), the most common demyelinating disease affecting the central and peripheral nervous system, and are the primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications, which makes non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre, taking into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we modeled the random demyelination pattern in a nerve fibre that typically manifests in MS using the High-Density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions, each with random length and myelin thickness. The arrival time of action potentials traveling the demyelinated and the normally myelinated nerve fibre between two fixed points in space was noted, and its relationship with the nerve fibre radius, ranging from 5 µm to 12 µm, was analyzed. Interestingly, there were no overlaps between the arrival times of action potentials traversing the demyelinated and the normally myelinated nerve fibres, even when only a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern, blocking only the delayed tremor-causing action potentials. The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it, thus paving the way for wearable neuro-rehabilitative technologies.

Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor

Procedia PDF Downloads 103
17563 Rounding Technique's Application in Schnorr Signature Algorithm: Known Partially Most Significant Bits of Nonce

Authors: Wenjie Qin, Kewei Lv

Abstract:

In 1996, Boneh and Venkatesan proposed the Hidden Number Problem (HNP) and proved that the most significant bits (MSBs) of the computational Diffie-Hellman key exchange scheme and related schemes are unpredictable bits. They also gave a lattice rounding technique to solve the HNP in the non-uniform model. In this paper, we put forward a new concept, the Schnorr-MSB-HNP. We reduce the problem of recovering the Schnorr signature private key, given a few consecutive most significant bits of the random nonce (used at each signature generation), to the Schnorr-MSB-HNP, and then use the rounding technique to solve the Schnorr-MSB-HNP. We conclude that if there is a 'miraculous box' which takes the random nonce as input and outputs the 2 log log q (q a prime) most significant bits of the nonce, the signature private key can be obtained by choosing 2 log q signature messages at random. Thus we obtain an attack on the Schnorr signature private key.

Keywords: rounding technique, most significant bits, Schnorr signature algorithm, nonce, Schnorr-MSB-HNP

Procedia PDF Downloads 201
17562 Effect of Compaction Method on the Mechanical and Anisotropic Properties of Asphalt Mixtures

Authors: Mai Sirhan, Arieh Sidess

Abstract:

Asphaltic mixture is a heterogeneous material composed of three main components: aggregates, bitumen, and air voids. Professional experience and the scientific literature categorize asphaltic mixture as a viscoelastic material whose behavior is determined by temperature and loading rate. The properties of the asphaltic mixture used under service conditions are characterized by compacting and testing cylindrical asphalt samples in the laboratory. These samples must closely resemble the internal structure of the mixture achieved in service and the mechanical characteristics of the compacted asphalt layer in the pavement. The laboratory samples are usually compacted at temperatures between 140 and 160 degrees Celsius; in this temperature range, the asphalt has low strength. The laboratory samples are compacted using dynamic or vibratory compaction methods. In the compaction process, the aggregates tend to align themselves in certain directions, which leads to anisotropic behavior of the asphaltic mixture. This issue was studied in the Strategic Highway Research Program (SHRP), which recommended using the gyratory compactor on the assumption that this method best mimics compaction in service. In Israel, the Netivei Israel company is considering adopting the gyratory method as a replacement for the Marshall method used today; therefore, the suitability of the gyratory method for Israeli asphaltic mixtures should be investigated. In this research, we aimed to examine the impact of the compaction method on the mechanical characteristics of asphaltic mixtures and to evaluate the degree of anisotropy in relation to the compaction method. To carry out this research, samples were compacted in vibratory and gyratory compactors. These samples were cored cylindrically both vertically (in the compaction direction) and horizontally (perpendicular to the compaction direction) and tested under dynamic modulus and permanent deformation tests. The comparison of the test results showed that: (1) specimens compacted by the vibratory compactor had higher dynamic modulus values than specimens compacted by the gyratory compactor; (2) both vibratory- and gyratory-compacted specimens showed anisotropic behavior, especially at high temperatures, and the degree of anisotropy is higher in specimens compacted by the gyratory method; (3) specimens compacted by the vibratory method and cored vertically had the highest resistance to rutting, whereas vibratory-compacted specimens cored horizontally had the lowest; (4) these differences between the specimen types arise mainly from the different internal arrangement of aggregates resulting from the compaction method; and (5) based on an initial prediction of the performance of a flexible pavement containing an asphalt layer with the characteristics obtained in this research, it can be concluded that the compaction method and the degree of anisotropy have a significant impact on the strains that develop in the pavement and on its resistance to fatigue and rutting defects.

Keywords: anisotropy, asphalt compaction, dynamic modulus, gyratory compactor, mechanical properties, permanent deformation, vibratory compactor

Procedia PDF Downloads 97
17561 Design and Implementation of Pseudorandom Number Generator Using Android Sensors

Authors: Mochamad Beta Auditama, Yusuf Kurniawan

Abstract:

A smartphone or tablet requires strong randomness to establish secure encrypted communication, encrypt files, etc. Random number generation is therefore one of the main keys to providing secrecy. Android devices are equipped with hardware-based sensors, such as the accelerometer and gyroscope. Each of these sensors provides a stochastic process that has the potential to be used as an extra randomness source, in addition to the /dev/random and /dev/urandom pseudorandom number generators. Android sensors can provide randomness automatically. To obtain randomness from Android sensors, each sensor is used to construct an entropy source. After all entropy sources are constructed, their outputs are combined to provide more entropy. Then, a deterministic process is used to produce a sequence of random bits from the combined output. All of these processes are carried out in accordance with NIST SP 800-22 and the NIST SP 800-90 series. The operating conditions are: 1) the processes run in Android user space, and 2) the Android device is placed motionless on a desk.
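The sketch below illustrates the architecture only (simulated sensor streams pooled into a seed, then expanded by a hash-based deterministic process); it is neither the authors' implementation nor a vetted NIST SP 800-90 DRBG.

```python
import hashlib
import random
import struct

def read_sensor_samples(n=256):
    """Stand-in for accelerometer/gyroscope readings (simulated jitter)."""
    return [random.gauss(0.0, 0.01) for _ in range(n)]

def pool_entropy(sources):
    """Combine the outputs of all entropy sources into one pooled 256-bit seed."""
    h = hashlib.sha256()
    for samples in sources:
        for v in samples:
            h.update(struct.pack("<d", v))
    return h.digest()

def expand(seed, n_bytes):
    """Deterministic process: hash-counter expansion of the pooled seed into output bits."""
    out, counter = b"", 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n_bytes]

seed = pool_entropy([read_sensor_samples(), read_sensor_samples()])
print(expand(seed, 32).hex())
```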

Keywords: Android hardware-based sensor, deterministic process, entropy source, random number generation/generators

Procedia PDF Downloads 341
17560 Predictive Models of Ruin Probability in Retirement Withdrawal Strategies

Authors: Yuanjin Liu

Abstract:

Retirement withdrawal strategies are very important for minimizing the probability of ruin in retirement. The ruin probability is modeled as a function of initial withdrawal age, gender, asset allocation, inflation rate, and initial withdrawal rate. It is obtained by simulation based on the 2019 period life table for Social Security, the IRS Required Minimum Distribution (RMD) worksheets, US historical bond and equity returns, and inflation rates. Several popular machine learning algorithms, namely the generalized additive model, random forest, support vector machine, extreme gradient boosting, and artificial neural network, are built. Model validation and selection are based on test errors, using hyperparameter tuning and a train-test split. The optimal model is recommended for retirees to monitor the ruin probability, and the optimal withdrawal strategy can be obtained from the optimal predictive model.
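A minimal sketch of the modelling pipeline follows, on a synthetic data set whose "ruin probability" surface is invented for illustration; it shows only the fit-and-validate step with a random forest and a train-test split, not the paper's life-table and return simulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
age = rng.uniform(60, 75, n)               # initial withdrawal age
rate = rng.uniform(0.02, 0.08, n)          # initial withdrawal rate
equity = rng.uniform(0.0, 1.0, n)          # equity share of the asset allocation
inflation = rng.uniform(0.01, 0.04, n)

# invented "ruin probability" surface plus noise (placeholder for the simulation output)
ruin = np.clip(0.3 + 8 * (rate - 0.04) - 0.01 * (age - 65)
               - 0.15 * equity * (1 - equity) + 4 * (inflation - 0.02)
               + rng.normal(0, 0.02, n), 0, 1)

X = np.column_stack([age, rate, equity, inflation])
X_tr, X_te, y_tr, y_te = train_test_split(X, ruin, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 4))
```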

Keywords: ruin probability, retirement withdrawal strategies, predictive models, optimal model

Procedia PDF Downloads 42
17559 Parameter Estimation for Contact Tracing in Graph-Based Models

Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar

Abstract:

We adopt a maximum-likelihood framework to estimate the parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited to realistic situations (small contact tracing probability, or small probability of detecting index cases). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that a power-law and a negative binomial degree distribution in particular fit the data well, and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.

Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference

Procedia PDF Downloads 48
17558 Continuous Processing Approaches for Tunable Asymmetric Photochemical Synthesis

Authors: Amanda C. Evans

Abstract:

Enabling technologies such as continuous processing (CP) approaches can provide the tools needed to control and manipulate reactivities and transform chemical reactions into micro-controlled in-flow processes. Traditional synthetic approaches can be radically transformed by the application of CP, facilitating the pairing of chemical methodologies with technologies from other disciplines. CP supports sustainable processes that controllably generate reaction specificity utilizing supramolecular interactions. Continuous photochemical processing is an emerging field of investigation. The use of light to drive chemical reactivity is not novel, but the controlled use of specific and tunable wavelengths of light to selectively generate molecular structure under continuous processing conditions is an innovative approach towards chemical synthesis. This investigation focuses on the use of circularly polarized (cp) light as a sustainable catalyst for the CP generation of asymmetric molecules. Chiral photolysis has already been achieved under batch, solid-phase conditions: using synchrotron-sourced cp light, asymmetric photolytic selectivities of up to 4.2% enantiomeric excess (e.e.) have been reported. In order to determine the optimal wavelengths to use for irradiation with cp light for any given molecular building block, CD and anisotropy spectra for each building block of interest have been generated in two different solvents (water, hexafluoroisopropanol) across a range of wavelengths (130-400 nm). These spectra are being used to support a series of CP experiments using cp light to generate enantioselectivity.

Keywords: anisotropy, asymmetry, flow chemistry, active pharmaceutical ingredients

Procedia PDF Downloads 129
17557 Optimism and Entrepreneurial Intentions: The Mediating Role of Emotional Intelligence

Authors: Neta Kela Madar, Tali Teeni-Harari, Tamar Icekson, Yaron Sela

Abstract:

This paper proposes and empirically tests a theoretical model positing relationships between dispositional optimism, emotional intelligence, and entrepreneurial intention. To the authors' best knowledge, this study examines for the first time the role of dispositional optimism together with emotional intelligence as predictors of entrepreneurial intentions. The findings suggest that optimism may increase entrepreneurial intentions indirectly by enhancing emotional intelligence. The model formulation is based on a random survey of students (N = 227), and model parameter estimation was supported by Structural Equation Modeling (SEM). Results indicate that students' optimism and emotional intelligence are associated with increased levels of entrepreneurial intention. Additionally, the present study argues that emotional intelligence mediates the positive relationship between optimism and entrepreneurial intention. Theoretical and practical implications of the model are discussed.

Keywords: entrepreneurial intentions, emotional intelligence, optimism, dispositional optimism

Procedia PDF Downloads 186
17556 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction

Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey

Abstract:

In this paper, we propose a novel approach combining a Neural Network with Particle Swarm Optimization for software reliability prediction. We first explain how a compound function can be applied in a neural network so that a Flexible Logistic (S-shaped) Growth Curve (FLGC) model can be derived. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid becoming trapped in local minima, the Particle Swarm Optimization method is applied to train the proposed model using failure test data sets. We derive the proposed model using computational-intelligence-based modeling; thus, the proposed model becomes a Neuro-Particle Swarm Optimization (NPSO) model. We test the model with different inertia weights for updating particle positions and velocities, and obtain results based on the best inertia weight, compared with personal-best-oriented PSO (pPSO), which helps to choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated through a real failure test data set. The results obtained from the experiments show that the proposed model has fairly accurate prediction capability for software reliability.

Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization

Procedia PDF Downloads 321
17555 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients

Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi

Abstract:

Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random effect model for time-to-event data in which the random effect has a multiplicative influence on the baseline hazard function. This article aims to investigate the use of the gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were taken from the recorded information of patients with liver cirrhosis who were scheduled for liver transplantation and were followed up for at least seven years at Imam Khomeini Hospital in Iran. To determine the effective factors for cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, namely the Exponential and Weibull distributions, were considered for the survival time. Data analysis was performed using R software, and a significance level of 0.05 was used for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second factor. Overall, the mean survival over the 7-year follow-up was 28.44 months; for deceased and censored patients it was 19.33 and 31.79 months, respectively. The Exponential and Weibull survival models with the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the effective factors for the time of death of patients with liver cirrhosis in the presence of latent variables, the gamma frailty model with parametric distributions seems suitable.

Keywords: frailty model, latent variables, liver cirrhosis, parametric distribution

Procedia PDF Downloads 229
17554 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave in an uncertain environment. The problem is first formulated in the framework of a bi-objective, multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods, and the methods are compared on the optimal values of the two objective functions and the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results demonstrate that the augmented-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and the CPU time. A sensitivity analysis is carried out to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 99
17553 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation exist in the literature, and different representation approaches lead to different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the task is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis; in the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that may be aleatory but for which sufficient data are not available to adequately model it as a single random variable; for example, the parameters of a normal variable, e.g. the mean and standard deviation, may not be precisely known but can be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties; each parameter of the random variable is an unknown element of a known interval, and this uncertainty is reducible. We observe that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive, which is why it has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy is needed to convert uncertainty described by interval data into a probabilistic framework; this is achieved in this study by using PBO.
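A minimal sketch contrasting the two propagation ideas named above is given below, for a toy response y = x1**2 + sin(x2) with assumed input distributions: sampling-based propagation versus a first-order (delta-method) error analysis.

```python
import numpy as np

def response(x1, x2):                       # toy system response (illustrative)
    return x1**2 + np.sin(x2)

mu = np.array([2.0, 0.5])                   # input means (assumed)
sigma = np.array([0.1, 0.2])                # input standard deviations (assumed)

# (i) sampling-based propagation
rng = np.random.default_rng(0)
samples = response(rng.normal(mu[0], sigma[0], 100_000),
                   rng.normal(mu[1], sigma[1], 100_000))
print("sampling   : mean", round(float(samples.mean()), 4), " std", round(float(samples.std()), 4))

# (ii) first-order error analysis around the mean: sigma_y^2 ~ sum (dy/dxi)^2 sigma_i^2
grad = np.array([2 * mu[0], np.cos(mu[1])])           # analytic partial derivatives
std_fo = float(np.sqrt(np.sum((grad * sigma) ** 2)))
print("first-order: mean", round(float(response(*mu)), 4), " std", round(std_fo, 4))
```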

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 208
17552 Random Matrix Theory Analysis of Cross-Correlation in the Nigerian Stock Exchange

Authors: Chimezie P. Nnanwa, Thomas C. Urama, Patrick O. Ezepue

Abstract:

In this paper, we use Random Matrix Theory (RMT) to analyze the eigen-structure of the empirical correlations of 82 stocks that were consistently traded on the Nigerian Stock Exchange (NSE) over a 4-year study period, 3 August 2009 to 26 August 2013. We apply the Marchenko-Pastur distribution of eigenvalues of a purely random matrix to investigate the presence of investment-pertinent information in the empirical correlation matrix of the selected stocks. We use the standard normal distribution of eigenvector components hypothesised by RMT to assess deviations of the empirical eigenvectors from this distribution for different eigenvalues. We also use the Inverse Participation Ratio to measure the deviation of the eigenvectors of the empirical correlation matrix from RMT results. These preliminary results on the dynamics of asset price correlations in the NSE are important for improving the risk-return trade-offs associated with Markowitz portfolio optimization in the stock exchange, which is pursued in future work.
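The sketch below reproduces the two RMT diagnostics on synthetic factor-model returns standing in for the NSE data: eigenvalues of the empirical correlation matrix compared with the Marchenko-Pastur noise band, and the Inverse Participation Ratio of each eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 82, 1000                              # 82 stocks, T return observations
market = rng.normal(0, 1, T)                 # common "market" factor
returns = 0.3 * market[:, None] + rng.normal(0, 1, (T, N))

R = (returns - returns.mean(0)) / returns.std(0)
C = (R.T @ R) / T                            # empirical correlation matrix
evals, evecs = np.linalg.eigh(C)

q = N / T                                    # Marchenko-Pastur band for a purely random matrix
lam_min, lam_max = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
print(f"MP band: [{lam_min:.3f}, {lam_max:.3f}]")
print("eigenvalues above the noise band:", np.round(evals[evals > lam_max], 3))

ipr = np.sum(evecs**4, axis=0)               # IPR ~ 1/N for a delocalized eigenvector
print("IPR of the largest-eigenvalue eigenvector:", round(float(ipr[-1]), 4),
      " (1/N =", round(1 / N, 4), ")")
```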

Keywords: correlation matrix, eigenvalue and eigenvector, inverse participation ratio, portfolio optimization, random matrix theory

Procedia PDF Downloads 311
17551 A Prediction Method for Large-Size Event Occurrences in the Sandpile Model

Authors: S. Channgam, A. Sae-Tang, T. Termsaithong

Abstract:

In this research, the occurrences of large-size events in various system sizes of the Bak-Tang-Wiesenfeld sandpile model are considered. The system sizes (square lattices) considered here are 25×25, 50×50, 75×75 and 100×100. The cross-correlation between the time series of the ratio of sites containing 3 grains and the time series of large-size events is analyzed for these four system sizes. Moreover, a prediction method for large-size events in the 50×50 system is introduced. Finally, it is shown that this prediction method provides a slightly higher efficiency than random predictions.
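A minimal sketch of the Bak-Tang-Wiesenfeld sandpile and the precursor signal is given below; the lattice size, run length, and the 95th-percentile definition of a "large" event are illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 25
grid = np.zeros((L, L), dtype=int)

def add_grain_and_relax(grid):
    """Drop one grain at a random site and topple until stable; return the avalanche size."""
    i, j = rng.integers(L, size=2)
    grid[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:   # grains toppled off the edge are lost
                    grid[ni, nj] += 1

frac3, sizes = [], []
for _ in range(10_000):
    frac3.append(np.mean(grid == 3))              # precursor: ratio of sites holding 3 grains
    sizes.append(add_grain_and_relax(grid))

large = (np.array(sizes) > np.percentile(sizes, 95)).astype(float)
print("corr(ratio of 3-grain sites, large-event indicator):",
      round(float(np.corrcoef(frac3, large)[0, 1]), 3))
```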

Keywords: Bak-Tang-Wiesenfeld sandpile model, cross-correlation, avalanches, prediction method

Procedia PDF Downloads 352