Search results for: sequential dependence model
17687 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data
Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates
Abstract:
Several spatial variables collected at the same location that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account both the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a multivariate geostatistical formulation that relies on sharing common spatial random effect terms. In particular, the first response variable is modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term in addition to their own specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically by the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks capture the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models
Procedia PDF Downloads 98
17686 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters
Authors: Jyoti Sahu, Vinay A. Juvekar
Abstract:
Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g., polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model is needed which can accurately predict the activity coefficient of the electrolyte as a function of temperature. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the electrolyte/organic molecule solution at different temperatures. Since the number of parameters is large, inaccuracy can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through its temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the free energy model. If we therefore use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since fewer parameters are estimated in each of the two steps, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameter estimation is sought. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (iso-propanol/ethanol) as the model system. The model of Robinson-Stokes-Glueckauf is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to the electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, namely those between the water (w), polymer (p), and salt (s) pairs. These parameters depend on both temperature and concentration. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence is expressed through a corresponding functional form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of the electrolyte-water-organic molecule system is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system are thus estimated through measurement of the cloud point of the solution. The model is then used to estimate the critical solution temperature (CST) of the electrolyte-water-organic molecule solution. We have experimentally determined the critical solution temperature of different compositions of the electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments.
Thus, the importance of the CST data in the estimation of parameters of the thermodynamic model is confirmed through this work.
Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature
Procedia PDF Downloads 393
17685 Critical Behaviour and Field Dependence of Magnetic Entropy Change in K Doped Manganites Pr₀.₈Na₀.₂−ₓKₓMnO₃ (x = 0.10 and 0.15)
Authors: H. Ben Khlifa, W. Cheikhrouhou-Koubaa, A. Cheikhrouhou
Abstract:
The orthorhombic Pr₀.₈Na₀.₂−ₓKₓMnO₃ (x = 0.10 and 0.15) manganites are prepared using the solid-state reaction at high temperatures. The critical exponents (β, γ, δ) are investigated through various techniques such as the modified Arrott plot, the Kouvel-Fisher method, and critical isotherm analysis, based on magnetic measurements recorded around the Curie temperature. The critical exponents derived from the magnetization data using the Kouvel-Fisher method are found to be β = 0.32(4) and γ = 1.29(2) at TC ~ 123 K for x = 0.10, and β = 0.31(1) and γ = 1.25(2) at TC ~ 133 K for x = 0.15. The critical exponent values obtained for both samples are comparable to the values predicted by the 3D-Ising model and have also been verified by the scaling equation of state. Such results demonstrate the existence of ferromagnetic short-range order in our materials. The magnetic entropy changes of polycrystalline samples with a second-order phase transition are investigated. A large magnetic entropy change, deduced from isothermal magnetization curves, is observed in our samples, with a peak centered on their respective Curie temperatures (TC). The field dependence of the magnetic entropy change is analyzed and shows a power-law dependence ΔSmax ≈ a(μ0H)ⁿ at the transition temperature. The values of n obey the Curie-Weiss law above the transition temperature. It is shown that for the investigated materials, the magnetic entropy change follows a master curve behavior: the rescaled magnetic entropy change curves for different applied fields collapse onto a single curve for both samples.
Keywords: manganites, critical exponents, magnetization, magnetocaloric, master curve
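As a small illustration of the field-dependence analysis mentioned above, the exponent n in ΔSmax ≈ a(μ0H)ⁿ can be estimated by a linear fit in log-log space. The sketch below uses made-up field and entropy-change values purely as placeholders, not the measured data of this study.

```python
# Illustrative sketch (not the authors' code): estimating the exponent n in
# |dS_max| ~ a (mu0*H)^n from hypothetical field / entropy-change pairs.
import numpy as np

mu0_H = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # applied field (T), assumed values
dS_max = np.array([1.1, 1.7, 2.2, 2.6, 3.0])   # peak |dS| (J kg^-1 K^-1), assumed values

# linear fit in log space: log|dS| = log(a) + n * log(mu0*H)
n, log_a = np.polyfit(np.log(mu0_H), np.log(dS_max), 1)
print(f"n = {n:.3f}, a = {np.exp(log_a):.3f}")
```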
Procedia PDF Downloads 164
17684 Breast Cancer Diagnosing Based on Online Sequential Extreme Learning Machine Approach
Authors: Musatafa Abbas Abbood Albadr, Masri Ayob, Sabrina Tiun, Fahad Taha Al-Dhief, Mohammad Kamrul Hasan
Abstract:
Breast Cancer (BC) is considered one of the most frequent causes of cancer death in women between the ages of 40 and 55. BC is diagnosed using digital images of the FNA (Fine Needle Aspirate) for both benign and malignant tumors of the breast mass. Therefore, this work proposes the Online Sequential Extreme Learning Machine (OSELM) algorithm for diagnosing BC using the tumor features of the breast mass. The current work has used the Wisconsin Diagnosis Breast Cancer (WDBC) dataset, which contains 569 samples (i.e., 357 samples for the benign class and 212 samples for the malignant class). Further, numerous assessment measures were used in order to evaluate the proposed OSELM algorithm, such as specificity, precision, F-measure, accuracy, G-mean, MCC, and recall. According to the outcomes of the experiment, the highest performance of the proposed OSELM was accomplished with 97.66% accuracy, 98.39% recall, 95.31% precision, 97.25% specificity, 96.83% F-measure, 95.00% MCC, and 96.84% G-mean. The proposed OSELM algorithm demonstrates promising results in diagnosing BC. Besides, the performance of the proposed OSELM algorithm was superior to all its comparatives with respect to the classification rate.
Keywords: breast cancer, machine learning, online sequential extreme learning machine, artificial intelligence
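For readers unfamiliar with OSELM, the sketch below implements the standard OS-ELM equations (random hidden layer, initial least-squares solution, then recursive updates on data chunks) with random stand-in data in place of the WDBC features; it is an illustration of the algorithm, not the authors' implementation or tuning.

```python
# Minimal OS-ELM sketch with a sigmoid hidden layer and recursive least-squares updates.
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    # sigmoid hidden-layer activations
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def oselm_init(X0, T0, n_hidden):
    W = rng.normal(size=(X0.shape[1], n_hidden))            # random input weights
    b = rng.normal(size=n_hidden)                           # random biases
    H0 = hidden(X0, W, b)
    P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))  # regularised inverse
    beta = P @ H0.T @ T0                                    # initial output weights
    return W, b, P, beta

def oselm_update(Xk, Tk, W, b, P, beta):
    # recursive least-squares update for a new chunk of samples
    Hk = hidden(Xk, W, b)
    S = np.linalg.inv(np.eye(Hk.shape[0]) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ S @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
    return P, beta

# toy usage with random stand-ins for the 30 WDBC features and 0/1 labels
X = rng.normal(size=(569, 30))
y = (rng.random(569) > 0.5).astype(float)[:, None]
W, b, P, beta = oselm_init(X[:100], y[:100], n_hidden=40)
for start in range(100, 569, 50):                           # feed the rest chunk by chunk
    P, beta = oselm_update(X[start:start + 50], y[start:start + 50], W, b, P, beta)
pred = (hidden(X, W, b) @ beta > 0.5).astype(int)
```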
Procedia PDF Downloads 113
17683 Brainbow Image Segmentation Using Bayesian Sequential Partitioning
Authors: Yayun Hsu, Henry Horng-Shing Lu
Abstract:
This paper proposes a data-driven, biology-inspired neural segmentation method for 3D Drosophila Brainbow images. We use the Bayesian Sequential Partitioning algorithm for probabilistic modeling, which can be used to detect somas and to eliminate cross-talk effects. This work attempts to develop an automatic methodology for neuron image segmentation, which still lacks a complete solution due to the complexity of the images. The proposed method does not need any predetermined, risk-prone thresholds, since biological information is inherently included in the image processing procedure. Therefore, it is less sensitive to variations in neuron morphology; meanwhile, its flexibility would be beneficial for tracing the intertwining structure of neurons.
Keywords: brainbow, 3D imaging, image segmentation, neuron morphology, biological data mining, non-parametric learning
Procedia PDF Downloads 487
17682 GPU Accelerated Fractal Image Compression for Medical Imaging in Parallel Computing Platform
Authors: Md. Enamul Haque, Abdullah Al Kaisan, Mahmudur R. Saniat, Aminur Rahman
Abstract:
In this paper, we have implemented both sequential and parallel versions of fractal image compression algorithms using the CUDA (Compute Unified Device Architecture) programming model, parallelizing the program on the Graphics Processing Unit for medical images, as they are highly similar within the image itself. There are also several improvements in the implementation of the algorithm. Fractal image compression is based on the self-similarity of an image, meaning an image has similarity across the majority of its regions. We take this opportunity to implement the compression algorithm and monitor its effect using both parallel and sequential implementations. Fractal compression has the properties of a high compression rate and a dimensionless scheme. The compression scheme for a fractal image has two parts: encoding and decoding. Encoding is computationally very expensive, whereas decoding is much less so. The application of fractal compression to medical images would allow obtaining much higher compression ratios, while fractal magnification, an inseparable feature of fractal compression, would be very useful in presenting the reconstructed image in a highly readable form. However, like all irreversible methods, fractal compression is connected with the problem of information loss, which is especially troublesome in medical imaging. A very time-consuming encoding process, which can last even several hours, is another bothersome drawback of fractal compression.
Keywords: accelerated GPU, CUDA, parallel computing, fractal image compression
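The core of the encoding step described above is a search, for every range block, over a pool of contracted domain blocks for the affine map R ≈ s·D + o with the smallest squared error. The sketch below is a deliberately simplified, sequential Python version of that search on a random placeholder image; it is not the CUDA implementation from the paper.

```python
# Simplified sequential fractal encoder: one contractive affine transform per range block.
import numpy as np

def encode(img, rsize=8, step=16):
    h, w = img.shape
    # domain pool: 2x-larger blocks averaged down to the range-block size
    domains = []
    for y in range(0, h - 2 * rsize + 1, step):
        for x in range(0, w - 2 * rsize + 1, step):
            d = img[y:y + 2 * rsize, x:x + 2 * rsize]
            d = d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))   # 2x2 averaging
            domains.append(((y, x), d))
    code = []
    for y in range(0, h, rsize):
        for x in range(0, w, rsize):
            R = img[y:y + rsize, x:x + rsize].astype(float)
            best = None
            for pos, D in domains:
                varD = D.var()
                s = 0.0 if varD == 0 else ((D - D.mean()) * (R - R.mean())).mean() / varD
                o = R.mean() - s * D.mean()
                err = ((s * D + o - R) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            code.append(((y, x), best[1], best[2], best[3]))       # stored transform
    return code

code = encode(np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float))
```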
Procedia PDF Downloads 336
17681 The Effect of Meta-Cognitive Therapy on Meta-Cognitive Defects and Emotional Regulation in Substance Dependence Patients
Authors: Sahra Setorg
Abstract:
The purpose of this study was to determine the effect of meta-cognitive therapy on meta-cognitive defects and emotional regulation in industrial substance dependence patients. This quasi-experimental research was conducted with a post-test and two-month follow-up design with control and experimental groups. The statistical population consisted of all industrial substance dependence patients referred to addiction withdrawal clinics in Esfahan city, Iran, in 2013. 45 patients were selected from three clinics through the convenience sampling method and were randomly divided into two experimental groups (15 crack dependents, 15 amphetamine dependents) and one control group (n=15). The meta-cognitive questionnaire (MCQ) and the difficulties in emotional regulation questionnaire (DERS) were used as pre-test measures, and the experimental groups (crack and amphetamine) received 8 MC therapy sessions in groups. The data were analyzed via the multivariate covariance statistical method in SPSS-18. The results showed that MCT had a significant effect in improving the meta-cognitive defects in crack and amphetamine dependents. Also, this therapy can increase emotional regulation in both groups (p < 0.05). The effect of this therapy was confirmed at the two-month follow-up. According to these findings, meta-cognition is an important mediating variable in the prevention, control, and treatment of the new industrial substance dependences.
Keywords: meta-cognitive therapy, meta-cognitive defects, emotional regulation, substance dependence disorder
Procedia PDF Downloads 514
17680 A Comparative Psychological Interventional Study of Nicotine Dependence in Schizophrenic Patients
Authors: S. Madhusudhan, G. V. Vaniprabha
Abstract:
Worldwide statistics have shown that smoking contributes significantly to mortality, with nicotine being highly addictive. Smoking causes more than 700,000 deaths per year in India. Compared to the general population, the prevalence of smoking is found to be much higher among people with psychotic disorders, and more so in schizophrenia. Schizophrenic patients who smoke tend to have a higher frequency of heavy smoking, with rates ranging from 60% to as high as 80%. Hence, smokers with psychiatric disorders suffer higher rates of morbidity and mortality secondary to smoking-related illnesses.
Keywords: brief intervention, nicotine dependence, schizophrenia
Procedia PDF Downloads 385
17679 Two Quasiparticle Rotor Model for Deformed Nuclei
Authors: Alpana Goel, Kawalpreet Kalra
Abstract:
The study of the level structures of deformed nuclei is among the most complex topics in nuclear physics. For the description of level structure, a simple model is good enough to bring out the basic features, which may then be further refined. The low-lying level structures of these nuclei can, therefore, be understood in terms of the Two Quasiparticle plus axially symmetric Rotor Model (TQPRM). The formulation of TQPRM for deformed nuclei is presented. The analysis of available experimental data on two-quasiparticle rotational bands of deformed nuclei presents unusual features like signature dependence, odd-even staggering, signature inversion, and signature reversal. These signature effects are discussed within the framework of TQPRM. The model is efficient in reproducing the large odd-even staggering and anomalous features observed in even-even and odd-odd deformed nuclei. The effects of particle-particle coupling and the Coriolis coupling are well established from the model. A detailed description of the model, with implications for deformed nuclei, is presented in the paper.
Keywords: deformed nuclei, signature effects, signature inversion, signature reversal
Procedia PDF Downloads 158
17678 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources
Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy
Abstract:
This paper presents a load control strategy based on a modification of the Big Bang-Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled and the corresponding time of control in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator to rely on the renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and the sequential Monte Carlo method in order to accurately consider the multivariate stochastic dependence between wind power, photovoltaic power, and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are carried out, and the subsequent discussions show the effectiveness of the proposed algorithm.
Keywords: big bang big crunch, distributed generation, load control, optimization, planning
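The paper modifies the Big Bang-Big Crunch method; for orientation, a generic, unmodified version of that optimizer is sketched below on a placeholder cost function. The load-control objective, the copula-based scenario generation, and the IEEE 37-node data are not reproduced here, and all parameter values are assumptions.

```python
# Generic Big Bang-Big Crunch optimiser on a stand-in cost function (minimisation).
import numpy as np

rng = np.random.default_rng(0)

def big_bang_big_crunch(cost, lb, ub, n_pop=50, n_iter=100):
    dim = lb.size
    best_x, best_f = None, np.inf
    centre = rng.uniform(lb, ub)
    for k in range(1, n_iter + 1):
        # Big Bang: scatter candidates around the current centre, shrinking with k
        pop = np.clip(centre + rng.standard_normal((n_pop, dim)) * (ub - lb) / k, lb, ub)
        f = np.array([cost(x) for x in pop])
        if f.min() < best_f:
            best_f, best_x = f.min(), pop[f.argmin()].copy()
        # Big Crunch: centre of mass weighted by inverse cost
        w = 1.0 / (f - f.min() + 1e-9)
        centre = (pop * w[:, None]).sum(axis=0) / w.sum()
    return best_x, best_f

# toy demand: a quadratic as a placeholder for the purchased-energy objective
x, fx = big_bang_big_crunch(lambda v: ((v - 3.0) ** 2).sum(), np.zeros(5), 10 * np.ones(5))
```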
Procedia PDF Downloads 347
17677 Detection of Chaos in General Parametric Model of Infectious Disease
Authors: Javad Khaligh, Aghileh Heydari, Ali Akbar Heydari
Abstract:
Mathematical epidemiological models for the spread of disease through a population are used to predict the prevalence of a disease or to study the impacts of treatment or prevention measures. Initial conditions for these models are measured from statistical data collected from a population. Since these initial conditions can never be exact, the presence of chaos in mathematical models has serious implications for the accuracy of the models as well as for how epidemiologists interpret their findings. This paper confirms the chaotic behavior of a model for dengue fever and an SI model by investigating sensitive dependence, bifurcation, and the 0-1 test under a variety of initial conditions.
Keywords: epidemiological models, SEIR disease model, bifurcation, chaotic behavior, 0-1 test
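One of the diagnostics named above, the 0-1 test for chaos, can be summarized in a few lines: project the time series onto translation variables p and q, compute the mean-square displacement, and correlate its growth with time; values of K near 1 indicate chaos and values near 0 indicate regular dynamics. The sketch below applies the test to a placeholder signal rather than the dengue/SI trajectories of the paper.

```python
# Minimal 0-1 test for chaos (Gottwald-Melbourne), median over random frequencies c.
import numpy as np

def zero_one_test(phi, n_c=20, rng=np.random.default_rng(0)):
    N = len(phi)
    ncut = N // 10
    Ks = []
    for c in rng.uniform(0.1 * np.pi, 0.9 * np.pi, n_c):
        j = np.arange(1, N + 1)
        p = np.cumsum(phi * np.cos(j * c))          # translation variables
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                      for n in range(1, ncut + 1)]) # mean-square displacement
        Ks.append(np.corrcoef(np.arange(1, ncut + 1), M)[0, 1])
    return np.median(Ks)                            # ~1 chaotic, ~0 regular

K = zero_one_test(np.sin(np.arange(3000) * 0.7))    # regular signal, K near 0
```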
Procedia PDF Downloads 326
17676 Markov Characteristics of the Power Line Communication Channels in China
Authors: Ming-Yue Zhai
Abstract:
Due to their multipath and impulse noise nature, power line communication (PLC) channels can be modelled as channels with memory using the finite-state Markov channel (FSMC) model. As the most important parameter of a Markov channel model, the memory order of an FSMC has not yet been determined for PLC systems. In this paper, mutual information is used as a measure of the dependence between different symbols, treated as the received SNR or amplitude of the current channel symbol and those of previous symbols. The joint distribution probabilities of the envelopes in PLC systems are computed based on the multipath channel model commonly used in PLC. We confirm that, given the information of the symbol immediately preceding the current one, any other previous symbol is independent of the current one in PLC systems, which means the PLC channel is a first-order Markov chain. A field test is also performed to model the received OFDM signals with the help of an AR model. The results show that a first-order AR model is enough to model the fading channel in PLC systems, meaning that the amount of uncertainty remaining in the current symbol should be negligible, given the information corresponding to the immediately preceding one.
Keywords: power line communication, channel model, Markovian, information theory, first-order
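The memory-order argument above rests on estimating the mutual information between the current symbol and symbols further back. A simple histogram-based estimate of that lagged mutual information is sketched below on a simulated stand-in envelope; if the value is appreciable at lag 1 but negligible for lags of 2 and beyond, the first-order Markov conclusion is supported.

```python
# Histogram-based lagged mutual information between amplitude samples.
import numpy as np

def lagged_mutual_information(x, lag, bins=16):
    a, b = x[:-lag], x[lag:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()                       # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

amp = np.abs(np.random.default_rng(2).standard_normal(20000))  # stand-in envelope
for k in (1, 2, 3):
    print(k, lagged_mutual_information(amp, k))
```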
Procedia PDF Downloads 414
17675 Modeling Stream Flow with Prediction Uncertainty by Using SWAT Hydrologic and RBNN Neural Network Models for Agricultural Watershed in India
Authors: Ajai Singh
Abstract:
Simulation of hydrological processes at the watershed outlet through a modelling approach is essential for proper planning and implementation of appropriate soil conservation measures in the Damodar Barakar catchment, Hazaribagh, India, where soil erosion is a dominant problem. This study quantifies the parametric uncertainty involved in the simulation of stream flow using the Soil and Water Assessment Tool (SWAT), a watershed-scale model, and the Radial Basis Neural Network (RBNN), an artificial neural network model. Both models were calibrated and validated based on measured stream flow, and the uncertainty in the SWAT model output was assessed using the Sequential Uncertainty Fitting algorithm (SUFI-2). Though both models predicted satisfactorily, the RBNN model performed better than SWAT, with R2 and NSE values of 0.92 and 0.92 during training, and 0.71 and 0.70 during the validation period, respectively. Comparison of the results of the two models also indicates a wider prediction interval for the results of the SWAT model. The P-factor values for each model show that the percentage of observed stream flow values bracketed by the 95PPU in the RBNN model, 91%, is higher than the P-factor in SWAT, 87%. In other words, the RBNN model estimates the stream flow values more accurately and with less uncertainty. It can be stated that the RBNN model, based on simple input, could be used for estimation of monthly stream flow and missing data, and for testing the accuracy and performance of other models.
Keywords: SWAT, RBNN, SUFI 2, bootstrap technique, stream flow, simulation
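Two of the statistics quoted above, the Nash-Sutcliffe efficiency (NSE) and the P-factor (the share of observations falling inside the 95% prediction uncertainty band, 95PPU), are straightforward to compute; a minimal sketch with placeholder flow data follows.

```python
# NSE and P-factor on synthetic stand-in stream-flow data.
import numpy as np

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def p_factor(obs, ensemble):
    lo = np.percentile(ensemble, 2.5, axis=0)     # lower 95PPU bound per time step
    hi = np.percentile(ensemble, 97.5, axis=0)    # upper 95PPU bound per time step
    return np.mean((obs >= lo) & (obs <= hi))

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 5.0, size=120)                         # monthly flow stand-in
ensemble = obs + rng.normal(0, 2.0, size=(500, 120))        # simulated parameter ensemble
print(nse(obs, ensemble.mean(axis=0)), p_factor(obs, ensemble))
```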
Procedia PDF Downloads 371
17674 Sequential Padding: A Method to Improve the Impact Resistance in Body Armor Materials
Authors: Ankita Srivastava, Bhupendra S. Butola, Abhijit Majumdar
Abstract:
Application of shear thickening fluid (STF) has been proven to increase the impact resistance performance of textile structures, supporting their further use as body armor materials. In the present research, STF was applied to Kevlar woven fabric to make the structure lightweight and flexible while improving its impact resistance performance. It was observed that obtaining a fair amount of STF add-on on Kevlar fabric is difficult, as Kevlar fabric comes with a pre-coating of PTFE which hinders its absorbency. Hence, a method termed sequential padding was developed in the present study to improve the add-on of STF on Kevlar fabric. Contrary to the conventional process, where Kevlar fabric is treated with STF once using a single pressure, in the sequential padding method the Kevlar fabrics were treated twice in a sequential manner using a combination of two pressures for each sample. 200 GSM Kevlar fabrics were used in the present study. STF was prepared by adding PEG with 70% (w/w) nano-silica concentration. Ethanol was added to the STF at a fixed ratio to reduce viscosity. A high-speed homogenizer was used to make the dispersion. A total of nine STF-treated Kevlar fabric samples were prepared using varying combinations and sequences of three levels of padding pressure (0.5, 1.0, and 2.0 bar). The fabrics were dried at 80°C for 40 minutes in a hot air oven to evaporate the ethanol. Untreated and STF-treated fabrics were tested for add-on%. The impact resistance performance of the samples was also tested on a dynamic impact tester at a fixed velocity of 6 m/s. Further, to observe the impact resistance performance under actual conditions, a low-velocity ballistic test at 165 m/s was also performed to confirm the results of the impact resistance test. It was observed that both the add-on% and the impact energy absorption of Kevlar fabrics increase significantly with the sequential padding process as compared to the untreated fabric as well as the single-stage padding process. It was also determined that impact energy absorption is significantly better in STF-treated Kevlar fabrics when the first padding pressure is higher and the second padding pressure is lower. The impact energy absorption of sequentially padded Kevlar fabric shows almost a 125% increase in ballistic impact energy absorption (40.62 J) compared to the untreated fabric (18.07 J). These results owe to the fact that treatment of the fabrics at high pressure during the first padding is responsible for uniform distribution of STF within the fabric structure, while padding with a second, lower pressure ensures a high add-on of STF for overall improvement in the impact resistance performance of the fabric. Therefore, it is concluded that the sequential padding process may help to improve the impact performance of body armor materials based on STF-treated Kevlar fabrics.
Keywords: body armor, impact resistance, Kevlar, shear thickening fluid
Procedia PDF Downloads 242
17673 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types
Authors: Qianxi Lv, Junying Liang
Abstract:
Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' of cognitive tasks, while consecutive interpreters (CI) do not have to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity, and the sequential organization mechanism with a self-made inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech, and original speech texts, with a total running-word count of 321,960. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may impose a heavier cognitive demand, or tax cognitive resources differently, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways, so as to minimize the cognitive load respectively. We reason about the results within a framework in which cognitive demand is exerted on both the maintaining and the coordinating components of Working Memory. On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI makes the interpreters keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on the results for a better illustration of cognitive demand during both interpreting types.
Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity
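To make the quantitative indices above concrete, the sketch below computes three of them (lexical density, type-token ratio, and mean dependency distance) on a single toy sentence with hand-assigned POS tags and dependency heads; the actual study computes these over a 321,960-word parsed corpus.

```python
# Toy computation of lexical density, type-token ratio and mean dependency distance.
tokens = ["the", "interpreter", "quickly", "renders", "the", "short", "sentence"]
pos = ["DET", "NOUN", "ADV", "VERB", "DET", "ADJ", "NOUN"]     # assumed tags
heads = [2, 4, 4, 0, 7, 7, 4]        # 1-based head index per token, 0 = root (assumed)
content_tags = {"NOUN", "VERB", "ADJ", "ADV"}

lexical_density = sum(t in content_tags for t in pos) / len(tokens)
type_token_ratio = len(set(tokens)) / len(tokens)
dep_dist = [abs(h - (i + 1)) for i, h in enumerate(heads) if h != 0]
mean_dependency_distance = sum(dep_dist) / len(dep_dist)

print(lexical_density, type_token_ratio, mean_dependency_distance)
```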
Procedia PDF Downloads 181
17672 Spectral Analysis Applied to Variables of Oil Wells Profiling
Authors: Suzana Leitão Russo, Mayara Laysa de Oliveira Silva, José Augusto Andrade Filho, Vitor Hugo Simon
Abstract:
Currently, seismic and prospecting methods are commonly applied in the oil industry, and, as reported every day, oil is a non-renewable energy source. It is easy to understand why the ownership of areas of oil extraction is coveted by many nations. It is necessary to think about ways that will enable the maximization of oil production. The technique of spectral analysis can be used to analyze the behavior of the variables already defined in the oil well profile. The main objective is to verify the serial dependence of the variables and to model the variables in the frequency domain in order to observe the model residuals.
Keywords: oil, well, spectral analysis, oil extraction
Procedia PDF Downloads 535
17671 How to Modernise the ECN
Authors: Dorota Galeza
Abstract:
This paper argues that networks, such as the ECN and the American network, are affected by certain small events which are inherent to path dependence and preclude the full evolution towards efficiency. It is advocated that the American network is superior to the ECN in many respects due to its greater flexibility and longer history. This stems in particular from the creation of the American network, which was based on a small number of cases. Such a structure encourages further changes and modifications which are not necessarily radical. The ECN, by contrast, was established by legislative action, which explains its rigid structure and resistance to change. It might be the case that the ECN is subject not so much to path dependence as to past dependence. It might have to be replaced, as happened to its predecessor. This paper is an attempt to transpose the superiority of the American network onto the ECN. It looks at concepts such as judicial cooperation, harmonization of procedure, peer review, regulatory impact assessments (RIAs), and dispute resolution procedures. The aim is to adopt these concepts in the EU setting without recourse to legal transplantation. The major difficulty is that many of these concepts have been tested only in the US, and it is difficult to tell whether they could be modified to meet EU standards. Concepts such as judicial cooperation might be difficult to adopt due to the different language traditions in EU member states. It is hoped that greater flexibility, as in the American network, would boost legitimacy and transparency.
Keywords: ECN, networks, regulation, competition
Procedia PDF Downloads 430
17670 Investigation of Jupiter's Galilean Moons
Authors: Revaz Chigladze
Abstract:
The purpose of the research is to investigate the surfaces of Jupiter's Galilean moons: namely, which moon has the most uniform surface among them, what the difference is between the front (in the direction of motion) and the back side of each moon's surface, as well as the temporal variations of the moons. Since 1981, the E. Kharadze National Astrophysical Observatory of Georgia has been conducting polarimetric (P) and photometric (M) observations of Jupiter's Galilean moons with telescopes of different diameters (40 cm and 125 cm), in combination with the polarimeter ASEP-78 and a latest-generation photometer with a polarimeter and the modern light receiver SBIG. As the analysis of the observed material shows, the parameters P and M depend on α, the phase angle of the moon (satellite); L, the orbital latitude of the moon (satellite); λ, the wavelength; and t, the period of observation, i.e., P = P(α, L, λ, t) and similarly M = M(α, L, λ, t). Based on the analysis of the observed material, the following were studied for Jupiter's Galilean moons: the dependence of the degree of linear polarization and the magnitude on the phase angle for different wavelengths; the dependence of the degree of polarization on the orbital longitude; the dependence between the magnitude of the degree of polarization and the wavelength; the time dependence of the degree of polarization; and the dependence between the photometric and polarimetric characteristics (including establishing correlation). From the analysis of the obtained results, we find that the degree of polarization of Jupiter's Galilean moons near opposition differs significantly from zero. Europa appears to have the most uniform surface, and Callisto the least uniform. Time variations are most characteristic of Io, which confirms the presence of volcanic activity on its surface. The observed material shows that the intensity of light reflected from the front hemispheres of the first three moons (Io, Europa, and Ganymede) is less than the intensity of light reflected from their rear hemispheres, while in the case of Callisto it is the opposite. The paper provides a convincing (natural, real) explanation of this fact.
Keywords: Galilean moons, polarization, degree of polarization, photometry, front and rear hemispheres
Procedia PDF Downloads 103
17669 Evaluation of Sequential Polymer Flooding in Multi-Layered Heterogeneous Reservoir
Authors: Panupong Lohrattanarungrot, Falan Srisuriyachai
Abstract:
Polymer flooding is a well-known technique used for controlling the mobility ratio in heterogeneous reservoirs, leading to improvement of sweep efficiency as well as of the wellbore profile. However, the low injectivity of a viscous polymer solution attenuates the oil recovery rate and consequently adds extra operating cost. This study attempts to improve the injectivity of the polymer solution while maintaining the recovery factor, enhancing the effectiveness of the polymer flooding method. The study is performed using a reservoir simulation program to modify a conventional single polymer slug into sequential polymer flooding, with emphasis on increased injectivity and reduction of the polymer amount. Selection of operating conditions for the single-slug polymer flood, including pre-injected water, polymer concentration, and polymer slug size, is first performed for a layered heterogeneous reservoir with a Lorenz coefficient (Lk) of 0.32. The selected single-slug polymer flooding scheme is then modified into sequential polymer flooding with reduction of polymer concentration in two different modes: constant polymer mass and reduced polymer mass. The effect of the Residual Resistance Factor (RRF) is also evaluated. From the simulation results, it is observed that the first polymer slug, with the highest concentration, mainly acts as a buffer between the displacing phase and the reservoir oil. Moreover, part of the polymer from this slug is also sacrificed to adsorption. Reduction of polymer concentration in the following slugs prevents bypassing due to an unfavorable mobility ratio. At the same time, the following slugs, with lower viscosity, can be injected easily through the formation, improving the injectivity of the whole process. Sequential polymer flooding with reduction of polymer mass shows great benefit by reducing the total production time and the amount of polymer consumed by up to 10% without any downside effect. The only advantage of using constant polymer mass is a slight increment of the recovery factor (up to 1.4%), while the total production time is almost the same. Increasing the residual resistance factor of the polymer solution yields a benefit in mobility control by reducing the effective permeability to water. Nevertheless, higher adsorption results in low injectivity, extending the total production time. Modifying a single polymer slug into a sequence of reduced polymer concentrations yields major benefits in reducing production time as well as polymer mass. With a certain design of the polymer flooding scheme, the recovery factor can even be further increased. This study shows that sequential polymer flooding can certainly be applied to reservoirs with a high degree of heterogeneity, since it requires nothing complex for real implementation but just a proper design of polymer slug size and concentration.
Keywords: polymer flooding, sequential, heterogeneous reservoir, residual resistance factor
Procedia PDF Downloads 478
17668 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task
Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli
Abstract:
Our daily interaction with computational interfaces is full of situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM machine, choices are presented in a step-by-step fashion so that a specific sequence of actions must be performed in order to produce the expected outcome. But as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? Moreover, if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn, and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, using the modular architecture of stereotyped strategies as a Mixture of Experts, we are able to simultaneously ask the experts about the user's most probable future actions. We show that for those participants that learn the task, it becomes possible to predict their next decision, above chance, approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
Keywords: behavioral modeling, expertise acquisition, hidden Markov models, sequential decision-making
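A bare-bones version of the predictive machinery described above is sketched below: a two-state HMM over hypothetical 'search strategies', filtered with the forward algorithm to give a distribution over the next user action. All transition, emission, and prior values are invented for illustration; the paper's actual states, actions, and Mixture-of-Experts structure are richer.

```python
# Forward filtering in a small HMM and prediction of the next observed action.
import numpy as np

A = np.array([[0.8, 0.2],        # hidden strategies: 0 = exploratory, 1 = exploitative
              [0.1, 0.9]])
B = np.array([[0.5, 0.3, 0.2],   # emission: P(action | strategy), 3 possible actions
              [0.1, 0.2, 0.7]])
pi = np.array([0.6, 0.4])

def forward(obs):
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha                                   # unnormalised filtering distribution

def predict_next(obs):
    alpha = forward(obs)
    state = alpha / alpha.sum()                    # P(strategy | observed actions)
    return (state @ A) @ B                         # P(next action)

print(predict_next([0, 0, 2, 2, 2]))               # toy partial action sequence
```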
Procedia PDF Downloads 252
17667 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality that is needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions where the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal components analysis of various world indices; an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices), sorting them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis, and regime dependence.
Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
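Two mechanical pieces of the pipeline described above, extracting principal-component factors from a panel of index returns and running a GARCHX-style conditional variance recursion driven by those factors, are sketched below with simulated placeholder data and assumed parameter values; the HTSN innovation distribution and the option-pricing step are omitted.

```python
# PCA factor extraction from a return panel, then a GARCHX variance recursion.
import numpy as np

rng = np.random.default_rng(4)
returns = rng.standard_normal((1000, 8)) * 0.01              # stand-in world-index returns

# PCA via eigendecomposition of the sample covariance matrix
cov = np.cov(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = eigval.argsort()[::-1]
factors = returns @ eigvec[:, order[:3]]                     # first three PC factors

# GARCHX recursion: sigma2_t = omega + alpha*eps2_{t-1} + beta*sigma2_{t-1} + sum_i gamma_i*X2_{i,t-1}
omega, alpha, beta, gamma = 1e-6, 0.05, 0.90, np.array([0.01, 0.01, 0.01])
eps = rng.standard_normal(1000) * 0.01                       # stand-in target-index shocks
sigma2 = np.empty(1000)
sigma2[0] = eps.var()
for t in range(1, 1000):
    sigma2[t] = omega + alpha * eps[t - 1]**2 + beta * sigma2[t - 1] + gamma @ factors[t - 1]**2
```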
Procedia PDF Downloads 297
17666 Quince Seed Mucilage (QSD)/Multiwall Carbon Nanotube Hybrid Hydrogels as Novel Controlled Drug Delivery Systems
Authors: Raouf Alizadeh, Kadijeh Hemmati
Abstract:
The aim of this study is to synthesize several series of hydrogels from the combination of a natural polymer (quince seed mucilage, QSD) and a synthetic copolymer containing methoxy poly(ethylene glycol)-polycaprolactone (mPEG-PCL) in the presence of different amounts of multi-walled carbon nanotubes (f-MWNT). Mono-epoxide-functionalized mPEG (mPEG-EP) was synthesized and reacted with sodium azide in the presence of NH4Cl to afford mPEG-N3(-OH). Then, ring-opening polymerization (ROP) of ε-caprolactone (CL) in the presence of mPEG-N3(-OH) as initiator and Sn(Oct)2 as catalyst led to the preparation of mPEG-PCL-N3(-OH), which was grafted onto propargylated f-MWNT by the click reaction to obtain mPEG-PCL-f-MWNT(-OH). In the presence of mPEG-N3(-Br) and a mixture of NHS/DCC/QSD, hybrid hydrogels were successfully synthesized. The copolymers and hydrogels were characterized using different techniques such as scanning electron microscopy (SEM) and thermogravimetric analysis (TGA). The gel content of the hydrogels showed dependence on the weight ratio of QSD:mPEG-PCL:f-MWNT. The swelling behavior of the prepared hydrogels was also studied under variation of pH, immersion time, and temperature. According to the results, the swelling behavior of the prepared hydrogels showed significant dependence on the gel content, pH, immersion time, and temperature. The highest swelling was observed at room temperature, at 60 min, and at pH 8. The loading and in-vitro release of quercetin as a model drug were investigated at pH 2.2 and 7.4, and the results showed that the release rate at pH 7.4 was faster than that at pH 2.2. The total loading and release showed dependence on the network structure of the hydrogels and were in the range of 65-91%. In addition, the cytotoxicity and release kinetics of the prepared hydrogels were also investigated.
Keywords: antioxidant, drug delivery, Quince Seed Mucilage (QSD), swelling behavior
Procedia PDF Downloads 321
17665 A Discovery of the Dual Sequential Pattern of Prime Numbers in P x P: Applications in a Formal Proof of the Twin-Prime Conjecture
Authors: Yingxu Wang
Abstract:
This work presents basic research on the recursive structures and dual sequential patterns of primes for a formal proof of the Twin-Prime Conjecture (TPC). A rigorous methodology of Twin-Prime Decomposition (TPD) is developed in MATLAB to inductively verify potential twins in the dual sequences of primes. The key finding of this basic study confirms that the dual sequences of twin primes are not only symmetric but also infinite in the unique base-6 cycle, except that a limited subset of potential pairs is eliminated by the lack of dual primality. Both theory and experiments have formally proven that the infinity of twin primes stated in the TPC holds in the P x P space.
Keywords: number theory, primes, twin-prime conjecture, dual primes (P x P), twin prime decomposition, formal proof, algorithm
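The base-6 structure referred to above can be illustrated in a few lines: every prime greater than 3 lies at 6k - 1 or 6k + 1, so the twin candidates are the pairs (6k - 1, 6k + 1), and a candidate pair survives only if both members are prime. The short sketch below (not the authors' MATLAB code) enumerates the surviving pairs.

```python
# Enumerate twin-prime candidates (6k - 1, 6k + 1) and keep the dually prime pairs.
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def twin_pairs(k_max):
    pairs = []
    for k in range(1, k_max + 1):
        a, b = 6 * k - 1, 6 * k + 1
        if is_prime(a) and is_prime(b):    # dual primality of the candidate pair
            pairs.append((a, b))
    return pairs

print(twin_pairs(20))    # (5, 7), (11, 13), (17, 19), (29, 31), ...
```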
Procedia PDF Downloads 67
17664 Dielectric Properties of Ni-Al Nano Ferrites Synthesized by Citrate Gel Method
Authors: D. Ravinder, K. S. Nagaraju
Abstract:
Ni–Al ferrites with the composition NiAlxFe2-xO4 (x = 0.2, 0.4, 0.6, and 0.8) were prepared by the citrate gel method. The dielectric properties of all the samples were investigated at room temperature as a function of frequency. The dielectric constant shows dispersion in the lower frequency region and remains almost constant at higher frequencies. The frequency dependence of the dielectric loss tangent (tanδ) is found to be abnormal, giving a peak at a certain frequency for the mixed Ni-Al ferrites. A qualitative explanation is given for the composition and frequency dependence of the dielectric loss tangent.
Keywords: ferrites, citrate method, lattice parameter, dielectric constant
Procedia PDF Downloads 303
17663 Statistical Analysis to Select Evacuation Route
Authors: Zaky Musyarof, Dwi Yono Sutarto, Dwima Rindy Atika, R. B. Fajriya Hakim
Abstract:
Each country should be responsible for the safety of its people, especially those living in disaster-prone areas. One of those responsibilities is to provide evacuation routes for them. Until now, however, the selection of evacuation routes has not seemed well organized: when a disaster happens, many people accumulate on the steps of an evacuation route. That condition is dangerous because it hampers the evacuation process. Using several methods of statistical analysis, the authors try to suggest how to prepare an evacuation route that is organized and based on people's habits. Those methods are association rules, sequential pattern mining, hierarchical cluster analysis, and fuzzy logic.
Keywords: association rules, sequential pattern mining, cluster analysis, fuzzy logic, evacuation route
Procedia PDF Downloads 504
17662 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients
Authors: Ainura Tursunalieva, Irene Hudson
Abstract:
Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III and the Simplified Acute Physiology Score II are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, then render the assessment outcomes of the individual risk factors into a single numerical value. A higher score is related to a more severe patient condition. Furthermore, the Mortality Probability Model II uses logistic regression based on independent risk factors to predict a patient's probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient's vital signs. This is a prominent oversight, as it is likely there is an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, as it becomes difficult to use variable selection. We propose an innovative scoring system which takes into account the dependence structure among a patient's vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient's probability of mortality. The new copula-based approach will accommodate not only a patient's trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients' agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (or area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.
Keywords: copula, intensive unit scoring system, ROC curves, vital sign dependence
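One common and simple way to obtain a copula dependence parameter between two vital signs, robust to the skewness mentioned above, is to compute Kendall's tau on the raw measurements and invert the Gaussian-copula relation ρ = sin(πτ/2). The sketch below does this on simulated stand-in data, not the hospital records used in the study.

```python
# Rank-based estimate of a Gaussian-copula dependence parameter between two vital signs.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(5)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
systolic = 110 + 15 * z[:, 0]                      # roughly normal vital sign (stand-in)
heart_rate = np.exp(4.3 + 0.12 * z[:, 1])          # skewed vital sign (stand-in)

tau, _ = kendalltau(systolic, heart_rate)
rho_copula = np.sin(np.pi * tau / 2.0)             # Gaussian-copula parameter estimate
print(tau, rho_copula)
```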
Procedia PDF Downloads 153
17661 Improving the Run Times of Existing and Historical Demand Models Using Simple Python Scripting
Authors: Abhijeet Ostawal, Parmjit Lall
Abstract:
The run times of a large strategic model that we were managing had become too long, leading to delays in project delivery, increased costs, and a loss of productivity. Software developers continuously work towards developing more efficient tools by changing their algorithms and processes. The issue faced by our team was how to apply the latest technologies to validated existing models which are based on much older versions of software that do not have the latest capabilities. The multi-modal transport model that we had could only run the assignment in sequential order. Recent upgrades to the software now allow the assignment to be run in parallel, a concept called parallelization. Parallelization is enabled by a Python script that works only within the latest version of the software. A full model transfer to the latest version was not possible due to time, budget, and the potential changes in trip assignment. This article shows a method to adapt and update the Python script in such a way that it can be used with older software versions, by calling the latest version for the parallel assignment and then recalling the old version for the rest of the model, without affecting the results. Through a process of trial and error, run-time savings of up to 30-40% have been achieved. Assignment results were maintained within the older version, and through this learning process we have applied this methodology to other, even older versions of the software, resulting in huge time savings and more productivity and efficiency for both client and consultant.
Keywords: model run time, demand model, parallelisation, python scripting
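As a purely hypothetical illustration of the wrapper idea described above, the sketch below hands the assignment step to a newer software release (which supports parallel assignment) and returns to the validated older release for the remaining model steps. The executable paths, script names, and command-line flags are all invented placeholders, since the actual software and its interface are not named in the abstract.

```python
# Hypothetical wrapper: run most steps in the validated old release, but delegate
# the assignment step to the newer release that supports parallel execution.
import subprocess

NEW_EXE = r"C:\Software\v23\model_runner.exe"    # hypothetical newer release
OLD_EXE = r"C:\Software\v15\model_runner.exe"    # hypothetical validated release

def run_model(scenario):
    # demand and pre-assignment steps stay in the validated older version
    subprocess.run([OLD_EXE, "demand_steps.s", "--scenario", scenario], check=True)
    # assignment is delegated to the newer version so it can run in parallel
    subprocess.run([NEW_EXE, "parallel_assignment.py", "--scenario", scenario], check=True)
    # post-assignment skims and reporting return to the older version
    subprocess.run([OLD_EXE, "post_steps.s", "--scenario", scenario], check=True)

run_model("base_2031")
```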
Procedia PDF Downloads 119
17660 Spatial Scale of Clustering of Residential Burglary and Its Dependence on Temporal Scale
Authors: Mohammed A. Alazawi, Shiguo Jiang, Steven F. Messner
Abstract:
Research has long focused on two main spatial aspects of crime: spatial patterns and spatial processes. When analyzing these patterns and processes, a key issue has been to determine the proper spatial scale. In addition, it is important to consider the possibility that these patterns and processes might differ appreciably across temporal scales and might vary across geographic units of analysis. We examine the spatial-temporal dependence of residential burglary. This dependence is tested at varying geographical scales and temporal aggregations. The analyses are based on recorded incidents of crime in Columbus, Ohio during the 1994-2002 period. We implement point pattern analysis on the crime points using Ripley's K function. The results indicate that spatial point patterns of residential burglary reveal spatial scales of clustering larger than the average size of the census tracts of the study area. Also, the spatial scale is independent of the temporal scale. The results of our analyses concerning the geographic scale of spatial patterns and processes can inform the development of effective policies for crime control.
Keywords: inhomogeneous K function, residential burglary, spatial point pattern, spatial scale, temporal scale
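For orientation, a naive (homogeneous, edge-uncorrected) Ripley's K estimate is sketched below on random stand-in coordinates; the study itself applies the inhomogeneous K function to geocoded burglary incidents, but the basic computation, counting pairs of points within distance r and scaling by the intensity, is the same.

```python
# Naive Ripley's K estimate without edge correction, on stand-in point coordinates.
import numpy as np

def ripley_k(points, r_values, area):
    n = len(points)
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))  # pairwise distances
    np.fill_diagonal(d, np.inf)
    lam = n / area                                      # intensity (points per unit area)
    return np.array([(d < r).sum() / (lam * n) for r in r_values])

rng = np.random.default_rng(6)
pts = rng.uniform(0, 1000, size=(300, 2))               # stand-in incident coordinates (m)
K = ripley_k(pts, r_values=np.linspace(50, 500, 10), area=1000 * 1000)
# under complete spatial randomness K(r) is close to pi*r**2; larger values indicate clustering
```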
Procedia PDF Downloads 347
17659 Moderation in Temperature Dependence on Counter Frictional Coefficient and Prevention of Wear of C/C Composites by Synthesizing SiC around Surface and Internal Vacancies
Authors: Noboru Wakamoto, Kiyotaka Obunai, Kazuya Okubo, Toru Fujii
Abstract:
The aim of this study is to moderate the dependence of the counter frictional coefficient on temperature between counter surfaces and to reduce the wear of C/C composites at low temperature. To modify the C/C composites, silica (SiO2) powders were added into the phenolic resin used as the carbon precursor. The preform plate of the C/C composite precursor was prepared by the conventional filament winding method. The C/C composite plates were obtained by carbonizing the preform plate at 2200 °C under an argon atmosphere. During carbonization, silicon carbide (SiC) was synthesized around the surfaces and the internal vacancies of the C/C composites. The frictional coefficient on the counter surfaces and the specific wear volumes of the C/C composites were measured with our own pin-on-disk-type friction test machine. The XRD results indicated that SiC was synthesized in the body of the C/C composite fabricated by the current method. The friction test results showed that the coefficient of friction of the unmodified C/C composites depends on temperature when the test condition is changed. In contrast, the frictional coefficient of the C/C composite modified with SiO2 powders was almost constant at about 0.27 when the temperature condition was changed from room temperature (RT) to 300 °C. The specific wear rate decreased from 25×10-6 mm2/N to 0.1×10-6 mm2/N. Observations of the surfaces after the friction tests showed that the frictional surface of the modified C/C composites was covered with a film produced by the friction. This study found that synthesizing SiC around the surface and internal vacancies of C/C composites was effective in moderating the temperature dependence of the frictional coefficient and in reducing the abrasion of C/C composites.
Keywords: C/C composites, friction coefficient, wear, SiC
Procedia PDF Downloads 345
17658 Calculation of Energy Gap of (Ga,Mn)As Diluted Magnetic Semiconductor from the Eight-Band k.p Model
Authors: Khawlh A. Alzubaidi, Khadijah B. Alziyadi, Amor M. Alsayari
Abstract:
Nowadays, (Ga,Mn)As is one of the most extensively studied and best understood diluted magnetic semiconductors. The study of (Ga,Mn)As is also a fervent research area, since it allows the exploration of a variety of novel functionalities and spintronics concepts that could be implemented in the future. In this work, we calculate the energy gap of (Ga,Mn)As using the eight-band k.p model. In the Hamiltonian, the effects of spin-orbit coupling, spin splitting, and strain are considered. The dependence of the energy gap on Mn content and the effect of strain, which is varied continuously from tensile to compressive, are studied. Finally, analytical expressions for the (Ga,Mn)As energy band gap, taking into account both parameters (Mn concentration and strain), are provided.
Keywords: energy gap, diluted magnetic semiconductors, k.p method, strain
Procedia PDF Downloads 124