Search results for: estimation of properties of the model
24891 3D Geomechanical Model the Best Solution of the 21st Century for Perforation's Problems
Authors: Luis Guiliana, Andrea Osorio
Abstract:
The lack of comprehension of the reservoir geomechanics conditions may cause operational problems that cost the industry billions of dollars per year. The drilling operations at the Ceuta Field, Area 2 South, Maracaibo Lake, have been very expensive due to problems associated with drilling. The principal objective of this investigation is to develop a 3D geomechanical model of this area in order to optimize future drilling in the field. For this purpose, a 1D geomechanical model was built in the first instance, following the workflow of the MEM (Mechanical Earth Model), which consists of the following steps: 1) Data auditing, 2) Analysis of drilling events and structural model, 3) Mechanical stratigraphy, 4) Overburden stress, 5) Pore pressure, 6) Rock mechanical properties, 7) Horizontal stresses, 8) Direction of the horizontal stresses, 9) Wellbore stability. The 3D MEM was developed from the geostatistical model of the Eocene C-SUP VLG-3676 reservoir and the 1D MEM. With these data, the geomechanical grid was constructed. The analysis of the results showed that the problems that occurred in the examined wells were mainly due to wellbore stability issues. It was determined that the stress regime changes as the stratigraphic column deepens: it is normal to strike-slip at the Middle Miocene and Lower Miocene, and strike-slip to reverse at the Eocene. Accordingly, at the level of the Eocene, the most advantageous direction to drill is parallel to the maximum horizontal stress (157º). The 3D MEM allowed a three-dimensional visualization of the variations in rock mechanical properties, stresses, and operational windows (mud weight and pressures). This will facilitate the optimization of future drilling in the area, including those zones without any geomechanical information.
Keywords: geomechanics, MEM, drilling, stress
Procedia PDF Downloads 273
24890 Thermomechanical Damage Modeling of F114 Carbon Steel
Authors: A. El Amri, M. El Yakhloufi Haddou, A. Khamlichi
Abstract:
Numerical simulation based on the Finite Element Method (FEM) is widely used in academic institutes and in industry. It is a useful tool for predicting many phenomena present in classical manufacturing forming processes, such as fracture. However, the results of such numerical models depend strongly on the parameters of the constitutive behavior model. The combined influence of thermal and mechanical loads causes damage. The temperature- and strain-rate-dependent material properties and their modelling are discussed. A Johnson-Cook damage model was selected for the numerical simulations. The finite element analysis is carried out with the ABAQUS 6.11 software. This model was introduced in order to give information concerning crack initiation during thermal and mechanical loading.
Keywords: thermo-mechanical fatigue, failure, numerical simulation, fracture, damage
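As a rough illustration of the Johnson-Cook formulation referenced above, the following Python sketch evaluates the Johnson-Cook flow stress and damage-initiation (failure) strain. The material constants shown are placeholders for illustration, not the calibrated F114 steel parameters used in the study.

```python
import numpy as np

def jc_flow_stress(eps, eps_rate, T, A, B, n, C, m,
                   eps_rate0=1.0, T_room=293.0, T_melt=1723.0):
    """Johnson-Cook flow stress: strain hardening x rate hardening x thermal softening."""
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return (A + B * eps**n) * (1.0 + C * np.log(eps_rate / eps_rate0)) * (1.0 - T_star**m)

def jc_failure_strain(triaxiality, eps_rate, T, D, eps_rate0=1.0,
                      T_room=293.0, T_melt=1723.0):
    """Johnson-Cook damage-initiation strain; D = (D1, D2, D3, D4, D5)."""
    D1, D2, D3, D4, D5 = D
    T_star = (T - T_room) / (T_melt - T_room)
    return (D1 + D2 * np.exp(D3 * triaxiality)) * \
           (1.0 + D4 * np.log(eps_rate / eps_rate0)) * (1.0 + D5 * T_star)

# Illustrative (placeholder) constants in SI units, not the F114 calibration
sigma = jc_flow_stress(eps=0.05, eps_rate=10.0, T=600.0,
                       A=350e6, B=600e6, n=0.3, C=0.02, m=1.0)
```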
Procedia PDF Downloads 393
24889 Investment Adjustments to Exchange Rate Fluctuations Evidence from Manufacturing Firms in Tunisia
Authors: Mourad Zmami, Oussema BenSalha
Abstract:
The current research aims to assess empirically the reaction of private investment to exchange rate fluctuations in Tunisia using a sample of 548 firms operating in manufacturing industries between 1997 and 2002. The micro-econometric model we estimate is based on an accelerator-profit investment specification augmented by two variables that measure the variation and the volatility of exchange rates. Estimates using the system GMM method reveal that the effect of exchange rate depreciation on investment is negative, since it increases the cost of imported capital goods. Turning to exchange rate volatility, as measured by a GARCH(1,1) model, our findings assign a significant role to exchange rate uncertainty in explaining the sluggishness of private investment in Tunisia in the full sample of firms. Other estimation attempts based on various sub-samples indicate that the elasticities of investment with respect to exchange rate volatility depend on several firm-specific characteristics, such as size and ownership structure.
Keywords: investment, exchange rate volatility, manufacturing firms, system GMM, Tunisia
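As a minimal sketch of how the GARCH(1,1) volatility measure mentioned above can be computed, the snippet below filters a conditional variance series from exchange-rate returns. The returns and the parameter values are assumed for illustration; in the study the parameters would be estimated (e.g., by maximum likelihood) from the Tunisian exchange-rate series.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1} (GARCH(1,1))."""
    h = np.empty_like(returns)
    h[0] = np.var(returns)                 # initialize at the sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1]**2 + beta * h[t - 1]
    return h

# Placeholder exchange-rate returns and illustrative parameters
rng = np.random.default_rng(0)
fx_returns = rng.normal(0.0, 0.01, 500)
volatility = np.sqrt(garch11_variance(fx_returns, omega=1e-6, alpha=0.1, beta=0.85))
```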
Procedia PDF Downloads 412
24888 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility
Authors: Fu Jinyu, Lin Jinguan
Abstract:
This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model." We adopt realized range-based threshold estimation for the high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimation. The asymptotic theory is mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, it is demonstrated how the proposed approaches can be used in practice on real financial data.
Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate
Procedia PDF Downloads 160
24887 Using Complete Soil Particle Size Distributions for More Precise Predictions of Soil Physical and Hydraulic Properties
Authors: Habib Khodaverdiloo, Fatemeh Afrasiabi, Farrokh Asadzadeh, Martinus Th. Van Genuchten
Abstract:
The soil particle-size distribution (PSD) is known to affect a broad range of soil physical, mechanical, and hydraulic properties. A complete description of the PSD curve should provide more information about these properties than knowing only the soil textural class or the soil sand, silt, and clay (SSC) fractions. We compared the accuracy of 19 different models of the cumulative PSD in terms of fitting observed data from a large number of Iranian soils. Parameters of the six most promising models were correlated with measured values of the field saturated hydraulic conductivity (Kfs), the mean weight diameter of soil aggregates (MWD), bulk density (ρb), and porosity (φ). These same soil properties were also correlated with conventional PSD parameters (SSC fractions), selected geometric PSD parameters (notably the mean diameter dg and its standard deviation σg), and several other PSD parameters (D50 and D60). The objective was to find the best predictors of several soil physical quality indices and the soil hydraulic properties. Neither SSC nor dg, σg, D50, and D60 showed a significant correlation with Kfs or logKfs. However, the parameters of several cumulative PSD models showed statistically significant correlations with Kfs and/or logKfs (|r| = 0.42 to 0.65; p ≤ 0.05). The correlation between MWD and the model parameters was also generally higher than that with the SSC fractions and dg, or with D50 and D60. Porosity (φ) and bulk density (ρb) also showed significant correlations with several PSD model parameters, with ρb additionally correlating significantly with various geometric (dg), mechanical (D50 and D60), and agronomic (clay and sand) representations of the PSD. The fitted parameters of selected PSD models thus showed statistically significant correlations with Kfs, MWD, and soil porosity, which may be viewed as soil quality indices. The results of this study are promising for developing more accurate pedotransfer functions.
Keywords: particle size distribution, soil texture, hydraulic conductivity, pedotransfer functions
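To illustrate the kind of analysis described above, the sketch below fits a generic two-parameter logistic-type cumulative PSD curve with SciPy and notes how the fitted parameters could be correlated with Kfs across soils. The functional form and the data are assumptions for illustration; they are not one of the 19 models compared in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def psd_model(d, a, b):
    """Generic logistic-type cumulative PSD: mass fraction finer than diameter d (mm)."""
    return 1.0 / (1.0 + (a / d) ** b)

# Assumed example data for one soil: particle diameters (mm) and measured finer fractions
diam = np.array([0.002, 0.02, 0.05, 0.1, 0.25, 0.5, 1.0, 2.0])
frac = np.array([0.12, 0.30, 0.42, 0.55, 0.72, 0.85, 0.94, 1.00])

(a_fit, b_fit), _ = curve_fit(psd_model, diam, frac, p0=[0.1, 1.0])
print(a_fit, b_fit)

# Repeating the fit for every soil in a data set yields one (a, b) pair per soil,
# which can then be correlated (e.g., Pearson r) with measured Kfs, MWD, or porosity.
```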
Procedia PDF Downloads 279
24886 Investigation on Mesh Sensitivity of a Transient Model for Nozzle Clogging
Authors: H. Barati, M. Wu, A. Kharicha, A. Ludwig
Abstract:
A transient model for nozzle clogging has been developed and successfully validated against a laboratory experiment. The key steps of clogging are considered: transport of particles by the turbulent flow towards the nozzle wall; interactions between the fluid flow and the nozzle wall, and the adhesion of particles to the wall; and the growth of the clog layer and its interaction with the flow. The aim of the current paper is to investigate the mesh (size and type) sensitivity of the model in both two and three dimensions. It is found that the algorithm for clog growth alone, excluding the flow effect, is insensitive to the mesh type and size, but the calculation including flow becomes sensitive to the mesh quality. The use of 2D meshes leads to overestimation of the clog growth because the 3D nature of the flow in the boundary layer cannot be properly resolved by a 2D calculation. 3D simulation with a tetrahedral mesh can also lead to an erroneous estimate of the clog growth. A mesh-independent result can be achieved with a hexahedral mesh, or at least with triangular prisms (inflation layers) for the near-wall regions.
Keywords: clogging, continuous casting, inclusion, simulation, submerged entry nozzle
Procedia PDF Downloads 284
24885 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model
Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis
Abstract:
In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data
Procedia PDF Downloads 335
24884 Construction Unit Rate Factor Modelling Using Neural Networks
Authors: Balimu Mwiya, Mundia Muya, Chabota Kaliba, Peter Mukalula
Abstract:
Factors affecting construction unit cost vary depending on a country's political, economic, social and technological inclinations. Factors affecting construction costs have been studied from various perspectives. Analysis of cost factors requires an appreciation of a country's practices. Identified cost factors provide an indication of a country's construction economic strata. The purpose of this paper is to identify the essential factors that affect unit cost estimation and their breakdown using artificial neural networks. Twenty-five (25) identified cost factors in road construction were subjected to a questionnaire survey and, employing SPSS factor analysis, the factors were reduced to eight. The eight factors were analysed using a neural network (NN) to determine the proportionate breakdown of the cost factors in a given construction unit rate. The NN predicted that the political environment accounted for 44% of the unit rate, followed by contractor capacity at 22% and financial delays, project feasibility, and overhead and profit each at 11%. Project location, material availability and corruption perception index had minimal impact on the unit cost from the training data provided. Quantified cost factors can be incorporated in unit cost estimation models (UCEM) to produce more accurate estimates. This can create improvements in the cost estimation of infrastructure projects and establish a benchmark standard to assist the process of alignment of work practices and training of new staff, permitting the on-going development of best practices in cost estimation to become more effective.
Keywords: construction cost factors, neural networks, roadworks, Zambian construction industry
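A minimal sketch of this kind of analysis is shown below: an MLP is fitted to the eight reduced cost factors, and each factor's share of the unit rate is approximated by normalized permutation importance. The data, feature names, the use of scikit-learn, and the importance-based breakdown are assumptions for illustration; the study's own network and breakdown procedure may differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
X = rng.random((120, 8))                                   # placeholder scores for 8 cost factors
y = X @ rng.random(8) + 0.1 * rng.standard_normal(120)     # placeholder unit rates

nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)

# Approximate each factor's proportionate contribution by normalized permutation importance
imp = permutation_importance(nn, X, y, n_repeats=20, random_state=0).importances_mean
shares = np.clip(imp, 0, None) / np.clip(imp, 0, None).sum()
print({f"factor_{i + 1}": round(100 * s, 1) for i, s in enumerate(shares)})
```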
Procedia PDF Downloads 366
24883 Atmospheric CO2 Capture via Temperature/Vacuum Swing Adsorption in SIFSIX-3-Ni
Authors: Eleni Tsalaporta, Sebastien Vaesen, James M. D. MacElroy, Wolfgang Schmitt
Abstract:
Carbon dioxide capture has attracted the attention of many governments, industries and scientists over the last few decades due to the rapid increase in atmospheric CO2 concentration, with several studies being conducted in this area over the last few years. In many of these studies, CO2 capture in complex Pressure Swing Adsorption (PSA) cycles has been associated with high energy consumption despite the promising capture performance of such processes. The purpose of this study is the economic capture of atmospheric carbon dioxide for its transformation into a clean type of energy. A single-column Temperature/Vacuum Swing Adsorption (TSA/VSA) process is proposed as an alternative to multi-column Pressure Swing Adsorption (PSA) processes. The proposed adsorbent is SIFSIX-3-Ni, a newly developed MOF (Metal Organic Framework) with extended CO2 selectivity and capacity. There are three stages involved in this paper: (i) SIFSIX-3-Ni is synthesized and pelletized, and its physical and chemical properties are examined before and after the pelletization process; (ii) experiments are designed and undertaken for the estimation of the diffusion and adsorption parameters and limitations for CO2 captured from the air; and (iii) the CO2 adsorption capacity and dynamical characteristics of SIFSIX-3-Ni are investigated both experimentally and mathematically by employing a single-column TSA/VSA for the capture of atmospheric CO2. This work is further supported by a techno-economic study for the estimation of the investment cost and the energy consumption of the single-column TSA/VSA process. The simulations are performed using gPROMS.
Keywords: carbon dioxide capture, temperature/vacuum swing adsorption, metal organic frameworks, SIFSIX-3-Ni
Procedia PDF Downloads 263
24882 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter
Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri
Abstract:
Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and incidences of false alarms from ICU monitors. Clinical support in the ICU requires highly reliable heart rate estimation. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on detection of the R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel Signal Quality Index (SQI) assessment technique. The SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based upon the individual SQIs. Data fusion of the HR estimates was then performed by weighting each estimate by the Kalman filters' SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual HR estimates. The method was evaluated on the MIMIC II database of PhysioNet, collected from bedside monitors of ICU patients. The method provides an accurate HR estimate even in the presence of noise and artifacts.
Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion
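The fusion step described above can be sketched very simply: each source's HR estimate is combined with weights derived from its signal quality index. The quality-weighted average below is a stand-in for the SQI-modified innovation weighting applied to the Kalman filter outputs in the paper; the HR values and SQIs are illustrative.

```python
import numpy as np

def fuse_heart_rate(hr_estimates, sqi):
    """Fuse per-signal HR estimates with weights proportional to their quality indices.

    hr_estimates, sqi: 1-D arrays, one entry per source (ECG, ABP, EEG, EMG, EOG).
    """
    hr_estimates = np.asarray(hr_estimates, dtype=float)
    w = np.clip(np.asarray(sqi, dtype=float), 0.0, 1.0)
    if w.sum() == 0:
        return np.nan                      # no reliable source available
    return float(np.sum(w * hr_estimates) / w.sum())

# Illustrative values: ECG badly corrupted (low SQI), other sources cleaner
print(fuse_heart_rate([132.0, 78.5, 79.2, 80.1, 77.8], [0.1, 0.9, 0.7, 0.6, 0.5]))
```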
Procedia PDF Downloads 696
24881 Evaluation of the Elastic Mechanical Properties of a Hybrid Adhesive Material
Authors: Moudar H. A. Zgoul, Amin Al Zamer
Abstract:
Adhesive materials and adhesion have been the focal point of multiple research works related to numerous applications, particularly in the aerospace and aviation industries. To enhance the properties of conventional adhesive materials, additives have been introduced to the mix in order to enhance their mechanical and physical properties, creating a hybrid adhesive material. The evaluation of the mechanical properties of such hybrid adhesive materials is thus essential for modeling their behavior accurately. This paper presents an approach/tool to simulate the behavior of such hybrid adhesives in a way that will allow researchers to better understand their behavior while in service.
Keywords: adhesive materials, analysis, hybrid adhesives, mechanical properties, simulation
Procedia PDF Downloads 420
24880 Mechanical Properties and Microstructure of Ultra-High Performance Concrete Containing Fly Ash and Silica Fume
Authors: Jisong Zhang, Yinghua Zhao
Abstract:
The present study investigated the mechanical properties and microstructure of Ultra-High Performance Concrete (UHPC) containing supplementary cementitious materials (SCMs), such as fly ash (FA) and silica fume (SF), and verified the synergistic effect in the ternary system. On the basis of 30% fly ash replacement, the incorporation of either 10% SF or 20% SF shows better performance compared to the reference sample. The efficiency factor (k-value) was calculated as a measure of the synergistic effect to predict the compressive strength of UHPC with these SCMs. The SEM micrographs and the pore volume obtained from the BJH method indicate a high correlation with compressive strength. Furthermore, an artificial neural network model was constructed for prediction of the compressive strength of UHPC containing these SCMs.
Keywords: artificial neural network, fly ash, mechanical properties, ultra-high performance concrete
Procedia PDF Downloads 416
24879 DOA Estimation Using Golden Section Search
Authors: Niharika Verma, Sandeep Santosh
Abstract:
Direction of arrival (DOA) estimation is a localization technique used in the communications field. Various algorithms have been developed for direction of arrival estimation, such as MUSIC, ROOT-MUSIC, etc. These algorithms depend on various parameters, such as the number of antenna array elements, the number of snapshots, and others. Basically, the MUSIC spectrum is evaluated and the peaks obtained are taken as the angles of arrival. The angles evaluated using this process depend on the scanning interval chosen, and the accuracy of the results depends on the coarseness of that interval. In this paper, golden section search is applied to the MUSIC algorithm and, therefore, more accurate results are achieved. Initially, coarse DOA estimation is done using the MUSIC algorithm over the range -90 to 90 degrees at an interval of 10 degrees. After the peaks are obtained, fine DOA estimation is done using golden section search. In addition, the partitioning method is applied to estimate the number of signals incident on the antenna array. The dependency of the algorithm on the number of snapshots is also explained. Hence, accurate results are determined using this algorithm.
Keywords: Direction of Arrival (DOA), golden section search, MUSIC, number of snapshots
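The refinement step described above can be sketched as follows: after the coarse 10-degree MUSIC scan, golden section search maximizes the MUSIC pseudo-spectrum inside the bracket around each coarse peak. The pseudo-spectrum below is a placeholder function for illustration; in practice it is built from the noise-subspace eigenvectors and the array steering vector.

```python
import numpy as np

def golden_section_max(f, lo, hi, tol=1e-3):
    """Golden section search for the maximum of a unimodal function f on [lo, hi]."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~ 0.618
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) > f(d):
            b, d = d, c                          # maximum lies in [a, old d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                          # maximum lies in [old c, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Placeholder MUSIC pseudo-spectrum peaked near 23.4 degrees (illustration only)
music_spectrum = lambda theta: 1.0 / (1e-3 + (theta - 23.4) ** 2)

coarse_peak = 20.0                               # from the 10-degree coarse scan
fine_doa = golden_section_max(music_spectrum, coarse_peak - 10, coarse_peak + 10)
print(round(fine_doa, 2))                        # ~23.4 degrees
```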
Procedia PDF Downloads 447
24878 Estimation of the Effect of Initial Damping Model and Hysteretic Model on Dynamic Characteristics of Structure
Authors: Shinji Ukita, Naohiro Nakamura, Yuji Miyazu
Abstract:
In considering the dynamic characteristics of a structure, the natural frequency and damping ratio are useful indicators. When performing dynamic design, it is necessary to select an appropriate initial damping model and hysteretic model. In the linear region, the choice of initial damping model influences the response, and in the nonlinear region, the combination of initial damping model and hysteretic model influences the response. However, the dynamic characteristics of structures in the nonlinear region remain unclear. In this paper, we studied the effect of the initial damping model and hysteretic model settings on the dynamic characteristics of a structure. For the initial damping model, initial-stiffness-proportional, tangent-stiffness-proportional, and Rayleigh-type damping were used. For the hysteretic model, the TAKEDA model and the normal-trilinear model were used. As the study method, dynamic analysis was performed using a base-fixed lumped-mass model. During the analysis, the maximum acceleration of the input earthquake motion was gradually increased from 1 to 600 gal. The dynamic characteristics were calculated using the ARX model. Then, the characteristics of the 1st and 2nd natural frequencies and the 1st damping ratio were evaluated. The input earthquake motion was a simulated wave published by the Building Center of Japan. For the building model, an RC building with 30×30 m floor plans was assumed. The story height was 3 m and the total height was 18 m. The unit weight of each floor was 1.0 t/m2. The building natural period was set to 0.36 sec, and the initial stiffness of each floor was calculated by assuming the 1st mode to be an inverted triangle. First, we investigated the difference in the dynamic characteristics due to the initial damping model setting. With the increase in the maximum acceleration of the input earthquake motions, the 1st and 2nd natural frequencies decreased and the 1st damping ratio increased. In the natural frequencies, the difference due to the initial damping model setting was small, but in the damping ratio a significant difference was observed (initial stiffness proportional ≒ Rayleigh type > tangent stiffness proportional). The acceleration and displacement of the earthquake response were largest for the tangent-stiffness-proportional damping. In the range where the acceleration response increased, the damping ratio was constant; in the range where the acceleration response was constant, the damping ratio increased. Next, we investigated the difference in the dynamic characteristics due to the hysteretic model setting. With the increase in the maximum acceleration of the input earthquake motions, the natural frequency decreased for the TAKEDA model, but for the normal-trilinear model the natural frequency did not change. The damping ratio for the TAKEDA model was higher than that for the normal-trilinear model, although the damping ratio increased in both models. In conclusion, among the initial damping model settings, the tangent-stiffness-proportional model was rated the highest, and among the hysteretic model settings, the TAKEDA model was rated higher than the normal-trilinear model in the nonlinear region. Our results provide a useful indicator for dynamic design.
Keywords: initial damping model, damping ratio, dynamic analysis, hysteretic model, natural frequency
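For the Rayleigh-type initial damping mentioned above, the mass- and stiffness-proportional coefficients follow from two target modal damping ratios. The sketch below solves for them; the first frequency is taken from the 0.36 s building period given in the abstract, while the second frequency and the 2% damping targets are assumed values for illustration.

```python
import numpy as np

def rayleigh_coefficients(f1, f2, zeta1, zeta2):
    """Solve C = a0*M + a1*K so that modes at f1, f2 (Hz) have damping ratios zeta1, zeta2.

    Uses zeta_i = a0 / (2*w_i) + a1 * w_i / 2 with w_i = 2*pi*f_i.
    """
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
    A = np.array([[1.0 / (2 * w1), w1 / 2.0],
                  [1.0 / (2 * w2), w2 / 2.0]])
    a0, a1 = np.linalg.solve(A, np.array([zeta1, zeta2]))
    return a0, a1

# f1 = 1 / 0.36 s (from the abstract); f2 and the 2% targets are assumed
a0, a1 = rayleigh_coefficients(f1=1.0 / 0.36, f2=8.5, zeta1=0.02, zeta2=0.02)
print(a0, a1)
```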
Procedia PDF Downloads 178
24877 A Fuzzy Nonlinear Regression Model for Interval Type-2 Fuzzy Sets
Authors: O. Poleshchuk, E. Komarov
Abstract:
This paper presents a regression model for interval type-2 fuzzy sets based on the least squares estimation technique. The unknown coefficients are assumed to be triangular fuzzy numbers. The basic idea is to determine aggregation intervals for type-1 fuzzy sets whose membership functions are the lower and upper membership functions of the interval type-2 fuzzy set. These aggregation intervals are called weighted intervals. The lower and upper membership functions of the input and output interval type-2 fuzzy sets for the developed regression models are considered as piecewise linear functions.
Keywords: interval type-2 fuzzy sets, fuzzy regression, weighted interval
Procedia PDF Downloads 376
24876 Modeling Environmental, Social, and Governance Financial Assets with Lévy Subordinated Processes and Option Pricing
Authors: Abootaleb Shirvani, Svetlozar Rachev
Abstract:
ESG stands for Environmental, Social, and Governance and is a non-financial factor that investors use to identify material risks and growth opportunities in their analysis process. ESG ratings provide a quantitative measure of socially responsible investment, and it is essential to incorporate ESG ratings when modeling the dynamics of asset returns. In this article, we propose a triple subordinated Lévy process for incorporating numeric ESG ratings into dynamic asset pricing theory to model the time series properties of stock returns. The motivation for introducing three layers of subordinators is twofold. The first two layers of subordinators capture the skewed and fat-tailed properties of the stock return distribution that cannot be explained well by the existing Lévy subordinated models. The third layer of the subordinator introduces ESG valuation and incorporates numeric ESG ratings into dynamic asset pricing theory and option pricing. We employ the triple subordinated Lévy model to develop the ESG-valued stock return model, derive the implied ESG score surfaces for Microsoft, Apple, and Amazon stock returns, and compare the shapes of the implied ESG score surfaces for these stocks.
Keywords: ESG scores, dynamic asset pricing theory, multiple subordinated modeling, Lévy processes, option pricing
Procedia PDF Downloads 83
24875 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes
Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi
Abstract:
Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the estimation of the effects of these variables on the disease's progression. The objective of the study is to determine, through advanced statistical methods, how the biochemical and clinical variables in people with type 2 diabetes are interrelated with each other and what their effects on kidney disease progression are. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate with each other, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of the biochemical and clinical variables on the ordered-level response variable (progression of kidney function), considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetic cohort in a polyclinic hospital at the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with a negative loading of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (p-value = 0.000). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease.
Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes
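A minimal sketch of the two-step analysis described above is given below, assuming scikit-learn for the PCA and statsmodels' OrderedModel (version ≥ 0.13) for the cumulative-odds logit. The variable names and data are placeholders, not the Messina cohort data, and the code illustrates only the standard cumulative model, not the partial models used to relax proportionality.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
# Placeholder biochemical variables and an ordinal kidney-disease stage (0-3)
biochem = pd.DataFrame(rng.standard_normal((200, 7)),
                       columns=["hba1c", "glycemia", "creatinine", "total_chol",
                                "ldl", "hdl", "triglycerides"])
stage = pd.Series(rng.integers(0, 4, 200), name="ckd_stage")

# Step 1: reduce the correlated biochemical variables to uncorrelated components
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(biochem))
X = pd.DataFrame(scores, columns=["PC1", "PC2", "PC3"])

# Step 2: cumulative-odds ordered logit of disease stage on the principal components
fit = OrderedModel(stage, X, distr="logit").fit(method="bfgs", disp=False)
print(np.exp(fit.params[:3]))   # odds ratios for PC1-PC3
```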
Procedia PDF Downloads 42
24874 Supersymmetry versus Compositeness: 2-Higgs Doublet Models Tell the Story
Authors: S. De Curtis, L. Delle Rose, S. Moretti, K. Yagyu
Abstract:
Supersymmetry and compositeness are the two prevalent paradigms providing both a solution to the hierarchy problem and motivation for a light Higgs boson state. An open door towards a solution is found in the context of 2-Higgs Doublet Models (2HDMs), which are necessary in supersymmetry and natural within compositeness in order to enable Electro-Weak Symmetry Breaking. In scenarios of compositeness, the two isospin doublets arise as pseudo Nambu-Goldstone bosons from the breaking of SO(6). By calculating the Higgs potential at the one-loop level through the Coleman-Weinberg mechanism from the explicit breaking of the global symmetry induced by the partial compositeness of fermions and gauge bosons, we derive the phenomenological properties of the Higgs states and highlight the main signatures of this Composite 2-Higgs Doublet Model at the Large Hadron Collider. These include modifications of the SM-like Higgs couplings as well as production and decay channels of the heavier Higgs bosons. We contrast the properties of this composite scenario with the well-known ones established in supersymmetry, with the MSSM being the most prominent example. We show how 2HDM spectra of masses and couplings accessible at the Large Hadron Collider may allow one to distinguish between the two paradigms.
Keywords: beyond the standard model, composite Higgs, supersymmetry, Two-Higgs Doublet Model
Procedia PDF Downloads 127
24873 Estimation of Soil Moisture at High Resolution through Integration of Optical and Microwave Remote Sensing and Applications in Drought Analyses
Authors: Donglian Sun, Yu Li, Paul Houser, Xiwu Zhan
Abstract:
California experienced severe drought conditions in recent years. In this study, the drought conditions in California are analyzed using soil moisture anomalies derived from integrated optical and microwave satellite observations along with auxiliary land surface data. Based on the U.S. Drought Monitor (USDM) classifications, three typical drought conditions were selected for the analysis: extreme drought conditions in 2007 and 2013, severe drought conditions in 2004 and 2009, and normal conditions in 2005 and 2006. Drought is defined as a negative soil moisture anomaly. To estimate soil moisture at high spatial resolution, three approaches are explored in this study: the universal triangle model, which estimates soil moisture from the Normalized Difference Vegetation Index (NDVI) and Land Surface Temperature (LST); the basic model, which estimates soil moisture under different conditions with auxiliary data such as precipitation, soil texture, topography, and surface type; and the refined model, which uses accumulated precipitation and its lagging effects. It is found that the basic model shows better agreement with the USDM classifications than the universal triangle model, while the refined model, using precipitation accumulated from the previous summer to the current time, demonstrated the closest agreement with the USDM patterns.
Keywords: soil moisture, high resolution, regional drought, analysis and monitoring
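The universal triangle model mentioned above is commonly written as a low-order polynomial in normalized NDVI and LST; the sketch below fits such a polynomial by least squares. The training samples, polynomial order, and coefficients are assumed for illustration and are not the values used in the study.

```python
import numpy as np

def triangle_design_matrix(ndvi_n, lst_n, order=2):
    """Design matrix with columns NDVI*^i * LST*^j for all i, j <= order (inputs in [0, 1])."""
    cols = [ndvi_n**i * lst_n**j for i in range(order + 1) for j in range(order + 1)]
    return np.column_stack(cols)

# Placeholder training samples: normalized NDVI, normalized LST, observed soil moisture
rng = np.random.default_rng(3)
ndvi_n, lst_n = rng.random(300), rng.random(300)
sm_obs = 0.4 - 0.25 * lst_n + 0.15 * ndvi_n + 0.02 * rng.standard_normal(300)

A = triangle_design_matrix(ndvi_n, lst_n)
coeffs, *_ = np.linalg.lstsq(A, sm_obs, rcond=None)

# Apply the fitted polynomial to a new pixel
sm_pred = triangle_design_matrix(np.array([0.6]), np.array([0.3])) @ coeffs
print(sm_pred)
```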
Procedia PDF Downloads 138
24872 Forecast of Polyethylene Properties in the Gas Phase Polymerization Aided by Neural Network
Authors: Nasrin Bakhshizadeh, Ashkan Forootan
Abstract:
A major problem that affects quality control of the polymer in industrial polymerization is the lack of suitable on-line measurement tools to evaluate the properties of the polymer, such as the melt index and density. In the conventional approach, the polymerization is controlled manually by taking samples, measuring the quality of the polymer in the lab, and recording the results. This method is highly time-consuming and leads to the production of a large number of incompatible products. The online application for estimating the melt index and density proposed in this study is a neural network based on the input-output data of the polyethylene production plant. The temperature, the reactor bed level, the mass flow rates of ethylene, hydrogen, and butene-1, and the molar concentrations of ethylene, hydrogen, and butene-1 are used to establish the neural model. The neural network is trained on actual operational data using back-propagation and Levenberg-Marquardt techniques. The simulation results indicate that the neural network process model, established with three layers (one hidden layer) for forecasting the density and four layers for the melt index, is able to successfully predict these quality properties.
Keywords: polyethylene, polymerization, density, melt index, neural network
Procedia PDF Downloads 144
24871 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database
Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan
Abstract:
Applying anthropometric dimensions is considered one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions has been improved by developing an artificial neural network that aims to predict the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males from age 6 to 60 participated in the measurement of twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected over 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set to be the measured variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to predict the body dimensions of the population of Saudi Arabia significantly well. The network mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a multiple regression model. The ANN model performed better and yielded excellent correlation coefficients between the predicted and actual dimensions.
Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database
Procedia PDF Downloads 578
24870 Estimation of Sediment Transport into a Reservoir Dam
Authors: Kiyoumars Roushangar, Saeid Sadaghian
Abstract:
Although accurate sediment load prediction is very important in the planning, design, operation, and maintenance of water resources structures, the transport mechanism is complex, and deterministic transport models based on simplifying assumptions often lead to large prediction errors. In this research, two intelligent ANN methods, Radial Basis and General Regression Neural Networks, are first adopted to model the total sediment load transported into the Madani Dam reservoir (north of Iran) using the measured data, and then the applicability of the sediment transport methods developed by Engelund and Hansen, Ackers and White, Yang, and Toffaleti for predicting sediment load discharge is evaluated. Based on a comparison of the results, it is found that the GRNN model gives better estimates than the sediment rating curve and the classic methods mentioned.
Keywords: sediment transport, dam reservoir, RBF, GRNN, prediction
Procedia PDF Downloads 499
24869 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases
Authors: Daniel C. Bonzo
Abstract:
Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, one can think about extrapolation of data as extending information and conclusions from one estimand to another estimand. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is increasingly being accepted by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equally weighted and with weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation results showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well.
Keywords: clustered data, estimand, extrapolation, mixed model
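The cluster weighting schemes proposed above can be illustrated with a short sketch that combines cluster-level means using either equal weights or weights proportional to cluster size, together with a rough normal-theory interval. The data and cluster structure are placeholders, and the simple variance formula below ignores between-cluster variability, unlike a full mixed-model analysis.

```python
import numpy as np

def weighted_cluster_estimate(values, cluster_ids, scheme="equal"):
    """Point and rough interval estimate across clusters (subjects) of unequal size.

    scheme='equal'        : every cluster contributes the same weight
    scheme='proportional' : weight proportional to the number of units in the cluster
    """
    clusters = np.unique(cluster_ids)
    means = np.array([values[cluster_ids == c].mean() for c in clusters])
    sizes = np.array([(cluster_ids == c).sum() for c in clusters])
    w = np.ones_like(sizes, dtype=float) if scheme == "equal" else sizes.astype(float)
    w /= w.sum()
    est = np.sum(w * means)
    var_means = np.array([values[cluster_ids == c].var(ddof=1) / s
                          for c, s in zip(clusters, sizes)])
    se = np.sqrt(np.sum(w**2 * var_means))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Placeholder: outcomes nested within 5 subjects with unequal numbers of episodes
rng = np.random.default_rng(7)
cluster_ids = np.repeat(np.arange(5), [3, 8, 2, 6, 4])
values = rng.normal(10, 2, cluster_ids.size)
print(weighted_cluster_estimate(values, cluster_ids, scheme="proportional"))
```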
Procedia PDF Downloads 137
24868 Plot Scale Estimation of Crop Biophysical Parameters from High Resolution Satellite Imagery
Authors: Shreedevi Moharana, Subashisa Dutta
Abstract:
The present study focuses on the estimation of crop biophysical parameters, such as crop chlorophyll, nitrogen, and water stress, at plot scale in crop fields. To achieve this, we have used high-resolution LISS IV satellite imagery. A new methodology is proposed in this research work: the spectral shape function of the paddy crop is employed to obtain the significant wavelengths sensitive to paddy crop parameters. From the shape functions, regression index models were established for the critical wavelength with the minimum and maximum wavelengths of the multi-spectral high-resolution LISS IV data. Moreover, these functional relationships were utilized to develop the index models. From these index models, crop biophysical parameters were estimated and mapped from LISS IV imagery at plot scale at the crop field level. The results showed that the nitrogen content of the paddy crop varied from 2-8%, chlorophyll from 1.5-9%, and water content from 40-90%. It was observed that the variability in the rice agriculture system in India was purely a function of field topography.
Keywords: crop parameters, index model, LISS IV imagery, plot scale, shape function
Procedia PDF Downloads 168
24867 Comparison of Statistical Methods for Estimating Missing Precipitation Data in the River Subbasin Lenguazaque, Colombia
Authors: Miguel Cañon, Darwin Mena, Ivan Cabeza
Abstract:
In this work, the applicability of statistical methods for the estimation of missing precipitation data in the basin of the river Lenguazaque, located in the departments of Cundinamarca and Boyacá, Colombia, was compared and evaluated. The methods used were simple linear regression, the distance rate method, local averages, mean rates, correlation with nearby stations, and the multiple regression method. The analysis used to determine the effectiveness of the methods was performed using three statistical tools: the correlation coefficient (r2), the standard error of estimation, and the Bland-Altman test of agreement. The analysis was performed using real rainfall values removed randomly in each of the seasons and then estimated using the aforementioned methodologies to complete the missing data values. It was determined that the methods with the highest performance and accuracy in the estimation of data, under the conditions considered, are the multiple regression method with three nearby stations and a random application scheme supported by the precipitation behavior of related data sets.
Keywords: statistical comparison, precipitation data, river subbasin, Bland and Altman
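A minimal sketch of the best-performing approach described above (multiple regression on three nearby stations) is given below, with placeholder rainfall series; the station records, time steps, and values are assumed for illustration.

```python
import numpy as np

def fill_missing_by_regression(target, neighbors):
    """Impute NaNs in `target` via multiple linear regression on nearby-station records.

    target:    1-D array with NaNs at missing time steps
    neighbors: 2-D array (time x 3 stations) with complete records
    """
    target = np.asarray(target, dtype=float)
    X = np.column_stack([np.ones(len(target)), neighbors])   # add intercept column
    obs = ~np.isnan(target)
    beta, *_ = np.linalg.lstsq(X[obs], target[obs], rcond=None)
    filled = target.copy()
    filled[~obs] = X[~obs] @ beta
    return filled

# Placeholder monthly rainfall (mm) for the target gauge and three nearby stations
rng = np.random.default_rng(11)
neighbors = rng.gamma(2.0, 40.0, size=(120, 3))
target = neighbors.mean(axis=1) + rng.normal(0, 10, 120)
target[[5, 40, 77]] = np.nan                                  # artificially removed values
print(fill_missing_by_regression(target, neighbors)[[5, 40, 77]])
```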
Procedia PDF Downloads 468
24866 Age Estimation from Teeth among North Indian Population: Comparison and Reliability of Qualitative and Quantitative Methods
Authors: Jasbir Arora, Indu Talwar, Daisy Sahni, Vidya Rattan
Abstract:
Introduction: Age estimation is a crucial step in establishing the identity of a person, in the case of both the deceased and the living. In adults, age can be estimated on the basis of six regressive changes in teeth (attrition, secondary dentine, dentine transparency, root resorption, cementum apposition, and periodontal disease), qualitatively using a scoring system and quantitatively by a micrometric method. The present research was designed to establish the reliability of the qualitative (method 1) and quantitative (method 2) methods of age estimation among North Indians and to compare the efficacy of these two methods. Method: 250 single-rooted extracted teeth (18-75 yrs.) were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Before extraction, the periodontal score of each tooth was noted. Labiolingual sections were prepared and examined under a light microscope for regressive changes. Each parameter was scored using Gustafson's 0-3 point score system (qualitative), and the total score was calculated. For the quantitative method, each regressive change was measured in the form of 18 micrometric parameters under a microscope with the help of a measuring eyepiece. Age was estimated using linear and multiple regression analysis in Gustafson's method and Kedici's method, respectively. The estimated age was compared with the actual age on the basis of the absolute mean error. Results: In the pooled data, a significant correlation (r = 0.8) was observed between the total score and actual age by Gustafson's method. The total score generated an absolute mean error of ±7.8 years. For Kedici's method, a correlation coefficient of r = 0.5 (p < 0.01) was observed between all eighteen micrometric parameters and known age. Using the multiple regression equation, age was estimated, and the absolute mean error was found to be ±12.18 years. Conclusion: Gustafson's (qualitative) method was found to be the better predictor for age estimation among North Indians.
Keywords: forensic odontology, age estimation, North India, teeth
Procedia PDF Downloads 242
24865 Physicochemical Characterization of Coastal Aerosols over the Mediterranean Comparison with Weather Research and Forecasting-Chem Simulations
Authors: Stephane Laussac, Jacques Piazzola, Gilles Tedeschi
Abstract:
Estimation of the impact of atmospheric aerosols on climate evolution is an important scientific challenge. One major source of particles is the oceans, through the generation of sea-spray aerosols. In coastal areas, marine aerosols can affect air quality through their ability to interact chemically and physically with other aerosol species and gases. The integration of accurate sea-spray emission terms in modeling studies is therefore required. However, it has been found that sea-spray concentrations are not represented with the necessary accuracy in some situations, particularly at short fetch. In this study, the WRF-Chem model was implemented over a north-western Mediterranean coastal region. WRF-Chem is the Weather Research and Forecasting (WRF) model online-coupled with chemistry for the investigation of regional-scale air quality, which simulates the emission, transport, mixing, and chemical transformation of trace gases and aerosols simultaneously with the meteorology. One of the objectives was to test the ability of the WRF-Chem model to represent the fine details of the coastal geography in order to provide accurate predictions of sea spray evolution for different fetches and of the anthropogenic aerosols. To assess the performance of the model, a comparison between the model predictions using a local emission inventory and the physicochemical analysis of aerosol concentrations measured for different wind directions on the island of Porquerolles, located 10 km south of the French Riviera, is proposed.
Keywords: sea-spray aerosols, coastal areas, sea-spray concentrations, short fetch, WRF-Chem model
Procedia PDF Downloads 196
24864 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation
Authors: Ekin Nurbaş
Abstract:
One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DOA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing (CS)-based DoA estimation methods have been proposed by researchers, and it has been found that these algorithms achieve significant performance for DoA estimation even in scenarios where there are multiple coherent sources. On the other hand, the Genetic Algorithm, a method that provides a solution strategy inspired by natural selection, has been used in sparse representation problems in recent years and provides significant improvements in performance. With all of this in consideration, in this paper, a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for Direction of Arrival (DoA) estimation in the Compressive Sensing (CS) framework is proposed. In this method, we generate a multi-objective optimization problem by splitting the norm minimization and reconstruction loss minimization parts of the Compressive Sensing algorithm. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is obtained with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing
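The non-dominated (Pareto-frontier) selection step described above can be sketched with plain NumPy: each candidate solution is scored on the two objectives (sparsity norm and reconstruction loss), and only the solutions not dominated by any other are kept. The candidate scores and the distance-to-ideal rule standing in for the MCDM stage are placeholders; the GA itself is not shown.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated rows (both objectives to be minimized)."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominated_by = np.all(objectives <= objectives[i], axis=1) & \
                       np.any(objectives < objectives[i], axis=1)
        if np.any(dominated_by):
            keep[i] = False
    return np.where(keep)[0]

# Placeholder GA population scored on (L1 norm of the sparse vector, reconstruction loss)
rng = np.random.default_rng(5)
scores = rng.random((50, 2))
front = pareto_front(scores)

# A simple MCDM stand-in: pick the frontier point closest to the ideal point (0, 0)
best = front[np.argmin(np.linalg.norm(scores[front], axis=1))]
print(front, best)
```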
Procedia PDF Downloads 147
24863 Advertising Incentives of National Brands against Private Labels: The Case of OTC Heartburn Drugs
Authors: Lu Liao
Abstract:
The worldwide expansion of private labels over the past two decades not only transformed the choice sets of consumers but also forced manufacturers of national brands to design new marketing strategies to maintain their market positions. This paper empirically analyzes the impact of private labels on advertising incentives of national brands. The paper first develops a consumer demand model that incorporates spillover effects of advertising and finds positive spillovers of national brands' advertising on demand for private label products. With the demand estimates, the researcher simulates the equilibrium prices and advertising levels for leading national brands in a counterfactual where private labels are eliminated to quantify the changes in national brands' advertising incentives in response to the rise of private labels.
Keywords: advertising, demand estimation, spillover effect, structural model
Procedia PDF Downloads 29
24862 F-VarNet: Fast Variational Network for MRI Reconstruction
Authors: Omer Cahana, Maya Herman, Ofer Levi
Abstract:
Magnetic resonance imaging (MRI) is a lengthy medical scan, which stems from its long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower bound on the sampling rate. However, it is still possible to accelerate the scan by using a different approach, such as compressed sensing (CS) or parallel imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. In order to achieve that, two properties have to hold: i) the signal must be sparse in a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm needs to be applied to recover the signal. Despite the rapid advances in the deep learning (DL) field, which has demonstrated tremendous success in various computer vision tasks, the field of MRI reconstruction is still at an early stage. In this paper, we present an extension of the state-of-the-art model in MRI reconstruction, VarNet. We extend VarNet by using dilated convolutions at different scales, which enlarges the receptive field to capture more contextual information. Moreover, we simplified the sensitivity map estimation (SME), as it holds many unnecessary layers for this task. These improvements have shown significant decreases in computation costs as well as higher accuracy.
Keywords: MRI, deep learning, variational network, computer vision, compressed sensing
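The multi-scale dilated convolutions mentioned above can be illustrated with a small PyTorch block that applies parallel 3x3 convolutions with increasing dilation rates and concatenates the results. The channel counts and dilation rates are assumptions for illustration, not the F-VarNet configuration.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, concatenated along channels.

    Larger dilation enlarges the receptive field without increasing the kernel size.
    """
    def __init__(self, in_ch=2, out_ch=16, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.fuse(feats)

# A 2-channel input stands in for the real/imaginary parts of an undersampled image
x = torch.randn(1, 2, 64, 64)
y = MultiScaleDilatedBlock()(x)          # -> shape (1, 16, 64, 64)
```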
Procedia PDF Downloads 163