Search results for: quantile function model
20000 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie-Curie ITN project and focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The need to study this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose to identify the model parameters by minimizing a cost function measuring the difference between experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to find the unknowns of the AWJM model and their optimal values, which could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strongly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain results acceptable for manufacturing and to expect the proper identification of the unknowns.
This approach also gives us the ability to extend the research to more complex cases and to consider different types of model and measurement errors, as well as a 3D time-dependent model with variations of the jet feed speed.
Keywords: abrasive waterjet milling, inverse problem, model parameters identification, regularization
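The cost-minimization step this abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's method: the AWJM PDE forward model is replaced by a hypothetical linear operator `F`, and the adjoint-computed gradient (obtained with TAPENADE in the paper) by its explicit closed form, with a Tikhonov regularization term of the kind mentioned above.

```python
import numpy as np

def identify(F, d, p0, alpha=1e-2, lr=0.01, iters=3000):
    """Gradient descent on J(p) = ||F p - d||^2 + alpha ||p - p0||^2."""
    p = p0.copy()
    for _ in range(iters):
        r = F @ p - d                          # model-vs-measurement residual
        grad = 2.0 * F.T @ r + 2.0 * alpha * (p - p0)
        p = p - lr * grad
    return p

rng = np.random.default_rng(0)
F = rng.normal(size=(20, 3))                   # stand-in forward model
p_true = np.array([1.0, -2.0, 0.5])
d = F @ p_true + 0.01 * rng.normal(size=20)    # noisy synthetic "trench" data
p_est = identify(F, d, p0=np.zeros(3))
```

The regularization term keeps the iteration stable in the presence of data noise, which mirrors the abstract's observation that the unregularized problem is highly unstable.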
Procedia PDF Downloads 315
19999 Study of a Crude Oil Desalting Plant of the National Iranian South Oil Company in Gachsaran by Using Artificial Neural Networks
Authors: H. Kiani, S. Moradi, B. Soltani Soulgani, S. Mousavian
Abstract:
Desalting/dehydration plants (DDP) are often installed in crude oil production units in order to remove water-soluble salts from an oil stream. In order to optimize this process, the desalting unit should be modeled. In this research, an artificial neural network is used to model the efficiency of the desalting unit as a function of its input parameters. The results show that the model is in good agreement with experimental data.
Keywords: desalting unit, crude oil, neural networks, simulation, recovery, separation
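As an illustration of the modeling idea in this abstract, here is a minimal one-hidden-layer network fitted to synthetic data; the desalting-unit inputs, the architecture, and the data are all assumptions of this sketch, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=(100, 2))                 # e.g. wash-water ratio, temperature
y = 0.9 - 0.3 * X[:, 0] + 0.2 * X[:, 1]        # synthetic "efficiency" target

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    return (h @ W2 + b2).ravel(), h

def mse(pred):
    return float(np.mean((pred - y) ** 2))

initial = mse(forward(X)[0])
lr = 0.05
for _ in range(3000):                          # plain batch gradient descent
    pred, h = forward(X)
    e = ((pred - y) / len(y))[:, None]         # scaled output error
    dh = (e @ W2.T) * (1 - h ** 2)             # backprop through tanh
    W2 -= lr * (h.T @ e); b2 -= lr * e.sum(0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(0)
final = mse(forward(X)[0])
```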
Procedia PDF Downloads 449
19998 The Application of the Variable Coefficient Jacobian Elliptic Function Method to Differential-Difference Equations
Authors: Chao-Qing Dai
Abstract:
In modern nonlinear science and textile engineering, nonlinear differential-difference equations are often used to describe some nonlinear phenomena. In this paper, we extend the variable coefficient Jacobian elliptic function method, which was used to find new exact travelling wave solutions of nonlinear partial differential equations, to nonlinear differential-difference equations. As illustration, we derive two series of Jacobian elliptic function solutions of the discrete sine-Gordon equation.
Keywords: discrete sine-Gordon equation, variable coefficient Jacobian elliptic function method, exact solutions, equation
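The Jacobian elliptic functions that serve as building blocks for the travelling-wave solutions mentioned above are available in SciPy; a quick numerical check of their standard identities:

```python
import numpy as np
from scipy.special import ellipj

# ellipj(u, m) returns sn(u|m), cn(u|m), dn(u|m) and the amplitude phi.
u = np.linspace(-3.0, 3.0, 101)
m = 0.7                                        # elliptic parameter, 0 <= m <= 1
sn, cn, dn, ph = ellipj(u, m)

# Standard identities satisfied by the building blocks:
#   sn^2 + cn^2 = 1   and   dn^2 + m * sn^2 = 1
id1 = np.max(np.abs(sn**2 + cn**2 - 1.0))
id2 = np.max(np.abs(dn**2 + m * sn**2 - 1.0))
```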
Procedia PDF Downloads 666
19997 A Multi-Release Software Reliability Growth Model Incorporating Imperfect Debugging and Change-Point under the Simulated Testing Environment and Software Release Time
Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar
Abstract:
The testing process during software development is a crucial step, as it makes the software more efficient and dependable. To estimate software reliability through the mean value function, many software reliability growth models (SRGMs) were developed under the assumption that the operating and testing environments are the same. In practice, this is not true: when the software operates in a natural field environment, its reliability differs. This article discusses an SRGM comprising change-point and imperfect debugging in a simulated testing environment. Later on, we extend it in a multi-release direction. Initially, the software is released to the market with few features. According to the market's demand, the software company upgrades the current version by adding new features as time passes. Therefore, we have proposed a generalized multi-release SRGM where change-point and imperfect debugging concepts have been addressed in a simulated testing environment. The failure-increasing-rate concept has been adopted to determine the change-point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets. The results demonstrate that the proposed model fits the datasets better. We have also discussed the optimal release time of the software through a cost model by assuming that the testing and debugging costs are time-dependent.
Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors
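The mean value function with a change-point that this abstract builds on can be sketched as follows; the functional form and the values of a, b1, b2, and tau are illustrative assumptions, not the paper's fitted model. Before the change-point tau the fault-detection rate is b1; after it, b2, with m(t) kept continuous.

```python
import math

def mean_value(t, a=100.0, b1=0.05, b2=0.15, tau=10.0):
    """Expected cumulative number of faults detected by time t (NHPP sketch)."""
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    m_tau = a * (1.0 - math.exp(-b1 * tau))          # faults found by the change-point
    return m_tau + (a - m_tau) * (1.0 - math.exp(-b2 * (t - tau)))
```

The function starts at zero, increases monotonically, and saturates at the total fault content a, which is the qualitative behaviour an NHPP mean value function must have.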
Procedia PDF Downloads 72
19996 The Pore-Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using the Level Set Method
Authors: Jiahui You, Kyung Jae Lee
Abstract:
Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction, by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is developed based on OpenFOAM, which is an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity by accounting for the fluid-solid mass transfer; an advection-diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, where the solid-liquid interface is updated by solving the level set distance function and reinitializing it to a signed distance function. Then, a smoothed Heaviside function gives a smoothed solid-liquid interface over a narrow band with a fixed thickness. The stated equations are discretized by the finite volume method, while the re-initialization equation is discretized by the central difference method. A Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical result is compared with the 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. Sensitivity analysis is conducted with various Damkohler numbers (DaII) and Peclet numbers (Pe). We categorize the Fe(III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth.
Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII<1, the precipitation occurs uniformly on the solid surface in both upstream and downstream directions. When DaII>1, the precipitation mainly occurs on the solid surface in the upstream direction. When Pe>1, Fe(II) is transported deep into the pores and precipitates inside them. When Pe<1, the precipitation of Fe(III) occurs mainly on the solid surface in the upstream direction, and it easily precipitates inside the small pore structures. The porosity-permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe(II) dissolution, transport, and Fe(III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs to avoid clogging and wellbore pollution. Understanding Fe(III) precipitation and Fe(II) release and transport behaviors supports highly efficient hydraulic fracturing projects.
Keywords: reactive transport, shale, kerogen, precipitation
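The regime boundaries stated above (DaII = 1 and Pe = 1) can be encoded as a small lookup; the wording of the categories is a paraphrase of the abstract, not the paper's exact taxonomy.

```python
def precipitation_regime(da2: float, pe: float) -> str:
    """Classify where Fe(III) precipitation is expected to concentrate."""
    if da2 > 1.0:
        surface = "upstream solid surface"       # reaction fast relative to transport
    else:
        surface = "uniform over solid surface"   # reaction slow relative to transport
    if pe > 1.0:
        depth = "deep inside pores"              # advection carries Fe(II) inward
    else:
        depth = "near small pore structures"
    return f"{surface}; {depth}"
```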
Procedia PDF Downloads 162
19995 Allostatic Load as a Predictor of Adolescents’ Executive Function: A Longitudinal Network Analysis
Authors: Sipu Guo, Silin Huang
Abstract:
Background: Most studies investigate the link between executive function and allostatic load (AL) among adults aged 18 years and older. Studies differ regarding the specific biological indicators and executive functions accounted for. Specific executive functions may be differentially related to allostatic load. We investigated the comorbidities of executive functions and allostatic load via network analysis. Methods: We included 603 adolescents (49.84% girls; mean age = 12.38, SD = 1.79) from junior high schools in rural China. Eight biological markers at T1 and four executive function tasks at T2 were used to evaluate the networks. Network analysis was used to determine the network structure, core symptoms, and bridge symptoms in the AL-executive function network among rural adolescents. Results: The executive functions were related to six AL biological markers, but not to cortisol or epinephrine. The most influential symptoms were inhibition control, cognitive flexibility, processing speed, and systolic blood pressure (SBP). SBP, dehydroepiandrosterone, and processing speed were the bridges through which AL was related to executive functions. Dehydroepiandrosterone strongly predicted processing speed. SBP was the biggest influencer in the entire network. Conclusions: We found evidence for differential relations between markers and executive functions. SBP was a driver in the network; dehydroepiandrosterone showed strong relations with executive function.
Keywords: allostatic load, executive function, network analysis, rural adolescent
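The two network quantities used above, node strength (influence) and bridge strength across the AL and executive-function communities, can be sketched on a toy 3-node network; the edge weights below are made up for illustration, and the real network has eight biomarkers and four tasks.

```python
import numpy as np

labels = ["SBP", "DHEA", "processing_speed"]
community = [0, 0, 1]                     # 0 = allostatic load, 1 = executive function
W = np.array([[0.0, 0.3, 0.6],
              [0.3, 0.0, 0.2],
              [0.6, 0.2, 0.0]])           # symmetric edge weights (e.g. partial r)

strength = np.abs(W).sum(axis=1)          # overall influence of each node
bridge = np.array([                       # weight carried across communities
    sum(abs(W[i, j]) for j in range(len(labels)) if community[j] != community[i])
    for i in range(len(labels))
])
most_influential = labels[int(np.argmax(strength))]
```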
Procedia PDF Downloads 50
19994 Improvement of Central Composite Design in Modeling and Optimization of Simulation Experiments
Authors: A. Nuchitprasittichai, N. Lerdritsirikoon, T. Khamsing
Abstract:
Simulation modeling can be used to solve real-world problems. It provides an understanding of a complex system. To develop a simplified model of a process simulation, a suitable experimental design is required to capture the surface characteristics. This paper presents the experimental design and algorithm used to model a process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin Hypercube Sampling (LHS) was proposed to be combined with existing Central Composite Design (CCD) samples to improve the performance of CCD in generating the second-order model of the system. The second-order model was then used as the objective function of the optimization problem. The results showed that adding LHS samples to CCD samples can help capture surface curvature characteristics. A suitable number of LHS sample points should be considered in order to obtain an accurate nonlinear model with a minimum number of simulation experiments.
Keywords: central composite design, CO2 liquefaction, latin hypercube sampling, simulation-based optimization
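The LHS augmentation step described above relies on stratified sampling; a minimal sketch of a Latin Hypercube sampler (one point per stratum of [0, 1) in each dimension):

```python
import numpy as np

def lhs(n: int, d: int, rng) -> np.ndarray:
    """n Latin Hypercube samples in [0, 1)^d."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n   # one point per stratum
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])                 # decouple the dimensions
    return u

rng = np.random.default_rng(42)
samples = lhs(10, 2, rng)
```

These points could then be appended to the CCD design matrix before fitting the second-order model, which is the augmentation idea the abstract describes.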
Procedia PDF Downloads 164
19993 Analytical Modeling of Globular Protein-Ferritin in α-Helical Conformation: A White Noise Functional Approach
Authors: Vernie C. Convicto, Henry P. Aringa, Wilson I. Barredo
Abstract:
This study presents a conformational model of the helical structures of globular proteins, particularly ferritin, in the framework of the white noise path integral formulation, using Associated Legendre functions, Bessel functions, and convolutions of Bessel and trigonometric functions as modulating functions. The model incorporates the chirality features of proteins and their helix-turn-helix sequence structural motif.
Keywords: globular protein, modulating function, white noise, winding probability
Procedia PDF Downloads 472
19992 A Hybrid Method for Determination of Effective Poles Using Clustering Dominant Pole Algorithm
Authors: Anuj Abraham, N. Pappa, Daniel Honc, Rahul Sharma
Abstract:
In this paper, an analysis of some model order reduction techniques is presented. A new hybrid algorithm for model order reduction of linear time-invariant systems is compared with conventional techniques, namely Balanced Truncation, Hankel Norm reduction, and the Dominant Pole Algorithm (DPA). The proposed hybrid algorithm, known as the Clustering Dominant Pole Algorithm (CDPA), is able to compute the full set of dominant poles and their cluster centers efficiently. The dominant poles of a transfer function are specific eigenvalues of the state-space matrix of the corresponding dynamical system. The effectiveness of this novel technique is shown through simulation results.
Keywords: balanced truncation, clustering, dominant pole, Hankel norm, model reduction
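The dominance measure behind DPA-style methods can be sketched for a small SISO state-space model: rank each pole (eigenvalue of A) by |residue| / |Re(pole)|. The 3-state example system below is made up for illustration.

```python
import numpy as np

A = np.diag([-1.0, -5.0, -50.0])              # state matrix of a toy SISO system
b = np.array([1.0, 1.0, 1.0])                 # input vector
c = np.array([1.0, 0.2, 0.2])                 # output vector

lam, V = np.linalg.eig(A)                     # poles = eigenvalues of A
Winv = np.linalg.inv(V)                       # rows are left eigenvectors
residues = (c @ V) * (Winv @ b)               # R_i = (c^T v_i)(w_i^T b)
dominance = np.abs(residues) / np.abs(lam.real)
dominant_pole = lam[np.argmax(dominance)]     # pole contributing most to the response
```

A clustering step as in CDPA would then group nearby dominant poles and retain cluster centers for the reduced model.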
Procedia PDF Downloads 597
19991 Correlation Between Diastolic Function and Lower GLS in Hypertensive Patients
Authors: A. Kherraf, S. Ouarrak, L. Azzouzi, R. Habbal
Abstract:
Introduction: Heart failure with preserved LVEF is an important cause of mortality and morbidity in hypertensive patients. A strong correlation between impaired diastolic function and longitudinal systolic dysfunction could have several explanations: first, diastole is an energy-dependent process, especially during its first phase; it also includes active systolic components during the isovolumetric relaxation phase; in addition, impairment of intrinsic myocyte function is part of hypertensive pathology, as evidenced by recent studies. Methods and materials: This work consists of performing, in a series of 333 hypertensive patients (aged 25 to 75 years), a complete echocardiographic study, including LVEF by the Simpson biplane method, calculation of the indexed left ventricular mass, analysis of diastolic function, and finally the study of the longitudinal deformation of the LV by the speckle-tracking technique (calculation of the GLS). Patients with secondary hypertension, regurgitant or stenotic valve disease, arrhythmia, or a history of coronary insufficiency were excluded from this study. Results: Of the 333 hypertensive patients, 225 patients (67.5%) had impaired diastolic function, of whom 60 patients (18%) had high filling pressures. 49.39% had echocardiographic left ventricular hypertrophy. Almost all of these patients (60 patients) had low GLS. There is a statistically very significant relationship between lower GLS and increased left ventricular filling pressures in hypertensive patients. These results suggest that increased filling pressures are closely associated with atrioventricular interaction in patients with hypertension, with a strong correlation with impairment of longitudinal systolic function and diastolic function. Conclusion: Overall, a linear relationship is established between increased left ventricular mass, diastolic dysfunction, and longitudinal LV systolic dysfunction.
Keywords: hypertension, diastolic function, left ventricle, heart failure
Procedia PDF Downloads 125
19990 The Impact of Audit Committee Industry Expertise on Internal Audit Function
Authors: Abdulaziz Alzeban
Abstract:
This study examines whether the internal audit function is indeed stronger when audit committee members have industry expertise combined with auditing expertise. Data from a survey of 64 chief internal auditors from companies registered on the Saudi Stock Exchange (Tadawul) provide results suggesting that when audit committee members possess both industry expertise and auditing expertise, the committee’s role in improving the quality of internal audit is enhanced. This outcome is concluded to be one that can be generalized beyond the Saudi Arabian context.
Keywords: internal audit, audit committee, industry expertise, function
Procedia PDF Downloads 355
19989 A Predictive Model for Turbulence Evolution and Mixing Using Machine Learning
Authors: Yuhang Wang, Jorg Schluter, Sergiy Shelyag
Abstract:
The high cost associated with high-resolution computational fluid dynamics (CFD) is one of the main challenges that inhibit the design, development, and optimisation of new combustion systems adapted for renewable fuels. In this study, we propose a physics-guided CNN-based model to predict turbulence evolution and mixing without requiring a traditional CFD solver. The model architecture is built upon U-Net and the inception module, while a physics-guided loss function is designed by introducing two additional physical constraints to allow for the conservation of both mass and pressure over the entire predicted flow fields. Then, the model is trained on the Large Eddy Simulation (LES) results of a natural turbulent mixing layer with two different Reynolds number cases (Re = 3000 and 30000). As a result, the model prediction shows an excellent agreement with the corresponding CFD solutions in terms of both spatial distributions and temporal evolution of turbulent mixing. Such promising model prediction performance opens up the possibilities of doing accurate high-resolution manifold-based combustion simulations at a low computational cost for accelerating the iterative design process of new combustion systems.
Keywords: computational fluid dynamics, turbulence, machine learning, combustion modelling
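The physics-guided loss described above, a data misfit plus a conservation penalty, can be sketched with a discrete divergence term; the penalty weight `lam` and the field shapes are assumptions of this sketch, not the paper's exact loss.

```python
import numpy as np

def physics_guided_loss(pred, target, lam=0.1):
    """pred/target: arrays of shape (H, W, 2) holding (u, v) velocity fields."""
    mse = np.mean((pred - target) ** 2)           # data-misfit term
    du_dx = np.gradient(pred[..., 0], axis=1)     # du/dx
    dv_dy = np.gradient(pred[..., 1], axis=0)     # dv/dy
    divergence = du_dx + dv_dy                    # mass-conservation violation
    return mse + lam * np.mean(divergence ** 2)

H = W = 16
target = np.ones((H, W, 2))                       # uniform (divergence-free) flow
loss_free = physics_guided_loss(target, target)
divergent = target.copy()
divergent[..., 0] = np.arange(W)[None, :] * 1.0   # u = x  =>  du/dx = 1
loss_div = physics_guided_loss(divergent, target)
```

In training, this scalar would be minimized instead of the plain MSE, penalizing predictions that violate mass conservation even when they match the data.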
Procedia PDF Downloads 89
19988 The Effect of Impact on the Knee Joint Due to the Shocks during Double Impact Phase of Gait Cycle
Authors: Jobin Varghese, V. M. Akhil, P. K. Rajendrakumar, K. S. Sivanandan
Abstract:
The major contributors to human locomotion are knee flexion and extension. During heel strike, a large amount of energy is transmitted through the leg towards the knee joint, which is in fact damped at the heel and by the leg muscles. During high shocks, although the energy is damped to a certain extent, the remaining force is transmitted towards the knee joint, which could damage the knee. Due to the vital function of the knee joint, it should be protected against damage due to additional loads acting on it. This work concentrates on the development of a spring-mass-damper system that replicates the stiffness at the heel and muscles, and the objective function is optimized to minimize the force acting at the knee joint. Further, data collected using a force plate are put into the model to verify its integrity and are found to be in good agreement.
Keywords: spring, mass, damper, knee joint
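The lumped model described above can be sketched as a damped oscillator integrated with semi-implicit Euler; the mass, damping, and stiffness values below are illustrative, not the identified heel/muscle parameters.

```python
def simulate(m=1.0, c=4.0, k=100.0, x0=0.01, dt=1e-4, steps=20000):
    """Free response of m x'' + c x' + k x = 0 from a heel-strike displacement x0."""
    x, v = x0, 0.0
    for _ in range(steps):
        a = -(c * v + k * x) / m        # acceleration from damper and spring
        v += a * dt                     # semi-implicit (symplectic) Euler step
        x += v * dt
    return x

x_final = simulate()                    # displacement after 2 s of settling
```

With these values the system is underdamped (damping ratio 0.2), so the initial shock rings down and the displacement decays toward zero, which is the damping behaviour the abstract attributes to the heel and muscles.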
Procedia PDF Downloads 269
19987 Reliable Soup: Reliable-Driven Model Weight Fusion on Ultrasound Imaging Classification
Authors: Shuge Lei, Haonan Hu, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Yan Tong
Abstract:
It remains challenging to measure the reliability of classification results from different machine learning models. This paper proposes a reliable soup optimization algorithm based on the model weight fusion algorithm Model Soup, aiming to improve reliability by using dual-channel reliability as the objective function to fuse a series of weights in the breast ultrasound classification models. Experimental results on breast ultrasound clinical datasets demonstrate that reliable soup significantly enhances the reliability of breast ultrasound image classification tasks. The effectiveness of the proposed approach was verified via multicenter trials. The results from five centers indicate that the reliability optimization algorithm can enhance the reliability of the breast ultrasound image classification model and exhibit low multicenter correlation.
Keywords: breast ultrasound image classification, feature attribution, reliability assessment, reliability optimization
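The weight-fusion step underlying Model Soup can be sketched as parameter-wise averaging of checkpoints; the reliability-driven selection proposed above would decide which checkpoints enter the soup, whereas this sketch simply includes all of them (an assumption of the illustration).

```python
import numpy as np

def uniform_soup(checkpoints):
    """checkpoints: list of dicts mapping layer name -> weight array."""
    keys = checkpoints[0].keys()
    return {k: np.mean([ckpt[k] for ckpt in checkpoints], axis=0) for k in keys}

# Two toy checkpoints with made-up layer names and weights
ckpt_a = {"conv1": np.array([1.0, 2.0]), "fc": np.array([[0.0]])}
ckpt_b = {"conv1": np.array([3.0, 4.0]), "fc": np.array([[2.0]])}
soup = uniform_soup([ckpt_a, ckpt_b])
```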
Procedia PDF Downloads 83
19986 Exploring the Entrepreneur-Function in Uncertainty: Towards a Revised Definition
Authors: Johan Esbach
Abstract:
The entrepreneur has traditionally been defined through various historical lenses, emphasising individual traits, risk-taking, speculation, innovation and firm creation. However, these definitions often fail to address the dynamic nature of modern entrepreneurial functions, which respond to unpredictable uncertainties and transition to routine management as certainty is achieved. This paper proposes a revised definition, positioning the entrepreneur as a dynamic function rather than a human construct: a function that emerges to address specific uncertainties in economic systems but fades once uncertainty is resolved. By examining historical definitions and their limitations, including the works of Cantillon, Say, Schumpeter, and Knight, this paper identifies a gap in the literature and develops a generalised definition of the entrepreneur. The revised definition challenges conventional thought by shifting focus from static attributes such as alertness, traits, firm creation, etc., to a dynamic role that includes reliability, adaptation, scalability, and adaptability. The methodology of this paper employs a mixed approach, combining theoretical analysis and case study examination to explore the dynamic nature of the entrepreneurial function in relation to uncertainty. The selection of case studies includes companies like Airbnb, Uber, Netflix, and Tesla, as these firms demonstrate a clear transition from entrepreneurial uncertainty to routine certainty. The data from the case studies are then analysed qualitatively, focusing on the patterns of the entrepreneurial function across the selected companies. These results are then validated using quantitative analysis derived from an independent survey. The primary finding of the paper will validate the entrepreneur as a dynamic function rather than a static, human-centric role.
In considering the transition from uncertainty to certainty in companies like Airbnb, Uber, Netflix, and Tesla, the study shows that the entrepreneurial function emerges explicitly to address market, technological, or social uncertainties. Once these uncertainties are resolved and certainty in the operating environment is established, the need for the entrepreneurial function ceases, giving way to routine management and business operations. The paper emphasises the need for a definitive model that responds to the temporal and contextualised nature of the entrepreneur. In adopting the revised definition, the entrepreneur is positioned to play a crucial role in the reduction of uncertainties within economic systems. Once the uncertainties are addressed, certainty is manifested in new combinations or new firms. Finally, the paper outlines policy implications for fostering environments that enable the entrepreneurial function and transition theory.
Keywords: dynamic function, uncertainty, revised definition, transition
Procedia PDF Downloads 19
19985 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation
Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo Vázquez, Jonas Funkquist, Sotirios Thanopoulos
Abstract:
One of the important requirements in many code generation projects is defining some of the model parameters as tunable. This helps to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation by the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining the parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make the parameters tunable in the generated runtime modules. Three techniques are proposed for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, some test Simulink models are developed and used to evaluate the results of the proposed approaches. A brief summary of the study results is presented in the following. First of all, parameters defined as tunable and used in defining the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after the code generation, and this update will affect the values of all elements defined based on the tunable parameter. For instance, if parameter K=1 is defined as a tunable parameter in the code generation process and this parameter is used as the gain of a gain block in Simulink, the gain value of the block is equal to 1 in the TwinCAT environment after the code generation. But the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without doing any new code generation in MATLAB). Then, the gain value of the gain block will change to 2. Secondly, adding a callback function in the form of a “pre-load function,” “post-load function,” or “start function” will not help to make the parameters tunable without performing a new code generation.
This means that any MATLAB files should be run before performing the code generation. The parameters defined or calculated in such a file will be used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make these parameters flexible, since the MATLAB files will not be attached to the generated code. Therefore, to change the parameters defined or calculated in these files, the code generation should be done again. However, adding these files as callback functions forces MATLAB to run them before the code generation, and there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter in defining or calculating the values of other parameters through a mask is an efficient method to change the value of the latter parameters after the code generation. For instance, if tunable parameter K is used in calculating the value of two other parameters K1 and K2 and, after the code generation, the value of K is updated in the TwinCAT environment, the values of parameters K1 and K2 will also be updated (without any new code generation).
Keywords: code generation, MATLAB, tunable parameters, TwinCAT
Procedia PDF Downloads 225
19984 Lung Function, Urinary Heavy Metals and Other Influencing Factors among a Community in Klang Valley
Authors: Ammar Amsyar Abdul Haddi, Mohd Hasni Jaafar
Abstract:
Heavy metals are elements naturally present in the environment that can cause adverse health effects. However, little literature was found on their effects on lung function, where impairment may lead to various lung diseases. The objective of the study is to explore lung function impairment, urinary heavy metal levels, and their associated factors among the community in Klang Valley, Malaysia. Sampling was done in Kuala Lumpur suburban public and housing areas during community events from March 2019 until October 2019. Respondents who gave consent were given a questionnaire to answer and then proceeded to a lung function test. Urine samples were obtained at the end of the session and sent for inductively coupled plasma mass spectrometry (ICP-MS) analysis of heavy metal cadmium (Cd) and lead (Pb) concentrations. A total of 200 samples were analysed; 52% of respondents were male, with ages ranging from 18 to 74 years and a mean age of 38.44. Urine samples showed that 12% of respondents (n=22) had a Cd level above average, and 1.5% of respondents (n=3) had urinary Pb at an above-normal level. Bivariate analysis showed a positive correlation between urinary Cd and urinary Pb (r=0.309; p<0.001). Furthermore, there was a negative correlation between urinary Cd level and forced vital capacity (FVC) (r=-0.202, p=0.004), forced expiratory volume in 1 second (FEV1) (r=-0.225, p=0.001), and forced expiratory flow between 25-75% of FVC (FEF25%-75%) (r=-0.187, p=0.008). However, urinary Pb did not show any association with FVC, FEV1, FEV1/FVC, or FEF25%-75%. Multiple linear regression analysis showed that urinary Cd remained significant and negatively affected FVC% (p=0.025) and FEV1% (p=0.004) achieved from the predicted value.
On top of that, other factors such as education level (p=0.013) and duration of smoking (p=0.003) may influence both urinary Cd and performance in lung function as well, suggesting Cd as a potential mediating factor between smoking and impairment of lung function. However, there was no interaction detected between the heavy metals or other influencing factors in this study. In short, a negative linear relationship was detected between urinary Cd and lung function, and urinary Cd is likely to affect lung function in a restrictive pattern. Since smoking is also an influencing factor for urinary Cd and lung function impairment, it is highly suggested that smokers be screened for lung function and urinary Cd level in the future for early disease prevention.
Keywords: lung function, heavy metals, community
Procedia PDF Downloads 154
19983 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to the strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models fail to fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that obliterates the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF) and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
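The weighted cumulative incidence idea (WCIF) can be sketched with softmax attention weights over cause-specific CIFs; the scores and the exponential CIF shapes below are made-up stand-ins for the learned quantities in the model above.

```python
import numpy as np

times = np.linspace(0.0, 10.0, 50)
cif = np.stack([1 - np.exp(-0.05 * times),     # cause-specific CIF, cause 1
                1 - np.exp(-0.20 * times)])    # cause-specific CIF, cause 2

scores = np.array([0.3, 1.1])                  # stand-in risk-information scores
w = np.exp(scores) / np.exp(scores).sum()      # softmax -> weights sum to 1
wcif = w @ cif                                 # weighted CIF over the K risks
```

Because the weights form a convex combination, the WCIF inherits monotonicity from the cause-specific curves and always lies between the smallest and largest CIF at each time point.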
Procedia PDF Downloads 88
19982 Plot Scale Estimation of Crop Biophysical Parameters from High Resolution Satellite Imagery
Authors: Shreedevi Moharana, Subashisa Dutta
Abstract:
The present study focuses on the estimation of crop biophysical parameters such as crop chlorophyll, nitrogen, and water stress at plot scale in crop fields. To achieve this, we have used high-resolution LISS IV satellite imagery. A new methodology has been proposed in this research work: the spectral shape function of the paddy crop is employed to identify the significant wavelengths sensitive to paddy crop parameters. From the shape functions, regression index models were established for the critical wavelength with the minimum and maximum wavelengths of the multi-spectral high-resolution LISS IV data. Moreover, these functional relationships were utilized to develop the index models. From these index models, crop biophysical parameters were estimated and mapped from LISS IV imagery at plot scale at the crop field level. The results showed that the nitrogen content of the paddy crop varied from 2-8%, chlorophyll from 1.5-9%, and water content from 40-90%. It was observed that the variability in the rice agriculture system in India was purely a function of field topography.
Keywords: crop parameters, index model, LISS IV imagery, plot scale, shape function
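The index-model pipeline described above, a normalized-difference index from two sensitive wavelengths followed by a regression to the crop parameter, can be sketched as follows; the band reflectances, coefficients, and calibration data are synthetic assumptions, not the paper's fitted models.

```python
import numpy as np

def nd_index(r_a, r_b):
    """Normalized-difference index from reflectance at two sensitive bands."""
    return (r_a - r_b) / (r_a + r_b)

# Synthetic calibration: chlorophyll (%) vs. index, fitted by least squares
rng = np.random.default_rng(7)
index = rng.uniform(0.1, 0.8, size=30)
chl = 1.5 + 9.0 * index + 0.05 * rng.normal(size=30)
slope, intercept = np.polyfit(index, chl, 1)   # linear index model

# Apply the fitted model to one plot's band reflectances (made-up values)
predicted = intercept + slope * nd_index(0.55, 0.25)
```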
Procedia PDF Downloads 166
19981 Financial Liberalization, Exchange Rates and Demand for Money in Developing Economies: The Case of Nigeria, Ghana and Gambia
Authors: John Adebayo Oloyhede
Abstract:
This paper examines the effect of financial liberalization on the stability of the demand for money function and its implications for the exchange rate behaviour of three African countries. As the demand for money function is regarded as one of the two main building blocks of most exchange rate determination models, the other being purchasing power parity, its stability is required for the monetary models of exchange rate determination to hold. To what extent has the liberalization policy of these countries, for instance liberalized interest rates, affected the demand for money function, and what has been the consequence for the validity and relevance of floating exchange rate models? The study adopts the autoregressive instrumental variable (AIV) multiple regression technique and follows the Almon polynomial procedure with a zero-end constraint. Data for the period 1986 to 2011 were drawn from three developing African countries, namely Gambia, Ghana and Nigeria, which not only started liberalization and the floating system at almost the same time but also share similar yet diverse economic and financial structures. The findings show that the demand for money was a stable function of income and interest rates at home and abroad. Other factors, such as the exchange rate and the foreign interest rate, exerted some significant effect on domestic money demand. The short-run and long-run elasticities with respect to income, interest rates, the expected inflation rate, and exchange rate expectations are not greater than zero. This evidence conforms, to some extent, to the expected behaviour of the domestic money function and underscores its ability to serve as a good building block or assumption of the monetary model of exchange rate determination.
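The Almon procedure named above restricts the distributed-lag weights to a low-order polynomial, and the zero-end constraint forces the weight just beyond the last lag to vanish. A minimal sketch of how such constrained weights are constructed (the polynomial coefficients are illustrative, not the paper's estimates):

```python
def almon_weights(a, n_lags):
    """Almon lag weights beta_i = sum_j a_j * (i**j - (n_lags+1)**j), j = 1..p.
    Using the basis i**j - (n_lags+1)**j builds in the zero-end constraint:
    the weight at lag n_lags + 1 is identically zero."""
    end = n_lags + 1
    return [sum(a_j * (i ** (j + 1) - end ** (j + 1)) for j, a_j in enumerate(a))
            for i in range(n_lags + 2)]  # include the constrained endpoint

# quadratic polynomial over 4 lags; weights decline smoothly to zero
w = almon_weights([-0.5, 0.05], n_lags=4)
```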
This will, therefore, assist appropriate monetary authorities in the design and implementation of further financial liberalization policy packages in developing countries.
Keywords: financial liberalisation, exchange rates, demand for money, developing economies
Procedia PDF Downloads 369
19980 Real-Time Classification of Hemodynamic Response by Functional Near-Infrared Spectroscopy Using an Adaptive Estimation of General Linear Model Coefficients
Authors: Sahar Jahani, Meryem Ayse Yucel, David Boas, Seyed Kamaledin Setarehdan
Abstract:
Near-infrared spectroscopy allows monitoring of the oxy- and deoxy-hemoglobin concentration changes associated with the hemodynamic response function (HRF). The HRF is usually contaminated by natural physiological hemodynamics (systemic interference) occurring in all body tissues, including brain tissue, which makes HRF extraction a very challenging task. In this study, we used a Kalman filter based on a general linear model (GLM) of brain activity to estimate the proportion of systemic interference in the brain hemodynamics. The performance of the proposed algorithm is evaluated in terms of the peak-to-peak error (Ep), the mean square error (MSE), and Pearson's correlation coefficient (R²) between the estimated and the simulated hemodynamic responses. The technique is also capable of real-time estimation of single-trial functional activations, and it was applied to classify finger tapping versus the resting state. The average real-time classification accuracy of 74% over 11 subjects demonstrates the feasibility of developing an effective functional near-infrared spectroscopy based brain-computer interface (fNIRS-BCI).
Keywords: hemodynamic response function, functional near-infrared spectroscopy, adaptive filter, Kalman filter
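A minimal sketch of the kind of Kalman update involved, tracking GLM coefficients under a random-walk state model. This is a simplified, generic filter, not the authors' exact algorithm; the noise settings and variable names are assumptions:

```python
import numpy as np

def kalman_glm(X, y, q=1e-4, r=1.0):
    """Track GLM coefficients beta_t with a random-walk state model:
        beta_t = beta_{t-1} + w_t,   y_t = x_t . beta_t + v_t,
    where Var(w) = q*I and Var(v) = r. Returns the final estimate."""
    n, p = X.shape
    beta = np.zeros(p)
    P = np.eye(p)                               # state covariance
    for t in range(n):
        x = X[t]
        P = P + q * np.eye(p)                   # predict step
        S = x @ P @ x + r                       # innovation variance
        K = P @ x / S                           # Kalman gain
        beta = beta + K * (y[t] - x @ beta)     # measurement update
        P = P - np.outer(K, x @ P)              # covariance update
    return beta

# synthetic regression stream with known coefficients
rng = np.random.default_rng(1)
X = rng.standard_normal((400, 3))
true_beta = np.array([1.0, -2.0, 0.5])
y = X @ true_beta + 0.05 * rng.standard_normal(400)
est = kalman_glm(X, y, r=0.05 ** 2)
```

Because the state is modeled as a slow random walk, the same recursion can follow time-varying coefficients, which is what makes the single-trial, real-time use described above possible.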
Procedia PDF Downloads 160
19979 Parallel Evaluation of Sommerfeld Integrals for Multilayer Dyadic Green’s Function
Authors: Duygu Kan, Mehmet Cayoren
Abstract:
Sommerfeld integrals (SIs) are commonly encountered in electromagnetics problems involving the analysis of antennas and scatterers embedded in planar multilayered media. Generally speaking, an analytical solution of SIs is unavailable, and it is well known that numerical evaluation of SIs is very time consuming and computationally expensive due to the highly oscillating and slowly decaying nature of the integrands. Therefore, fast computation of SIs is of paramount importance. In this paper, a parallel code has been developed to speed up the computation of SIs in the framework of the calculation of the dyadic Green’s function in multilayered media. The OpenMP shared-memory approach is used to parallelize the SI algorithm, resulting in significant time savings. Moreover, accelerating the computation of the dyadic Green’s function is discussed based on the parallel SI algorithm developed.
Keywords: Sommerfeld integrals, multilayer dyadic Green’s function, OpenMP, shared memory parallel programming
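The paper's implementation uses OpenMP in a compiled language; as a language-neutral sketch of the same shared-memory work division, the snippet below splits an oscillatory integral into chunks and evaluates them concurrently with a thread pool. The integrand is a simple cosine, purely illustrative, not an actual Sommerfeld kernel:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def chunk_integral(f, a, b, n=4001):
    """Composite trapezoid rule on one sub-interval (uniform grid)."""
    x = np.linspace(a, b, n)
    fx = f(x)
    h = (b - a) / (n - 1)
    return h * (fx.sum() - 0.5 * (fx[0] + fx[-1]))

def parallel_integral(f, a, b, n_chunks=8):
    """Split [a, b] into chunks and integrate them concurrently,
    mimicking the loop-level work sharing of an OpenMP parallel for."""
    edges = np.linspace(a, b, n_chunks + 1)
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        parts = pool.map(lambda i: chunk_integral(f, edges[i], edges[i + 1]),
                         range(n_chunks))
    return sum(parts)

# illustrative highly oscillating integrand; exact value is sin(40)/40
omega = 40.0
val = parallel_integral(lambda x: np.cos(omega * x), 0.0, 1.0)
```

Because the chunks share endpoints, the partial trapezoid sums add up exactly to the single-interval result, so the parallel decomposition changes the runtime but not the answer.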
Procedia PDF Downloads 244
19978 Determination of Inflow Performance Relationship for Naturally Fractured Reservoirs: Numerical Simulation Study
Authors: Melissa Ramirez, Mohammad Awal
Abstract:
The inflow performance relationship (IPR) of a well is the relation between the oil production rate and the flowing bottom-hole pressure. This relationship is an important tool for petroleum engineers to understand and predict well performance. In the petroleum industry, IPR correlations are used to design and evaluate well completions, optimize well production, and design artificial lift. The most commonly used IPR correlation models are those of Vogel and Wiggins; these models are applicable to homogeneous and isotropic reservoir data. In this work, a new IPR model is developed to determine the inflow performance relationship of oil wells in a naturally fractured reservoir. A 3D black-oil reservoir simulator is used to develop the oil mobility function for the studied reservoir. Four flow rates are simulated to record the oil saturation and calculate the relative permeability for the naturally fractured reservoir. The new method uses the results of a well test analysis along with permeability and pressure-volume-temperature data in the fluid flow equations to obtain the oil mobility function. Comparisons between the new method and two popular correlations for non-fractured reservoirs indicate the necessity of developing and using an IPR correlation specifically for fractured reservoirs.
Keywords: inflow performance relationship, mobility function, naturally fractured reservoir, well test analysis
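For comparison, the Vogel correlation mentioned above has a simple closed form; a sketch follows, with illustrative values for the maximum rate and pressures:

```python
def vogel_rate(q_max, p_res, p_wf):
    """Vogel's IPR for a solution-gas-drive well:
       q / q_max = 1 - 0.2 (p_wf / p_res) - 0.8 (p_wf / p_res)**2
    where p_res is reservoir pressure and p_wf is flowing
    bottom-hole pressure."""
    r = p_wf / p_res
    return q_max * (1.0 - 0.2 * r - 0.8 * r * r)

# sweep bottom-hole pressure for a well with q_max = 1000 STB/d, p_res = 3000 psia
rates = [vogel_rate(1000.0, 3000.0, p) for p in (3000.0, 1500.0, 0.0)]
```

The curve runs from zero rate when the well is shut in (p_wf = p_res) up to q_max at zero bottom-hole pressure, which is the behavior the new fractured-reservoir model is benchmarked against.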
Procedia PDF Downloads 277
19977 Design of EV Steering Unit Using AI Based on Estimate and Control Model
Authors: Seong Jun Yoon, Jasurbek Doliev, Sang Min Oh, Rodi Hartono, Kyoojae Shin
Abstract:
Electric power steering (EPS), now commonly used in electric vehicles, is an electrically driven steering device. Compared to hydraulic systems, EPS offers advantages such as simple system components, easy maintenance, and improved steering performance. However, because the EPS system is a nonlinear model, difficult problems arise in controller design. To address these, various machine learning and artificial intelligence approaches, notably artificial neural networks (ANN), have been applied. An ANN can effectively determine the relationships between inputs and outputs in a data-driven manner. This research explores two main areas: designing an EPS identifier using an ANN-based backpropagation (BP) algorithm and enhancing the EPS system controller with an ANN-based Levenberg-Marquardt (LM) algorithm. The proposed ANN-based BP algorithm shows superior performance and accuracy compared to linear transfer function estimators, while the LM algorithm offers better input angle reference tracking and faster response times than traditional PID controllers. Overall, the proposed ANN methods demonstrate significant promise in improving EPS system performance.
Keywords: ANN backpropagation modelling, electric power steering, transfer function estimator, electrical vehicle driving system
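A minimal sketch of ANN-based identification by backpropagation on a synthetic nonlinear map. The plant, network size, and hyperparameters are all illustrative assumptions, not the authors' EPS model:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic nonlinear plant: a steering-like static map (illustrative only)
u = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.tanh(2.0 * u) + 0.1 * u ** 3

# one-hidden-layer network trained by plain backpropagation with MSE loss
W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, y0 = forward(u)
loss0 = np.mean((y0 - y) ** 2)          # loss before training
for _ in range(2000):
    h, yhat = forward(u)
    e = (yhat - y) / len(u)             # dL/dyhat (constant factor absorbed in lr)
    gW2 = h.T @ e; gb2 = e.sum(0)
    dh = (e @ W2.T) * (1 - h ** 2)      # backprop through the tanh layer
    gW1 = u.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, y1 = forward(u)
loss1 = np.mean((y1 - y) ** 2)          # loss after training
```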
Procedia PDF Downloads 41
19976 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy, so identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for under-determined blind source separation, and the method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm for estimating the mixing matrix of speech signals in the UBSS model is proposed. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper develops an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the proposed approach not only improves the accuracy of estimation but also applies to any mixing matrix.
Keywords: DBSCAN, potential function, speech signal, the UBSS model
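To illustrate the clustering stage only, the sketch below recovers the mixing directions in the idealized noise-free case where the four sources have disjoint supports, so every observation lies exactly on one column of the mixing matrix. This is the textbook idealization underlying sparse component analysis, not the improved potential-function method of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
# four sparse sources (one active per sample) mixed into two channels
n = 400
S = np.zeros((4, n))
active = rng.integers(0, 4, n)                  # index of the active source
S[active, np.arange(n)] = rng.uniform(0.5, 1.0, n)

angles_true = np.array([0.2, 0.7, 1.3, 2.0])    # column directions in radians
A = np.vstack([np.cos(angles_true), np.sin(angles_true)])  # 2x4 mixing matrix
X = A @ S                                       # the two observed channels

# each observation lies on one column direction, so the sample angles
# cluster exactly at the column angles; recover them by de-duplication
theta = np.mod(np.arctan2(X[1], X[0]), np.pi)
est = np.unique(np.round(theta, 6))             # four recovered directions
```

With noise, the angles scatter around the column directions, and this trivial de-duplication must be replaced by a genuine clustering step (e.g. DBSCAN or a potential function), which is exactly where the accuracy issues addressed by the paper arise.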
Procedia PDF Downloads 133
19975 A Study of Behavioral Phenomena Using an Artificial Neural Network
Authors: Yudhajit Datta
Abstract:
Will is a phenomenon that has puzzled humanity for a long time. It is a belief that the will power of an individual affects the success achieved by that individual in life. It is thought that a person endowed with great will power can overcome even the most crippling setbacks of life, while a person with a weak will cannot make the most of life even with the greatest of assets. Behavioral aspects of the human experience such as will are rarely subjected to quantitative study owing to the numerous uncontrollable parameters involved. This work is an attempt to subject the phenomenon of will to the test of an artificial neural network. The claim being tested is that the will power of an individual largely determines the success achieved in life. In the study, an attempt is made to incorporate the behavioral phenomenon of will into a computational model using data pertaining to the success of individuals obtained from an experiment. A neural network is trained using data based upon part of the model and subsequently used to make predictions regarding will corresponding to data points of success. If the prediction is in agreement with the model values, the model is retained as a candidate. Ultimately, the best-fit model from among the many candidates is selected and used for studying the correlation between success and will.
Keywords: will power, will, success, apathy factor, random factor, characteristic function, life story
Procedia PDF Downloads 377
19974 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow
Authors: Masood Otarod, Ronald M. Supkowski
Abstract:
This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be utilized to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of the CO adsorption of the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of the axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But ideal plug-flow is impossible to achieve, and flow regimes approximating plug-flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentrations of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of the axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function.
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and exhibit the same variability, but in the reverse order of the concentrations of the mobile-phase compounds. Factorability is a property of packed beds that transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of the adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, which is expected based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations
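Under assumed notation (not taken from the paper), the factorization described above can be sketched as:

```latex
% c(z,r): mobile-phase concentration, u(r): axial velocity, R: bed radius.
% Mean radial cup-mixing (velocity-weighted) concentration:
\bar{c}(z) \;=\; \frac{\int_0^R u(r)\, c(z,r)\, r\, dr}{\int_0^R u(r)\, r\, dr}
% Factorization into the cup-mixing mean and a dimensionless function:
c(z,r) \;=\; \bar{c}(z)\,\varphi(z,r),
\qquad
\frac{\int_0^R u(r)\,\varphi(z,r)\, r\, dr}{\int_0^R u(r)\, r\, dr} \;=\; 1,
```

where the normalization of φ follows directly from the definition of the cup-mixing mean, so the factorization introduces no new unknowns beyond the dimensionless profile.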
Procedia PDF Downloads 268
19973 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement
Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti
Abstract:
Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning: data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing
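A minimal sketch of the quantile-regression idea in the title: minimizing the pinball loss recovers the empirical τ-quantile of a usage series, i.e. a provisioning level that the true usage exceeds only (1 − τ) of the time. The data are synthetic and a constant predictor is used for brevity:

```python
import numpy as np

def pinball(theta, y, tau):
    """Pinball (quantile) loss of a constant predictor theta:
    under-prediction is penalized by tau, over-prediction by 1 - tau."""
    d = y - theta
    return np.mean(np.maximum(tau * d, (tau - 1.0) * d))

usage = np.arange(1.0, 101.0)           # synthetic CPU-usage samples
tau = 0.9                               # provision for the 90th percentile
grid = np.linspace(0.0, 120.0, 2401)
theta_star = grid[np.argmin([pinball(t, usage, tau) for t in grid])]
```

Provisioning at the fitted τ-quantile instead of the worst case is what lets a predictor cut energy waste while still honoring the SLA with probability τ.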
Procedia PDF Downloads 106
19972 Flange/Web Distortional Buckling of Cold-Formed Steel Beams with Web Holes under Pure Bending
Authors: Nan-Ting Yu, Boksun Kim, Long-Yuan Li
Abstract:
Cold-formed steel beams with web holes are widely used as load-carrying members in structural engineering. The perforations free up space in the building and allow pipes to pass through. However, perforated cold-formed steel (PCFS) beams may fail by distortional buckling more easily than beams with plain webs, because the rotational stiffness provided by the web decreases. It is well known that distortional buckling can be described as the buckling of the compressed flange-lip system. In fact, near ultimate failure, the flange/web corner moves laterally, which indicates that the bending of the web should be taken into account. The purpose of this study is to give a specific solution for the critical stress of flange/web distortional buckling of PCFS beams. The new model is derived from the classical energy method, with the deflection of the web represented by the shape functions of the plane beam element. Finite element analyses have been performed to validate the accuracy of the proposed model. The comparison of the critical stresses calculated from Hancock's model, the FEA, and the present model shows that the present model provides an excellent prediction of the flange/web distortional buckling of PCFS beams.
Keywords: cold-formed steel, beams, perforations, flange-web distortional buckling, finite element analysis
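The shape functions of a plane beam element referred to above are the standard cubic Hermite polynomials; a sketch under the usual textbook notation (xi = x/L, with end deflections w and end rotations θ; this is the generic finite-element result, not the paper's derivation):

```latex
N_1(\xi) = 1 - 3\xi^2 + 2\xi^3, \qquad
N_2(\xi) = L\,(\xi - 2\xi^2 + \xi^3),
N_3(\xi) = 3\xi^2 - 2\xi^3, \qquad
N_4(\xi) = L\,(-\xi^2 + \xi^3),
w(\xi) = N_1\,w_1 + N_2\,\theta_1 + N_3\,w_2 + N_4\,\theta_2 .
```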
Procedia PDF Downloads 128
19971 The Improvement of Environmental Protection through Motor Vehicle Noise Abatement
Authors: Z. Jovanovic, Z. Masonicic, S. Dragutinovic, Z. Sakota
Abstract:
In this paper, a methodology for reducing the noise of motor vehicles in use is presented. The methodology relies on a synergic model of noise generation as a function of time: an arbitrary number of motor vehicle noise sources act in concert, yielding the overall noise level of the vehicle. The number of noise sources participating in the overall noise level is subject to the constraint that the acoustic potential of each noise source under consideration can be calculated, which is the prerequisite for calculating the acoustic potential of the whole vehicle. The pertinent set of equations describing the synergic model is recast and solved by Gaussian elimination. A range of results emerged, and those ensuing from the application of the model to an MDD FAP Priboj motor vehicle in use are elucidated in particular.
Keywords: noise abatement, MV noise sources, noise source identification, muffler
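The "act in concert" summation of incoherent noise sources is conventionally done on an energy basis; below is a sketch of the standard decibel combination (the generic acoustics formula, not the paper's synergic model):

```python
import math

def combined_level(levels_db):
    """Overall sound pressure level of incoherent sources acting in
    concert: L = 10 * log10( sum_i 10**(L_i / 10) )."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

# two equal 70 dB sources combine to about 73 dB (a 3 dB rise),
# while a source 10 dB below the loudest barely changes the total
total = combined_level([70.0, 70.0])
```

This energy-based addition is why identifying and muffling the dominant source matters most: quieting a source well below the loudest one changes the overall level only marginally.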
Procedia PDF Downloads 443