Search results for: variable coefficient Jacobian elliptic function method
24664 A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates
Authors: Yemane Hailu Fissuh, Zhongzhan Zhang
Abstract:
Survival data arise from systematic observation of subjects from a defined time origin up to failure or censoring. Survival analysis is a major area of interest in biostatistics and biomedical research, and many scientific and medical research questions are, at heart, questions of causality. The main concern of this study is therefore to investigate the causal effect of treatment on survival time, conditional on possibly time-varying covariates. Causality often differs from simple association between the response variable and predictors: a causal estimand compares a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation function, and the method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of accommodating time-varying effects in the model. The finite-sample performance of the estimators was demonstrated through simulation and the Stanford heart transplant data. The average effect of treatment on survival time was estimated after adjusting for biases arising from the high correlation between left truncation and the possibly time-varying covariates; the covariate bias was corrected by estimating the density function of the left-truncation variable. Moreover, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate.
The expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and the unspecified monotone transformation functions. In summary, the ratio of cumulative hazard functions between the treated and untreated groups serves as the average causal effect for the entire population.
Keywords: modified estimating equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate
Procedia PDF Downloads 175
24663 An Optimal Control Model to Determine Body Forces of Stokes Flow
Authors: Yuanhao Gao, Pin Lin, Kees Weijer
Abstract:
In this paper, we determine the external body force distribution of a Stokes fluid motion using mathematical modelling and numerical approximation. The body force distribution is regarded as the unknown variable and is determined using optimal control theory. The Stokes flow and its velocity field are generated by given forces in a unit square domain. A regularized objective functional is built to match the computed flow velocity with the generated velocity data, so that the force distribution can be determined by minimizing the functional, which measures the difference between the numerical and experimental velocities. Applying the Lagrange multiplier method then yields a system of partial differential equations constituting the optimal control system to be solved. The finite element method and the conjugate gradient method are used to discretize the equations and derive the iterative update of the target body force, from which the velocity and the body force distribution are computed numerically. The programming environment FreeFEM++ supports the implementation of this model.
Keywords: optimal control model, Stokes equation, finite element method, conjugate gradient method
Procedia PDF Downloads 404
24662 Descent Algorithms for Optimization Algorithms Using q-Derivative
Authors: Geetanjali Panda, Suvrakanti Chakraborty
Abstract:
In this paper, Newton-like descent methods are proposed for unconstrained optimization problems, using q-derivatives of the gradient of the objective function. First, a local scheme is developed with an alternative sufficient optimality condition, and the method is then extended to a global scheme. Moreover, a variant of the practical Newton scheme is developed by introducing a real sequence. Global convergence of these schemes is proved under mild conditions. Numerical experiments and graphical illustrations are provided. Finally, performance profiles on a test set show that the proposed schemes are competitive with existing first-order schemes for optimization problems.
Keywords: descent algorithm, line search method, q-calculus, quasi-Newton method
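The q-derivative that these schemes are built on can be illustrated with a minimal descent sketch. This is not the authors' algorithm: it assumes the Jackson q-derivative definition D_q f(x) = (f(qx) - f(x)) / ((q - 1)x), and the step size, q value, and test function are illustrative choices.

```python
import numpy as np

def q_derivative(f, x, q=0.9, h=1e-8):
    """Jackson q-derivative D_q f(x) = (f(q*x) - f(x)) / ((q - 1) * x).
    Falls back to an ordinary central difference near x = 0, where the
    q-derivative definition is singular."""
    if abs(x) < 1e-12:
        return (f(x + h) - f(x - h)) / (2 * h)
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_gradient_descent(f, x0, q=0.9, lr=0.1, iters=200):
    """Plain descent iteration with the classical derivative replaced by D_q f."""
    x = x0
    for _ in range(iters):
        x -= lr * q_derivative(f, x, q)
    return x

# Minimize f(x) = (x - 3)^2. Note the iteration converges to the zero of the
# q-derivative, x = 6/(1+q), which tends to the true minimizer 3 as q -> 1.
x_min = q_gradient_descent(lambda x: (x - 3.0) ** 2, x0=0.5)
```

For this quadratic, D_q f(x) = (1+q)x - 6 exactly, so the sketch makes the difference between the q-stationary point and the classical minimizer easy to see.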
Procedia PDF Downloads 396
24661 Education Function of Botanical Gardens
Authors: Ruhugül Özge Ocak, Banu Öztürk Kurtaslan
Abstract:
Botanical gardens are significant organizations which protect the environment against increasing environmental problems, provide environmental education, offer recreation possibilities, and more. This article describes botanical gardens and their functions. The most important function of a botanical garden is to provide environmental education and improve environmental awareness. Considering this function, some botanical gardens were examined and opinions on the subject were offered.
Keywords: botanical garden, environment, environmental education, recreation
Procedia PDF Downloads 527
24660 Experimental Modelling Gear Contact with TE77 Energy Pulse Setup
Authors: Zainab Mohammed Shukur, Najlaa Ali Alboshmina, Ali Safa Alsaegh
Abstract:
This project investigated the tribological behavior of polyether ether ketone (PEEK 1000) against PEEK 1000 in a rolling-sliding (non-conformal) configuration with a slip ratio of 83.3%, tested on a TE77 wear-mechanism and friction-coefficient test rig. Under marginal lubrication and in the absence of a full film, a load of 100 N was used to simulate a gear torque of 7 N·m. The friction coefficient and wear mechanisms of PEEK were studied under reciprocating roll/slide conditions with water, ethylene glycol, silicone, and base oil as lubricants. Tribological tests were conducted on a TE77 high-frequency tribometer with a disc-on-plate slide/roll (energy pulse criterion) configuration. An Alicona G5 optical 3D micro-coordinate measurement microscope was used to examine the surface topography and wear mechanisms. Surface roughness had a significant effect on the friction coefficient and on the wear mechanisms in the PEEK/PEEK rolling-sliding contact test with ethylene glycol. With silicone, ethylene glycol, or oil as the lubricant, the steady-state friction coefficient was reached faster than with the other lubricants. The results describe the effect of film thickness at a slip ratio of 83.3% on the tribological performance.
Keywords: polymer, rolling-sliding, energy pulse, gear contact
Procedia PDF Downloads 140
24659 Generative Adversarial Network for Bidirectional Mappings between Retinal Fundus Images and Vessel Segmented Images
Authors: Haoqi Gao, Koichi Ogawara
Abstract:
Retinal vascular segmentation of color fundus images is the basis of ophthalmic computer-aided diagnosis and large-scale disease screening systems, and early screening of fundus diseases has great value for clinical diagnosis. Traditional methods depend on the experience of the doctor and are time-consuming, labor-intensive, and inefficient. Furthermore, medical images are scarce and fraught with legal concerns regarding patient privacy. In this paper, we propose a new generative adversarial network based on CycleGAN for retinal fundus images. The method generates not only synthetic fundus images but also the corresponding segmentation masks, which has application value and poses challenges in computer vision and computer graphics. We evaluate the proposed method both quantitatively and qualitatively. For generated segmented images, our method achieves a Dice coefficient of 0.81 and a PR of 0.89 on the DRIVE dataset. For generated synthetic fundus images, we use a "toy experiment" to verify the state-of-the-art performance of our method.
Keywords: retinal vascular segmentation, generative adversarial network, CycleGAN, fundus images
Procedia PDF Downloads 142
24658 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance
Authors: Loai AbdAllah, Mahmoud Kaiyal
Abstract:
Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with it, most of which replace a missing value with a fixed value computed from the observed values. In our work, we used a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while if one of them is missing, the distance is computed from the distribution of the known values for the coordinate containing the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve prevention of chronic diseases such as diabetes and cancer. For Wikaya's recommendation system to work, distances between users must be measured; since the collected data contain missing values, a distance function between incomplete user profiles is needed. To evaluate how accurately the proposed distance reflects the actual similarity between objects when some of them contain missing values, we integrated it within the framework of the k-nearest-neighbors (kNN) classifier, whose computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
Keywords: missing values, incomplete data, distance, incomplete diabetes data
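The idea of a missing-aware distance plugged into kNN can be sketched as follows. This is a simplified stand-in for the paper's method: the known-vs-known term here is a plain squared difference rather than the Mahalanobis distance, and the known-vs-missing term is an expected squared distance over the observed values of that coordinate; all function names and the treatment of the both-missing case are illustrative assumptions.

```python
import numpy as np

def missing_aware_distance(a, b, col_values):
    """Coordinate-wise distance between two vectors that may contain NaNs.
    Known vs. known: squared difference (a stand-in for the Mahalanobis term).
    Known vs. missing: expected squared distance to the empirical distribution
    of that coordinate. Missing vs. missing: twice the column variance."""
    d = 0.0
    for j in range(len(a)):
        aj, bj = a[j], b[j]
        if not np.isnan(aj) and not np.isnan(bj):
            d += (aj - bj) ** 2
        elif np.isnan(aj) and np.isnan(bj):
            d += 2 * np.var(col_values[j])
        else:
            known = aj if not np.isnan(aj) else bj
            d += np.mean((col_values[j] - known) ** 2)  # expectation over observed values
    return np.sqrt(d)

def knn_predict(X, y, query, k=3):
    """Majority-vote kNN using the missing-aware distance."""
    cols = [X[~np.isnan(X[:, j]), j] for j in range(X.shape[1])]
    dists = [missing_aware_distance(query, row, cols) for row in X]
    nearest = np.argsort(dists)[:k]
    return np.bincount(y[nearest]).argmax()
```

A query with a NaN coordinate is still ranked sensibly against complete rows, since the missing coordinate contributes an averaged rather than an arbitrary imputed term.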
Procedia PDF Downloads 224
24657 Numerical Investigation of the Jacketing Method of Reinforced Concrete Column
Authors: S. Boukais, A. Nekmouche, N. Khelil, A. Kezmane
Abstract:
The first aim of this study is to develop a finite element model that correctly predicts the behavior of a reinforced concrete column. The second aim is to use this model to investigate and evaluate the effect of strengthening by jacketing of the reinforced concrete column, considering different interface contacts between the old and the new concrete. Four models were evaluated: one assuming perfect contact, and three using friction coefficients of 0.1, 0.3, and 0.5. The simulation was carried out using the Abaqus software. The results show that the jacketing reinforcement led to a significant increase in the global performance of the simulated reinforced concrete column.
Keywords: strengthening, jacketing, reinforced concrete column, Abaqus, simulation
Procedia PDF Downloads 144
24656 Feasibility Study of Wind Energy Potential in Turkey: Case Study of Catalca District in Istanbul
Authors: Mohammed Wadi, Bedri Kekezoglu, Mustafa Baysal, Mehmet Rida Tur, Abdulfetah Shobole
Abstract:
This paper presents a technical evaluation of wind potential for present and future investments in Turkey, taking into account the feasibility of sites, installation, operation, and maintenance. The evaluation is based on hourly wind speed data measured at 30 m height over the three years 2008-2010 for the Çatalca district. The data, obtained from the national meteorology station in Istanbul, Republic of Turkey, are analyzed to evaluate the feasibility of wind power and to guide the selection of suitable wind turbines for the area of interest. Furthermore, the data are extrapolated to 60 m and 80 m, accounting for the variability of the roughness factor. The two-parameter Weibull probability function is used to approximate monthly and annual wind potential and power density using three estimation methods: the approximated method, the graphical method, and the energy pattern factor method. The annual mean wind power densities were found to be 400.31, 540.08, and 611.02 W/m² at 30, 60, and 80 m heights, respectively. Simulation results show that the analyzed area is an appropriate place for constructing large-scale wind farms.
Keywords: wind potential in Turkey, Weibull bi-parameter probability function, approximated method, graphical method, energy pattern factor method, capacity factor
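Of the three estimation methods named above, the energy pattern factor method is easy to sketch. The formulas below (Epf = mean(v³)/mean(v)³, k = 1 + 3.69/Epf², c = mean(v)/Γ(1+1/k), and mean power density ½ρc³Γ(1+3/k)) are the commonly cited forms of this method, not expressions taken from the paper, and the air density default is an assumption.

```python
import numpy as np
from math import gamma

def weibull_energy_pattern(v, rho=1.225):
    """Estimate Weibull shape k and scale c (m/s) from wind-speed samples via
    the energy pattern factor method, then the mean wind power density (W/m^2)."""
    v = np.asarray(v, dtype=float)
    epf = np.mean(v ** 3) / np.mean(v) ** 3        # energy pattern factor
    k = 1.0 + 3.69 / epf ** 2                      # Weibull shape parameter
    c = np.mean(v) / gamma(1.0 + 1.0 / k)          # Weibull scale parameter
    p = 0.5 * rho * c ** 3 * gamma(1.0 + 3.0 / k)  # mean wind power density
    return k, c, p
```

Applied to an hourly wind-speed series, this returns the two Weibull parameters and the power density figure of the kind reported in the abstract.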
Procedia PDF Downloads 257
24655 Characterizing Aquifer Layers of Karstic Springs in Nahavand Plain Using Geoelectrical and Electromagnetic Methods
Authors: A. Taheri Tizro, Rojin Fasihi
Abstract:
The geoelectrical method is one of the most effective tools for determining subsurface lithological layers. The electromagnetic method is a newer method that can also play an important role in determining and separating subsurface layers with acceptable accuracy. In the present research, 10 electromagnetic soundings were collected upstream of 5 karstic springs (Famaseb, Faresban, Ghale Baroodab, Gian, and Gonbad Kabood) in the Nahavand plain of Hamadan province. From these data, electromagnetic logs were prepared at different depths and compared with 5 logs from the geoelectrical method. The comparison showed that the NRMSE values of the geoelectrical method for the 5 springs were 7.11, 7.50, 44.93, 3.99, and 2.99, respectively, while for the electromagnetic method the values were about 1.4, 1.1, 1.2, 1.5, and 1.3, respectively. Besides the similarity of the results of the two methods, the accuracy of the electromagnetic method, judged by the NRMSE values, is higher than that of the geoelectrical method; the electromagnetic method is also less time-consuming and less costly. The depth to the water table is the final result of this work: for the springs of Famaseb, Faresban, Ghale Baroodab, Gian, and Gonbad Kabood the depths are about 6, 20, 10, 2, and 36 meters, respectively. The maximum thickness of the aquifer layer was estimated at the Gonbad Kabood spring (36 meters) and the lowest at the Gian spring (2 meters). These results can be used to identify the water potential of the region for better management of water resources.
Keywords: karst spring, geoelectric, aquifer layers, Nahavand
Procedia PDF Downloads 68
24654 Practice and Understanding of Fracturing Renovation for Risk Exploration Wells in Xujiahe Formation Tight Sandstone Gas Reservoir
Authors: Fengxia Li, Lufeng Zhang, Haibo Wang
Abstract:
The tight sandstone gas reservoir in the Xujiahe Formation of the Sichuan Basin has huge reserves, but its utilization rate is low. Fracturing and stimulation are indispensable technologies to unlock its potential and achieve commercial exploitation. Slickwater is the most widely used fracturing fluid system in the stimulation of tight reservoirs. However, its viscosity is low, its sand-carrying performance is poor, and the risk of sand blockage is high; increasing the sand-carrying capacity by increasing the displacement increases the frictional resistance of the pipe string, degrading the drag reduction performance. Variable viscosity slickwater can switch flexibly between different viscosities in real time online, effectively overcoming these sand-carrying and drag-reduction problems. Based on a self-developed indoor loop friction testing system, a visualization device for proppant transport, and a HAAKE MARS III rheometer, a comprehensive evaluation was conducted of the drag reduction, rheology, and sand-carrying performance of variable viscosity slickwater. The indoor experimental results show that: (1) by changing the concentration of drag-reducing agents, the viscosity of the slickwater can be varied between 2 and 30 mPa·s; (2) the drag reduction rate of the variable viscosity slickwater is above 80%, and shear does not reduce the drag reduction rate of the liquid; under indoor experimental conditions, 15 mPa·s variable viscosity slickwater can achieve effective carrying and uniform placement of proppant. The staged fracturing of the JiangX well in the tight sandstone of the Xujiahe Formation shows that the drag reduction rate of the variable viscosity slickwater is 80.42%, and the daily production of the single layer after fracturing exceeds 50,000 cubic meters.
This study provides theoretical support and field experience for promoting the application of variable viscosity slickwater in tight sandstone gas reservoirs.
Keywords: slickwater, hydraulic fracturing, dynamic sand laying, drag reduction rate, rheological properties
Procedia PDF Downloads 73
24653 Method of Successive Approximations for Modeling of Distributed Systems
Authors: A. Torokhti
Abstract:
A new method of mathematical modeling of a distributed nonlinear system is developed. The system is represented by a combination of a set of spatially distributed sensors and a fusion center. Its mathematical model is obtained from an iterative procedure that converges to the model which is optimal in the sense of minimizing an associated cost function.
Keywords: mathematical modeling, non-linear system, spatially distributed sensors, fusion center
Procedia PDF Downloads 379
24652 Residual Life Estimation Based on Multi-Phase Nonlinear Wiener Process
Authors: Hao Chen, Bo Guo, Ping Jiang
Abstract:
Residual life (RL) estimation based on a multi-phase nonlinear Wiener process is studied in this paper, which is significant for complicated products with small samples. First, the nonlinear Wiener model with random parameters is introduced, and a multi-phase nonlinear Wiener model is proposed for products whose degradation processes are nonlinear and separated into different phases. Then the multi-phase RL probability density function based on the presented model is derived approximately in closed form, and the parameters are estimated by maximum likelihood estimation (MLE). Finally, the method is applied to estimate the RL of a high-voltage pulse capacitor. Compared with three other models by log-likelihood function (Log-LF) and the Akaike information criterion (AIC), the results show that the proposed degradation model captures the degradation of high-voltage pulse capacitors better and provides a more reliable result.
Keywords: multi-phase nonlinear Wiener process, residual life estimation, maximum likelihood estimation, high-voltage pulse capacitor
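The single-phase linear building block of such models has closed-form MLEs, which this sketch illustrates (it omits the nonlinear transformation, random effects, and phase changes of the paper's model; all names and parameter values are illustrative).

```python
import numpy as np

def simulate_wiener(mu, sigma, dt, n, rng):
    """Degradation path X(t) = mu*t + sigma*B(t), sampled on a uniform grid."""
    inc = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return np.concatenate([[0.0], np.cumsum(inc)])

def wiener_mle(x, dt):
    """Closed-form MLE of drift and diffusion from one observed path:
    increments are i.i.d. N(mu*dt, sigma^2*dt)."""
    dx = np.diff(x)
    mu_hat = dx.mean() / dt
    sigma2_hat = np.mean((dx - mu_hat * dt) ** 2) / dt
    return mu_hat, np.sqrt(sigma2_hat)
```

For a failure threshold D and current level x with positive drift, (D - x) / mu_hat gives a rough mean residual life in this linear case; the paper's multi-phase density refines this considerably.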
Procedia PDF Downloads 452
24651 Integral Form Solutions of the Linearized Navier-Stokes Equations without Deviatoric Stress Tensor Term in the Forward Modeling for FWI
Authors: Anyeres N. Atehortua Jimenez, J. David Lambraño, Juan Carlos Muñoz
Abstract:
The Navier-Stokes equations (NSE), which describe the dynamics of a fluid, have an important application to modeling the waves used in data inversion techniques such as full waveform inversion (FWI). In this work a linearized version of the NSE and its variables, neglecting the deviatoric terms of the stress tensor, is presented. In order to obtain a theoretical model of the pressure p(x,t) and wave velocity profile c(x,t), a wave equation for a visco-acoustic medium (VAE) is written. A change of variables p(x,t)=q(x,t)h(ρ) is made in the VAE equation, leading to the well-known Klein-Gordon equation (KGE) describing waves propagating in a variable-density medium (ρ) with dispersive term α²(x). The KGE is reduced to a Poisson equation and solved by proposing a specific function for α²(x) accounting for energy dissipation and dispersion. Finally, an integral form solution is derived for p(x,t), c(x,t), and kinematic variables such as the particle velocity v(x,t), displacement u(x,t), and bulk modulus function k_b(x,t). Further, this visco-acoustic formulation is compared with another form broadly used in geophysics; it is argued that this formalism is more general and, given its integral form, may offer several advantages from the modern parallel computing point of view. Applications to minimizing modeling errors in FWI applied to oil resources in geophysics are discussed.
Keywords: Navier-Stokes equations, modeling, visco-acoustic, inversion FWI
Procedia PDF Downloads 518
24650 Stability Indicating Method Development and Validation for Estimation of Antiasthmatic Drug in Combined Dosage Form by RP-HPLC
Authors: Laxman H. Surwase, Lalit V. Sonawane, Bhagwat N. Poul
Abstract:
A simple stability-indicating high performance liquid chromatographic method has been developed for the simultaneous determination of Levosalbutamol Sulphate and Ipratropium Bromide in bulk and pharmaceutical dosage form using a reverse-phase Zorbax Eclipse Plus C8 column (250 mm × 4.6 mm), with a mobile phase of phosphate buffer (0.05 M KH2PO4):acetonitrile (55:45 v/v), pH 3.5 adjusted with ortho-phosphoric acid; the flow rate was 1.0 mL/min and detection was carried out at 212 nm. The retention times of Levosalbutamol Sulphate and Ipratropium Bromide were 2.2007 and 2.6611 min, respectively, and the correlation coefficients were 0.997 and 0.998. Calibration plots were linear over the concentration range 10-100 µg/mL for both drugs. The LOD and LOQ of Levosalbutamol Sulphate were 2.520 µg/mL and 7.638 µg/mL, while those of Ipratropium Bromide were 1.201 µg/mL and 3.640 µg/mL. The accuracy of the proposed method was determined by recovery studies and found to be 100.15% for Levosalbutamol Sulphate and 100.19% for Ipratropium Bromide. The method was validated for accuracy, linearity, sensitivity, precision, robustness, and system suitability, and could be utilized for routine analysis of Levosalbutamol Sulphate and Ipratropium Bromide in bulk and pharmaceutical capsule dosage form.
Keywords: levosalbutamol sulphate, ipratropium bromide, RP-HPLC, phosphate buffer, acetonitrile
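LOD and LOQ figures like those above are commonly computed from the calibration regression. This sketch assumes the standard ICH-style formulas (LOD = 3.3·sd/slope, LOQ = 10·sd/slope, with sd the residual standard deviation); the abstract does not state which method the authors used, and the example data are synthetic.

```python
import numpy as np

def lod_loq(conc, response):
    """ICH-style detection/quantitation limits from a linear calibration:
    LOD = 3.3*sd/slope, LOQ = 10*sd/slope, where sd is the residual
    standard deviation of the regression (n - 2 degrees of freedom)."""
    conc = np.asarray(conc, float)
    response = np.asarray(response, float)
    slope, intercept = np.polyfit(conc, response, 1)
    resid = response - (slope * conc + intercept)
    sd = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))
    return 3.3 * sd / slope, 10.0 * sd / slope
```

By construction LOQ/LOD = 10/3.3 ≈ 3.03, which matches the roughly 3:1 ratios of the reported values.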
Procedia PDF Downloads 349
24649 Group Consensus of Hesitant Fuzzy Linguistic Variables for Decision-Making Problem
Authors: Chen T. Chen, Hui L. Cheng
Abstract:
Due to their different knowledge, experience, and expertise, experts usually provide different opinions in the group decision-making process. It is therefore an important issue to reach group consensus among the opinions of experts in the group multiple-criteria decision-making (GMCDM) process. Because the subjective opinions of experts always involve fuzziness and uncertainty, it is difficult to use crisp values to describe their real opinions; it is more reasonable for experts to use linguistic variables to express them. Hesitant fuzzy sets extend the concept of fuzzy sets, and with them experts can flexibly describe their subjective opinions. In order to aggregate the hesitant fuzzy linguistic variables of all experts effectively, an adjustment method based on a distance function is presented in this paper. Based on this opinion-adjustment method, an effective approach is presented to adjust the hesitant fuzzy linguistic variables of all experts to reach group consensus. Then, a new hesitant linguistic GMCDM method is presented based on the group consensus of hesitant fuzzy linguistic variables. Finally, an example is implemented to illustrate the computational process and enhance the practical value of the proposed model.
Keywords: group multi-criteria decision-making, linguistic variables, hesitant fuzzy linguistic variables, distance function, group consensus
Procedia PDF Downloads 154
24648 Temperature Coefficients of the Refractive Index for Ge Film
Authors: Lingmao Xu, Hui Zhou
Abstract:
Ge film is widely used in infrared optical systems. Because of the special requirements of space applications, it is usually used at low temperature. The refractive index of Ge film changes with temperature, which has a great effect on the manufacture of high-precision infrared optical films. Specimens of Ge single film were deposited on ZnSe substrates by the EB-PVD method. Over the temperature range 80 K to 300 K, the transmittance of the Ge single film within 2-15 μm was measured every 20 K with a PerkinElmer FTIR cryogenic testing system. By full-spectrum inversion fitting, the relationship between refractive index and wavelength within 2-12 μm at different temperatures was obtained; it is consistent with the Cauchy formula and can be fitted by it. The relationship between the refractive index of the Ge film and temperature/wavelength was then obtained by fitting based on the Cauchy formula. Finally, the values predicted by the formula were compared with the measured spectra to verify the accuracy of the formula.
Keywords: infrared optical film, low temperature, thermal refractive coefficient, Ge film
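The Cauchy fit mentioned above is linear in the coefficients, so it reduces to ordinary least squares. This sketch assumes the three-term form n(λ) = A + B/λ² + C/λ⁴ (the abstract does not say how many terms were kept), with wavelengths in µm and purely illustrative coefficient values.

```python
import numpy as np

def fit_cauchy(wavelength_um, n):
    """Least-squares fit of the Cauchy equation n(λ) = A + B/λ² + C/λ⁴.
    Linear in (A, B, C), so a single lstsq call suffices."""
    lam = np.asarray(wavelength_um, float)
    X = np.column_stack([np.ones_like(lam), lam ** -2, lam ** -4])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(n, float), rcond=None)
    return coeffs  # A (dimensionless), B (µm²), C (µm⁴)
```

Fitting (A, B, C) separately at each measurement temperature, as the abstract describes, then gives the coefficients' temperature dependence.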
Procedia PDF Downloads 295
24647 Variable Mapping: From Bibliometrics to Implications
Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek
Abstract:
Literature review is indispensable in research, and one of its key techniques is bibliometric analysis, of which science mapping is one method. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations; it is also used in reviews of the marketing literature. The development of technology means that researchers and practitioners use software available on the market for this purpose. The use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications supports the implementation of literature reviews and is useful in areas with a relatively high number of publications. Although this well-grounded science mapping approach has been applied in literature reviews, performing them is a painstaking task, especially if authors would like to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, improves on the classic approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of 5 articles on customer ideation was chosen. Keyword mapping in the VOSviewer science mapping software was then performed and compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed the usability of both approaches for formulating the research problem and for content/thematic analysis. The results show the advantage of variable mapping in both respects.
First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between variables, not only keywords. Second, the analysis of relationships between variables enables the creation of a story indicating the directions of those relationships. Demonstrating the advantage of the new approach over the classic one may be a significant step towards a new approach to the synthesis of literature and its reviews. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, software enabling the automation of the variable mapping process on large data sets could be a breakthrough in conducting literature research.
Keywords: bibliometrics, literature review, science mapping, variable mapping
Procedia PDF Downloads 119
24646 An Investigation on Designing and Enhancing the Performance of H-Darrieus Wind Turbine of 10 kW at the Medium Range of Wind Speed in Vietnam
Authors: Ich Long Ngo, Dinh Tai Dang, Ngoc Tu Nguyen, Minh Duc Nguyen
Abstract:
This paper describes an investigation on designing and enhancing the performance of a 10 kW H-Darrieus wind turbine (HDWT) at medium wind speed. The aerodynamic characteristics of this turbine were investigated by both theoretical and numerical approaches. An optimal design procedure is first proposed to enhance the power coefficient under various effects, such as airfoil type, number of blades, solidity, aspect ratio, and tip speed ratio. As a result, the overall design of the 10 kW HDWT was achieved, and the power characteristic of the turbine was found by the numerical approach. The maximum predicted power coefficient is up to 0.41 at a tip speed ratio of 3.7 and a wind speed of 8 m/s. In particular, a generalized correlation of the power coefficient with tip speed ratio and wind speed is proposed for the first time. These results are very useful for enhancing the performance of HDWTs placed in a country with high wind power potential like Vietnam.
Keywords: computational fluid dynamics, double multiple stream tube, H-Darrieus wind turbine, renewable energy
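The two quantities the abstract reports on can be stated in two one-line formulas: the power coefficient Cp = P / (½ρAv³) and the tip speed ratio λ = ωR/v. The swept area and rotor radius used in the example below are illustrative back-of-the-envelope values, not figures from the paper.

```python
def power_coefficient(power_w, rho, area_m2, v):
    """Cp = P / (0.5 * rho * A * v^3): fraction of available wind power captured."""
    return power_w / (0.5 * rho * area_m2 * v ** 3)

def tip_speed_ratio(omega_rad_s, radius_m, v):
    """Tip speed ratio λ = ω R / v."""
    return omega_rad_s * radius_m / v

# Illustrative check: a 10 kW output at 8 m/s with Cp ≈ 0.41 implies a swept
# area of roughly 78 m² (assumed value, sea-level air density 1.225 kg/m³).
cp = power_coefficient(10000.0, 1.225, 77.8, 8.0)
```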
Procedia PDF Downloads 116
24645 Image Reconstruction Method Based on L0 Norm
Authors: Jianhong Xiang, Hao Xiang, Linyu Wang
Abstract:
Compressed sensing (CS) has a wide range of applications in sparse signal reconstruction. Aiming at the low recovery accuracy and long reconstruction time of existing reconstruction algorithms in medical imaging, this paper proposes a corrected smoothed L0 algorithm based on compressed sensing (CSL0). First, an approximate hyperbolic tangent function (AHTF) that is closer to the L0 norm is proposed to approximate it. Second, in view of the "sawtooth phenomenon" of the steepest descent method and the sensitivity of the modified Newton method to the choice of initial value, the steepest descent method and the modified Newton method are jointly optimized to improve the reconstruction accuracy. Finally, the CSL0 algorithm is simulated on various images. The results show that the proposed algorithm improves the reconstruction accuracy of the test images by 0-0.98 dB.
Keywords: smoothed L0, compressed sensing, image processing, sparse reconstruction
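The smoothed-L0 family the abstract builds on can be sketched in a few lines. This is not the paper's CSL0 algorithm: a plain tanh surrogate stands in for the AHTF, the joint steepest-descent/Newton correction is replaced by simple gradient-projection steps, and the sigma schedule and step size are illustrative guesses.

```python
import numpy as np

def sl0_tanh(A, y, sigma_min=1e-4, sigma_decay=0.6, inner=10, mu=1.0):
    """Smoothed-L0 sketch: approximate ||x||_0 by sum_i tanh(x_i^2 / (2 sigma^2))
    and minimize it subject to A x = y, gradually shrinking sigma (graduated
    non-convexity). Small coefficients are pulled toward zero; large ones are
    left nearly untouched."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                              # minimum-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            u = x ** 2 / (2.0 * sigma ** 2)
            x = x - mu * x / np.cosh(u) ** 2    # sigma^2-scaled surrogate gradient
            x = x - A_pinv @ (A @ x - y)        # project back onto {x : A x = y}
        sigma *= sigma_decay
    return x
```

The final projection keeps the iterate exactly feasible, so the residual A x - y is zero to machine precision regardless of how well the sparse support was recovered.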
Procedia PDF Downloads 113
24644 Data Driven Infrastructure Planning for Offshore Wind Farms
Authors: Isha Saxena, Behzad Kazemtabrizi, Matthias C. M. Troffaes, Christopher Crabtree
Abstract:
The calculations done at the beginning of a wind farm's life are rarely reliable, which makes it important to study the failure and repair rates of wind turbines under various conditions. The miscalculation arises because current models make the simplifying assumption that the failure/repair rate remains constant over time, i.e., that the reliability function is exponential. This research aims to create a more accurate model using sensory data and a data-driven approach. Data cleaning and processing are done by comparing the power curve data of the wind turbines with SCADA data, which are then converted into times-to-repair and times-to-failure time series. Several mathematical functions are fitted to the times-to-failure and times-to-repair data of the wind turbine components using maximum likelihood estimation and the posterior expectation method for Bayesian parameter estimation. Initial results indicate that the two-parameter Weibull function and the exponential function produce almost identical results. Further analysis is being done using complex system analysis, considering the failures of each electrical and mechanical component of the wind turbine. The aim of this project is a more accurate reliability analysis that can help engineers schedule maintenance and repairs to decrease turbine downtime.
Keywords: reliability, Bayesian parameter inference, maximum likelihood estimation, Weibull function, SCADA data
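The MLE side of the comparison above can be sketched with SciPy. The data here are synthetic (real times-to-failure would come from the SCADA processing the abstract describes), and pitting a two-parameter Weibull against the constant-rate exponential via log-likelihood mirrors, but does not reproduce, the study's comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic times-to-failure in hours (illustrative: Weibull shape 2, scale 1500).
ttf = 1500.0 * rng.weibull(2.0, size=2000)

# MLE fit of a two-parameter Weibull, location fixed at zero as usual for lifetimes.
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)

# Log-likelihood comparison against the constant-failure-rate (exponential) model.
ll_weibull = np.sum(stats.weibull_min.logpdf(ttf, shape, loc, scale))
ll_expon = np.sum(stats.expon.logpdf(ttf, scale=ttf.mean()))
```

A shape parameter near 1 would indicate the exponential assumption is adequate; a shape well above 1, as here, indicates wear-out behavior that the constant-rate model misses.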
Procedia PDF Downloads 86
24643 Trace Analysis of Genotoxic Impurity Pyridine in Sitagliptin Drug Material Using UHPLC-MS
Authors: Bashar Al-Sabti, Jehad Harbali
Abstract:
Background: Pyridine is a reactive base that might be used in preparing sitagliptin. The International Agency for Research on Cancer classifies pyridine in Group 2B, meaning it is possibly carcinogenic to humans. Therefore, pyridine should be monitored at the allowed limit in sitagliptin pharmaceutical ingredients. Objective: The aim of this study was to develop a novel ultra-high-performance liquid chromatography-mass spectrometry (UHPLC-MS) method to quantify the pyridine impurity in sitagliptin pharmaceutical ingredients. Methods: The separation was performed on a Shim-pack C8 column (150 mm × 4.6 mm, 5 µm) in reversed-phase mode using a water-methanol-acetonitrile mobile phase containing 4 mM ammonium acetate in gradient mode. Pyridine was detected by mass spectrometry using selected-ion monitoring at m/z = 80. The flow rate was 0.75 mL/min. Results: The method showed excellent sensitivity, with a quantitation limit of 1.5 ppm of pyridine relative to sitagliptin. Linearity was excellent over the range 1.5-22.5 ppm, with a correlation coefficient of 0.9996. Recoveries were between 93.59% and 103.55%. Conclusions: The results showed good linearity, precision, accuracy, sensitivity, selectivity, and robustness. The method was applied to test three batches of sitagliptin raw material. Highlights: This method is useful for monitoring pyridine in sitagliptin during its synthesis and for testing sitagliptin raw materials before they are used in the production of pharmaceutical products. Keywords: genotoxic impurity, pyridine, sitagliptin, UHPLC-MS
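The linearity and detection-limit figures quoted above come from an ordinary calibration line. The sketch below shows the common ICH-style estimates LOD = 3.3·s/slope and LOQ = 10·s/slope, where s is the residual standard deviation; the calibration points are illustrative, not the study's data:

```python
def linear_fit(x, y):
    # Ordinary least squares for a calibration line y = slope*x + intercept,
    # returning the residual standard deviation as well.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    residual_sd = (sum((yi - (slope * xi + intercept)) ** 2
                       for xi, yi in zip(x, y)) / (n - 2)) ** 0.5
    return slope, intercept, residual_sd

# Hypothetical pyridine calibration points (ppm vs. peak area) spanning the
# 1.5-22.5 ppm range reported in the abstract.
ppm = [1.5, 4.5, 7.5, 15.0, 22.5]
area = [310, 905, 1510, 2990, 4490]

slope, intercept, sd = linear_fit(ppm, area)
lod = 3.3 * sd / slope   # limit of detection, in ppm
loq = 10.0 * sd / slope  # limit of quantification, in ppm
```

The study's reported quantitation limit of 1.5 ppm would correspond to the LOQ computed this way (or to a signal-to-noise criterion, depending on the validation protocol used).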
Procedia PDF Downloads 93
24642 Designing of Nano-Materials for Waste Heat Conversion into Electrical Energy: Thermoelectric Generator
Authors: Wiqar Hussain Shah
Abstract:
The electrical and thermal properties of doped thallium telluride (Tl10Te6) chalcogenide nanoparticles are mainly characterized by a competition between the metallic (hole-doped) and semiconducting states. We have studied the effects of Sn doping on the electrical and thermoelectric properties of Tl10-xSnxTe6 (1.00 ≤ x ≤ 2.00) nanoparticles, prepared by solid-state reactions in sealed silica tubes followed by ball milling. Structurally, all these compounds were found to be phase pure, as confirmed by X-ray diffraction (XRD) and energy-dispersive X-ray spectroscopy (EDS) analysis; crystal-structure data were additionally used to model the results and support the findings. The particle size was calculated from the XRD data by Scherrer's formula, and EDS provided the elemental analysis, giving the percentage of each element present in the system. The thermopower, or Seebeck coefficient (S), was measured for all these compounds; S increases with increasing temperature from 295 to 550 K and is positive over the whole temperature range, showing p-type semiconductor characteristics. Four-probe resistivity measurements revealed that the electrical conductivity decreases with increasing temperature and also with increasing Sn concentration, while the Seebeck coefficient shows the opposite trend, increasing with temperature. This behavior leads to a power factor that increases with increasing temperature and Sn concentration; Tl8Sn2Te6 has the lowest electrical conductivity and hence a lower power factor, but its power factor still increases steadily with temperature. Keywords: Sn doping in thallium telluride nano-materials, electron-hole competition, Seebeck coefficient, effects of Sn doping on electrical conductivity, effects on power factor
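The trade-off described above, where a rising Seebeck coefficient outweighs a falling conductivity, follows from the definition of the thermoelectric power factor PF = S²σ. A minimal sketch with illustrative (not measured) values:

```python
# Thermoelectric power factor PF = S^2 * sigma, with S the Seebeck
# coefficient in V/K and sigma the electrical conductivity in S/m.
def power_factor(seebeck_v_per_k, conductivity_s_per_m):
    # PF in W m^-1 K^-2; because S enters quadratically, a larger S can
    # outweigh a lower sigma, matching the trend described in the abstract.
    return seebeck_v_per_k ** 2 * conductivity_s_per_m

# Two hypothetical samples: higher S at lower sigma can still win.
pf_a = power_factor(120e-6, 5.0e4)  # S = 120 uV/K, sigma = 5e4 S/m
pf_b = power_factor(80e-6, 9.0e4)   # S = 80 uV/K,  sigma = 9e4 S/m
```

Here sample A has 44% lower conductivity than sample B yet a 25% higher power factor, which is the same qualitative effect reported for the Sn-doped series.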
Procedia PDF Downloads 43
24641 Cluster Analysis of Students’ Learning Satisfaction
Authors: Purevdolgor Luvsantseren, Ajnai Luvsan-Ish, Oyuntsetseg Sandag, Javzmaa Tsend, Akhit Tileubai, Baasandorj Chilhaasuren, Jargalbat Puntsagdash, Galbadrakh Chuluunbaatar
Abstract:
One of the indicators of the quality of university services is student satisfaction. Aim: We aimed to study the satisfaction of first-year premedical students with the Medical Physics course using cluster analysis. Materials and Methods: A questionnaire was collected from a total of 324 students who took the Medical Physics course in the first year of the premedical program at the Mongolian National University of Medical Sciences. Satisfaction was recorded on five levels: "excellent", "good", "medium", "bad", and "very bad". The questionnaire comprised 39 items: 8 for course evaluation, 19 for teacher evaluation, and 12 for student self-evaluation. From the responses, a database with 39 fields and 324 records was created. Results: Cluster analysis was performed on this database in MATLAB and R using the k-means method of data mining. The Hopkins statistics calculated for the three sub-databases were 0.88, 0.87, and 0.97, indicating that the data are suitable for clustering. The course-evaluation sub-database divides into three clusters: cluster I has 150 objects (46.2%) with a "good" rating, cluster II has 119 objects (36.7%) with a "medium" rating, and cluster III has 54 objects (16.6%) with a "bad" rating. The teacher-evaluation sub-database also divides into three clusters: cluster II has 179 objects (55.2%) with a "good" rating, cluster III has 108 objects (33.3%) with a "medium" rating, and cluster I has 36 objects (11.1%) with an "excellent" rating. The student-evaluation sub-database divides into two clusters: cluster II has 215 objects (66.3%) and cluster I has 108 objects (33.3%), both rated "excellent".
Evaluating the resulting clusterings with the silhouette coefficient gives 0.32 for course evaluation, 0.31 for teacher evaluation, and 0.30 for student evaluation, showing statistical significance. Conclusion: In summary, the cluster analysis gave, for the course-evaluation model, "good" 46.2%, "medium" 36.7%, and "bad" 16.6%; for the teacher-evaluation model, "good" 55.2%, "medium" 33.3%, and "excellent" 11.1%; and for the student-evaluation model, two clusters of 66.3% and 33.3%. Keywords: questionnaire, data mining, k-means method, silhouette coefficient
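The k-means clustering and silhouette evaluation used above can be sketched compactly. The study clustered 39-field records in MATLAB and R; the one-dimensional toy below, with made-up satisfaction scores, is just enough to show the assign/update loop and the silhouette formula s = (b − a)/max(a, b):

```python
def kmeans_1d(values, k, iters=50):
    # Minimal 1-D k-means: initialize centers at spread-out quantiles, then
    # alternate nearest-center assignment and center (mean) update.
    srt = sorted(values)
    centers = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

def mean_silhouette(clusters):
    # Mean silhouette coefficient s = (b - a)/max(a, b) over all points,
    # with absolute difference as the 1-D distance.
    out = []
    for ci, c in enumerate(clusters):
        for v in c:
            a = sum(abs(v - w) for w in c) / max(len(c) - 1, 1)
            b = min(sum(abs(v - w) for w in other) / len(other)
                    for cj, other in enumerate(clusters) if cj != ci and other)
            out.append((b - a) / max(a, b))
    return sum(out) / len(out)

# Hypothetical satisfaction scores on a 1-5 scale with three visible groups.
scores = [1.0, 1.2, 1.1, 3.0, 3.1, 2.9, 4.8, 5.0, 4.9]
centers, clusters = kmeans_1d(scores, 3)
sil = mean_silhouette(clusters)
```

On well-separated toy data the silhouette is high; the study's values around 0.3 indicate real but much weaker cluster structure, which is typical of questionnaire data.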
Procedia PDF Downloads 48
24640 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have been targeted toward protein-coding regions alone, making it hard to gain full biological insight from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function, and a parameter vector was estimated for every sample among the 37,503 data points to reduce the generalization error and cost. Maximum likelihood estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the receiver operating characteristic (ROC) curve was determined. The generalization performance of PNRI was measured in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined on a benchmark of multi-species organisms.
The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over the first three iterations. The cost (the difference between the predicted and the actual outcome) likewise decreased from 1.446 to 0.842 and then to 0.718. Training terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC area of 0.97, indicating improved predictive ability, and identified protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. On 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identified protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes. Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
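The PNRI pipeline described above, logistic regression with a sigmoid activation plus a dynamically chosen classification threshold, can be sketched on toy data. The two features and labels below are made up; the real model uses six alignment-free features over 37,503 data points:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=400):
    # Logistic regression trained by stochastic gradient ascent on the
    # log-likelihood (the MLE objective the abstract describes).
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = yi - p  # gradient of the per-sample log-likelihood
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

# Toy alignment-free feature vectors; label 1 stands for protein-coding,
# 0 for non-coding. Values are illustrative only.
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.9], [0.2, 0.1], [0.1, 0.3], [0.3, 0.2]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)

probs = [sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) for xi in X]
# Dynamic thresholding: pick the cutoff that maximizes accuracy instead of
# a fixed 0.5, as a stand-in for the paper's thresholding step.
threshold = max((t / 100 for t in range(1, 100)),
                key=lambda t: sum((p >= t) == bool(lab)
                                  for p, lab in zip(probs, y)))
predictions = [int(p >= threshold) for p in probs]
```

In the real setting, the threshold would be tuned on held-out data (or via the ROC curve) rather than on the training set, to avoid optimistic bias.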
Procedia PDF Downloads 66
24639 Electrochemical Properties of Bimetallic Silver-Platinum Core-Shell Nanoparticles
Authors: Fredrick O. Okumu, Mangaka C. Matoetoe
Abstract:
Silver-platinum (Ag-Pt) bimetallic nanoparticles (NPs) with varying mole fractions (1:1, 1:3 and 3:1) were prepared by co-reduction of hexachloroplatinate and silver nitrate with sodium citrate. Upon successful formation of both the monometallic and the bimetallic (BM) core-shell nanoparticles, cyclic voltammetry (CV) was used to characterize them. Films drop-coated on the glassy carbon (GC) substrate showed the characteristic peaks of the monometallic Ag NPs (the Ag+/Ag0 redox couple) as well as of the Pt NPs (hydrogen adsorption and desorption peaks), and these characteristic peaks were confirmed in the bimetallic NP voltammograms. The currents of the BM NP ratios followed the trend GCE/Ag-Pt 1:3 > GCE/Ag-Pt 3:1 > GCE/Ag-Pt 1:1. Fundamental electrochemical properties that directly or indirectly affect the applicability of the films, such as the diffusion coefficient (D), electroactive surface coverage, electrochemical band gap, electron transfer coefficient (α), and charge (Q), were assessed using the Randles-Sevcik plot and Laviron's equations. High charge and surface coverage were observed for GCE/Ag-Pt 1:3, consistent with its enhanced current. GCE/Ag-Pt 3:1 showed a high diffusion coefficient, while GCE/Ag-Pt 1:1 possessed a high electron transfer coefficient, facilitated by its high apparent heterogeneous rate constant relative to the other BM NP ratios. The surface redox reaction was found to be adsorption-controlled for all modified GCEs. Since surface coverage is inversely proportional to particle size, the surface coverage data suggest that the Ag-Pt 1:1 NPs have a small particle size. Overall, GCE/Ag-Pt 1:3 exhibits the best electrochemical properties. Keywords: characterization, core-shell, electrochemical, nanoparticles
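The diffusion coefficients above come from the Randles-Sevcik relation, which at 25 °C reads i_p = 2.69×10⁵ n^(3/2) A C √D √v (i_p in A, A in cm², C in mol/cm³, v in V/s, D in cm²/s). A sketch of extracting D from the slope of an i_p vs. √v plot, with entirely illustrative numbers:

```python
# Rearranging Randles-Sevcik: D = (slope / (2.69e5 * n^1.5 * A * C))^2,
# where `slope` is the fitted slope of peak current vs. sqrt(scan rate).
def diffusion_coefficient(slope_ip_vs_sqrt_v, n, area_cm2, conc_mol_cm3):
    denom = 2.69e5 * n ** 1.5 * area_cm2 * conc_mol_cm3
    return (slope_ip_vs_sqrt_v / denom) ** 2  # cm^2/s

# Hypothetical film: one-electron process on a 0.071 cm^2 GC electrode,
# 1 mM bulk concentration (1e-6 mol/cm^3), fitted slope 5.0e-5 A/(V/s)^0.5.
D = diffusion_coefficient(5.0e-5, n=1, area_cm2=0.071, conc_mol_cm3=1.0e-6)
```

The same slope-based approach, with Laviron's equations in place of Randles-Sevcik, yields the electron transfer coefficient α and the apparent heterogeneous rate constant for the adsorption-controlled case reported here.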
Procedia PDF Downloads 267
24638 Wear Behavior of Grey Cast Iron Coated with Al2O3-13TiO2 and Ni20Cr Using Detonation Spray Process
Authors: Harjot Singh Gill, Neelkanth Grover, Jwala Parshad Singla
Abstract:
The main aim of this research work is to present the effect of coatings, deposited by the detonation spray method, on two different grades of grey cast iron. Ni20Cr and Al2O3-13TiO2 powders were sprayed using a detonation gun onto GI250 and GIHC substrates, and the surface morphology of the coatings was studied by XRD and SEM/EDAX analysis. The wear resistance of the Ni20Cr and Al2O3-13TiO2 coatings was investigated on a pin-on-disc tribometer according to the ASTM G99 standard. The cumulative wear rate and coefficient of friction (µ) were measured under three normal loads of 30 N, 40 N, and 50 N at a constant sliding velocity of 1 m/s, and the worn surfaces were analyzed by SEM/EDAX. The results show significantly better wear resistance for the Al2O3-13TiO2 coating compared with Ni20Cr and the bare substrates. The SEM/EDAX analysis and cumulative wear-loss bar charts clearly explain the wear behavior of the coated and bare samples of GI250 and GIHC. Keywords: detonation spray, grey cast iron, wear rate, coefficient of friction
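The quantities reported from the pin-on-disc test reduce to two simple formulas: an Archard-style specific wear rate (volume loss per unit load and sliding distance) and the friction coefficient as the ratio of tangential force to normal load. The numbers below are illustrative, not measured values from the study:

```python
def specific_wear_rate(mass_loss_g, density_g_cm3, load_n, distance_m):
    # Archard-style wear rate in mm^3/(N*m): convert mass loss to volume
    # loss, then normalize by normal load and sliding distance.
    volume_mm3 = mass_loss_g / density_g_cm3 * 1000.0
    return volume_mm3 / (load_n * distance_m)

def friction_coefficient(tangential_force_n, normal_load_n):
    # mu is the measured tangential (friction) force over the applied load.
    return tangential_force_n / normal_load_n

# Hypothetical run at the 40 N load step of the ASTM G99 test.
k = specific_wear_rate(mass_loss_g=0.012, density_g_cm3=7.2,
                       load_n=40.0, distance_m=1000.0)
mu = friction_coefficient(tangential_force_n=18.0, normal_load_n=40.0)
```

Comparing k across the 30/40/50 N load steps for the coated and bare pins is what produces the cumulative wear-loss bar charts the abstract refers to.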
Procedia PDF Downloads 365
24637 Mixed Integer Programming for Multi-Tier Rebate with Discontinuous Cost Function
Authors: Y. Long, L. Liu, K. V. Branin
Abstract:
One challenge faced by procurement decision-makers during the acquisition process is how to compare similar products from different suppliers and allocate orders among different products or services. This work focuses on allocating orders among multiple suppliers while considering rebates. The objective function is to minimize the total acquisition cost, including the purchasing cost and the rebate benefit. The rebate benefit is complex and difficult to estimate at the ordering step: rebate rules vary across suppliers and usually change over time. In this work, we developed a system to collect and standardize rebate policies and built two-stage optimization models for order allocation. Multi-tier rebate policies are considered in the modeling, and the discontinuous cost function of the rebate benefit is formulated for different scenarios. A piecewise-linear function is used to approximate this discontinuous cost function, and a mixed integer programming (MIP) model is built for the order allocation problem with multi-tier rebates. A case study shows that our optimization model can reduce the total acquisition cost by taking rebate rules into account. Keywords: discontinuous cost function, mixed integer programming, optimization, procurement, rebate
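The discontinuity the MIP has to handle can be seen in a toy all-units tiered rebate, where reaching a tier breakpoint applies the rebate to the whole order and the effective cost jumps downward. The tiers and prices below are hypothetical:

```python
def effective_cost(quantity, unit_price, tiers):
    # tiers: list of (min_quantity, rebate_fraction), sorted ascending.
    # The highest tier reached applies to the whole order (all-units rebate),
    # which is what makes the cost function discontinuous in quantity.
    rebate = 0.0
    for min_qty, frac in tiers:
        if quantity >= min_qty:
            rebate = frac
    return quantity * unit_price * (1.0 - rebate)

tiers = [(0, 0.0), (100, 0.05), (500, 0.12)]
cost_99 = effective_cost(99, 10.0, tiers)    # just below the first breakpoint
cost_100 = effective_cost(100, 10.0, tiers)  # jumps down: 5% off everything
```

Ordering one more unit here lowers the total cost (950 vs. 990), so the cost curve is not monotone; a piecewise-linear MIP approximation captures these jumps with binary tier-selection variables.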
Procedia PDF Downloads 257
24636 Scheduling Method for Electric Heater in HEMS Considering User’s Comfort
Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim
Abstract:
Home Energy Management Systems (HEMS), which let residential consumers contribute to demand response, have been attracting attention in recent years. The aim of a HEMS is to minimize the consumer's electricity cost by controlling the use of appliances according to the electricity price. Appliance usage in a HEMS may be affected by conditions such as the external temperature and the electricity price; the user's usage pattern should therefore be modeled according to these external conditions, and the resulting pattern reflects the user's comfort in using each appliance. This paper proposes a methodology to model the usage pattern from historical data with a copula function. Through the copula function, a usage range for each appliance can be obtained that satisfies the appropriate user comfort for the next day under the external conditions. Within this usage range, an optimal schedule for the appliances is computed so as to minimize the electricity cost while considering user comfort. Among home appliances, the electric heater (EH) is representative of those affected by external temperature, so this paper develops an optimal scheduling algorithm for the EH based on the branch-and-bound method. As a result, scenarios for EH usage are obtained for different comfort levels, from which the residential consumer selects the best one. The case study shows the effect of the proposed algorithm compared with the traditional operation of the EH, as well as the impact of the comfort level on the scheduling result. Keywords: load scheduling, usage pattern, user's comfort, copula function, branch and bound, electric heater
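A branch-and-bound search over hourly on/off decisions can be sketched as follows. The comfort requirement is reduced here to a minimum number of heating hours, a crude stand-in for the paper's copula-derived usage ranges, and the tariff is hypothetical:

```python
def schedule_heater(prices, min_on_hours):
    # Branch and bound over per-hour on/off decisions: minimize cost at the
    # given hourly prices subject to running at least `min_on_hours` hours
    # (the stand-in comfort constraint).
    best = {"cost": float("inf"), "plan": None}

    def branch(hour, plan, cost, on_so_far):
        remaining = len(prices) - hour
        # Bound: prune if the comfort constraint can no longer be met, or if
        # the partial cost already matches or exceeds the incumbent.
        if on_so_far + remaining < min_on_hours or cost >= best["cost"]:
            return
        if hour == len(prices):
            best["cost"], best["plan"] = cost, plan
            return
        branch(hour + 1, plan + [0], cost, on_so_far)                  # off
        branch(hour + 1, plan + [1], cost + prices[hour], on_so_far + 1)  # on

    branch(0, [], 0.0, 0)
    return best["plan"], best["cost"]

prices = [0.30, 0.10, 0.25, 0.08, 0.20, 0.15]  # hypothetical hourly tariff
plan, cost = schedule_heater(prices, min_on_hours=3)
```

Varying `min_on_hours` plays the role of the comfort level: each value yields one scheduling scenario, and the consumer picks among the resulting cost/comfort trade-offs.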
Procedia PDF Downloads 582
24635 Development of a Direct Immunoassay for Human Ferritin Using Diffraction-Based Sensing Method
Authors: Joel Ballesteros, Harriet Jane Caleja, Florian Del Mundo, Cherrie Pascual
Abstract:
Diffraction-based sensing was utilized for the quantification of human ferritin in blood serum, to provide an alternative to the label-based immunoassays currently used in clinical diagnostics and research. The diffraction intensity was measured by the diffractive optics technology (dotLab™) system. Two formats were evaluated: a direct immunoassay and a direct sandwich immunoassay. In the direct immunoassay, human ferritin was captured by anti-ferritin antibodies immobilized on an avidin-coated sensor, while the direct sandwich immunoassay added a step in which a detector anti-ferritin antibody binds to the analyte complex. Both methods were repeatable, with coefficients of variation below 15%. The direct sandwich immunoassay had a linear response from 10 to 500 ng/mL, wider than the 100-500 ng/mL of the direct immunoassay, and a higher calibration sensitivity of 0.004 Diffractive Intensity (ng mL-1)-1 compared with 0.002 Diffractive Intensity (ng mL-1)-1 for the direct immunoassay. The limit of detection (LOD) and limit of quantification (LOQ) of the direct immunoassay were 29 ng/mL and 98 ng/mL, respectively, while the direct sandwich immunoassay had an LOD of 2.5 ng/mL and an LOQ of 8.2 ng/mL. In terms of accuracy, the direct immunoassay had a percent recovery of 88.8-93.0% in PBS, and the direct sandwich immunoassay 94.1-97.2%. Based on these results, the direct sandwich immunoassay is the better diffraction-based immunoassay in terms of accuracy, LOD, LOQ, linear range, and sensitivity. It was therefore used to determine human ferritin in blood serum, and the results were validated by chemiluminescent magnetic immunoassay (CMIA).
The calculated Pearson correlation coefficient was 0.995, and the paired-sample t-test showed that the results of the direct sandwich immunoassay were comparable to those of CMIA, so it could be utilized as an alternative analytical method. Keywords: biosensor, diffraction, ferritin, immunoassay
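The method-comparison statistic used above is the Pearson correlation between paired ferritin values from the two assays. A small sketch with illustrative (not the study's) paired serum results:

```python
def pearson_r(x, y):
    # Pearson correlation coefficient for paired measurements:
    # r = cov(x, y) / (sd(x) * sd(y)), here computed from raw sums.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired ferritin results (ng/mL) from the two methods.
dot_lab = [12.0, 55.0, 130.0, 260.0, 480.0]  # diffraction-based assay
cmia = [11.5, 57.0, 126.0, 265.0, 472.0]     # reference CMIA assay
r = pearson_r(dot_lab, cmia)
```

An r close to 1, as with the study's 0.995, indicates the two methods rank and scale samples almost identically; the paired t-test then checks for any systematic offset between them.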
Procedia PDF Downloads 352