Search results for: inverse method
18392 Development of an Optimization Method for Myoelectric Signal Processing by Active Matrix Sensing in Robot Rehabilitation
Authors: Noriyoshi Yamauchi, Etsuo Horikawa, Takunori Tsuji
Abstract:
Training with exoskeleton robots is drawing attention as a rehabilitation method for the body paralysis seen in many cases, and many such systems provide assistance based on the myoelectric signals generated by motor commands from the brain. Rehabilitation requires frequent training, but the technical skill needed to identify myoelectric derivation sites and to attach the device is one of the reasons preventing this technology from spreading. In this research, we focus on improving the efficiency of gait training with exoskeleton-type robots, improving myoelectric acquisition and analysis using an active matrix sensing method, and improving walking rehabilitation through optimization of robot control.
Keywords: active matrix sensing, brain machine interface (BMI), central pattern generator (CPG), myoelectric signal processing, robot rehabilitation
Procedia PDF Downloads 385
18391 Bounded Solution Method for Geometric Programming Problem with Varying Parameters
Authors: Abdullah Ali H. Ahmadini, Firoz Ahmad, Intekhab Alam
Abstract:
The geometric programming problem (GPP) is a well-known non-linear optimization problem with a wide range of applications in engineering. The structure of GPP is quite dynamic and fits easily into various decision-making processes. The aim of this paper is to present a bounded solution method for GPP, with special reference to variation among the right-hand-side parameters. The paper takes advantage of two-level mathematical programming and determines the value of the objective function within a specified interval defined by lower and upper bounds. The advantage of the proposed bounded solution method is that it does not require sensitivity analysis of the obtained optimal solution; the value of the objective function is calculated directly under varying parameters. To show the validity and applicability of the proposed method, a numerical example is presented. A system reliability optimization problem is also illustrated, and the value of the objective function is found to lie within the range of the lower and upper bounds. Finally, conclusions and future research directions based on the discussed work are given.
Keywords: varying parameters, geometric programming problem, bounded solution method, system reliability optimization
Procedia PDF Downloads 133
18390 Digital Image Steganography with Multilayer Security
Authors: Amar Partap Singh Pharwaha, Balkrishan Jindal
Abstract:
In this paper, a new method is developed for hiding an image within a digital image with multilayer security. In the proposed method, the secret image is first encrypted using a flexible-matrix-based symmetric key, adding the first layer of security. A second layer of security is then added by encrypting the ciphered data using a Pythagorean-theorem-based method. The ciphered data bits (4 bits) produced after double encryption are then embedded within the digital image in the spatial domain using Least Significant Bit (LSB) substitution. To improve the image quality of the stego-image, an improved form of pixel adjustment is proposed. To evaluate the effectiveness of the proposed method, image quality metrics including Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), entropy, correlation, mean value and Universal Image Quality Index (UIQI) are measured. It has been found experimentally that the proposed method provides higher security as well as robustness; the results of this study are quite promising.
Keywords: Pythagorean theorem, pixel adjustment, ciphered data, image hiding, least significant bit, flexible matrix
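The LSB-substitution step described above can be sketched in a few lines. This is a generic illustration, not the paper's exact scheme: the 2-bits-per-pixel layout and the function names are assumptions, and the two encryption layers and the pixel-adjustment process are out of scope.

```python
def embed_lsb(pixels, bits, n_lsb=2):
    """Replace the n_lsb least significant bits of each 8-bit pixel with data bits."""
    out = list(pixels)
    mask = ~((1 << n_lsb) - 1) & 0xFF  # clears the n_lsb low bits
    for i in range(0, len(bits), n_lsb):
        chunk = bits[i:i + n_lsb]
        value = int("".join(map(str, chunk)).ljust(n_lsb, "0"), 2)
        out[i // n_lsb] = (out[i // n_lsb] & mask) | value
    return out

def extract_lsb(pixels, n_bits, n_lsb=2):
    """Read back n_bits data bits from the pixel LSBs, most significant first."""
    bits = []
    for p in pixels:
        for k in range(n_lsb - 1, -1, -1):
            bits.append((p >> k) & 1)
    return bits[:n_bits]
```

For example, embedding the bits `[1, 0, 1, 1]` into pixels `[200, 37]` changes them only in their two low-order bits, which is what keeps the visual distortion of the stego-image small.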
Procedia PDF Downloads 337
18389 Optimizing of Machining Parameters of Plastic Material Using Taguchi Method
Authors: Jumazulhisham Abdul Shukor, Mohd. Sazali Said, Roshanizah Harun, Shuib Husin, Ahmad Razlee Ab Kadir
Abstract:
This paper applies the Taguchi optimization method to determine the best machining parameters for a pocket milling process on polypropylene (PP) using a CNC milling machine with carbide insert cutting tools, taking surface roughness as the response. Three machining parameters (speed, feed rate and depth of cut) are investigated at three levels each (low, medium and high) using a Taguchi orthogonal array. The settings of the machining parameters were determined by the Taguchi method, and the signal-to-noise (S/N) ratio was assessed to define the optimal levels and to predict the effect on surface roughness of the parameters assigned in the L9 array. The final experimental outcomes are presented to verify that the optimized parameters recommended to the manufacturer are accurate.
Keywords: inserts, milling process, signal-to-noise (S/N) ratio, surface roughness, Taguchi optimization method
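For a response like surface roughness, where smaller values are better, the S/N ratio mentioned above follows the standard Taguchi smaller-the-better formula, shown here as a minimal sketch (the function name is ours):

```python
import math

def sn_smaller_is_better(responses):
    """Taguchi S/N ratio for a smaller-the-better response such as
    surface roughness: S/N = -10 * log10(mean of y^2)."""
    mean_sq = sum(y * y for y in responses) / len(responses)
    return -10.0 * math.log10(mean_sq)
```

The optimal level of each factor is then the one with the highest S/N ratio across the L9 runs.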
Procedia PDF Downloads 637
18388 4-Chlorophenol Degradation in Water Using TiO₂-X%ZnS Synthesized by One-Step Sol-Gel Method
Authors: M. E. Velásquez Torres, F. Tzompantzi, J. C. Castillo-Rodríguez, A. G. Romero Villegas, S. Mendéz-Salazar, C. E. Santolalla-Vargas, J. Cardoso-Martínez
Abstract:
Photocatalytic degradation, as an advanced oxidation technology, is a promising method for degrading organic pollutants. Chlorophenols, in particular, should be removed from water because they are highly toxic. TiO₂-X%ZnS photocatalysts, where X represents the molar percentage of ZnS (3%, 5%, 10%, and 15%), were synthesized by a one-step sol-gel method, refluxed for 36 hours, dried at 80°C, and calcined at 400°C, and then used to degrade 4-chlorophenol. The band gap of each photocatalyst was measured using a Cary 100 UV-Visible spectrometer with an integrating-sphere accessory: 2.7 eV for TiO₂, 2.8 eV for TiO₂-3%ZnS and TiO₂-5%ZnS, 2.9 eV for TiO₂-10%ZnS, and 2.6 eV for TiO₂-15%ZnS. In a batch-type reactor, under the irradiation of a mercury lamp (λ = 254 nm, Pen-Ray), degradations of 55 ppm 4-chlorophenol at 360 minutes were obtained with the synthesized photocatalysts: 60% (3% ZnS), 66% (5% ZnS), 74% (10% ZnS) and 58% (15% ZnS). The best photocatalyst was therefore TiO₂-10%ZnS, with a degradation percentage of 74%.
Keywords: 4-chlorophenol, photocatalysis, water pollutant, sol-gel
Procedia PDF Downloads 131
18387 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis
Authors: Petr Gurný
Abstract:
One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the determination of a financial institution's PD using credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. These models are then compared and verified on a control sample in order to choose the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of future PD for the Czech banks: the values of particular indicators are sampled randomly and the distribution of PDs is estimated, under the assumption that the indicators follow a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all the banks are relatively healthy, there is still a high chance that "a financial crisis" will occur, at least in terms of probability; this is indicated by various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default
Procedia PDF Downloads 456
18386 Earnings Volatility and Earnings Predictability
Authors: Yosra Ben Mhamed
Abstract:
Most previous research investigating the importance of earnings volatility for a firm's value has focused on the effects of earnings volatility on the cost of capital. Many studies illustrate that earnings volatility can reduce firm value by increasing the cost of capital. However, a few recent studies directly examine the relation between earnings volatility and subsequent earnings levels. In our study, we further explore the role of volatility in forecasting. Our study makes two primary contributions to the literature. First, taking into account the level of a firm's current performance, we provide a causal theory for the link between volatility and earnings predictability, whereas previous studies testing the linearity of this relationship have not offered an underlying theory. Second, our study contributes to the vast body of fundamental analysis research that identifies variables that improve valuation, by showing that earnings volatility affects the estimation of future earnings; projections of earnings are used in valuation research and practice to derive estimates of firm value. Since we want to examine the impact of volatility on earnings predictability, we sort the sample into three portfolios according to the level of earnings volatility in ascending order, and present the predictability coefficient for each portfolio. In a second test, each of these portfolios is then sorted into three further portfolios based on the level of current earnings, yielding nine portfolios, so that we can observe whether volatility strongly predicts a decrease in earnings predictability only for the highest-earnings portfolio. In general, we find that earnings volatility has an inverse relationship with earnings predictability. Our results also show that the sensitivity of earnings predictability to ex-ante volatility is more pronounced among profitable firms. The findings are most consistent with overinvestment and persistence explanations.
Keywords: earnings volatility, earnings predictability, earnings persistence, current profitability
Procedia PDF Downloads 433
18385 Modelling the Indonesian Government Securities Yield Curve Using Nelson-Siegel-Svensson and Support Vector Regression
Authors: Jamilatuzzahro, Rezzy Eko Caraka
Abstract:
The yield curve is the plot of the yield to maturity of zero-coupon bonds against maturity. In practice, the yield curve is not observed directly but must be extracted from observed bond prices for a set of (usually incomplete) maturities. Many methodologies and theories exist for analyzing the yield curve. We use the Nelson-Siegel, Svensson, and support vector regression (SVR) methods to construct and compare zero-coupon yield curves. The objectives of this research were: (i) to study the adequacy of the NSS model and SVR for Indonesian government bond data, and (ii) to choose the best optimization or estimation method for the NSS model and SVR. To meet these objectives, the research proceeded through the following steps: data preparation, data cleaning or filtering, modeling, and model evaluation.
Keywords: support vector regression, Nelson-Siegel-Svensson, yield curve, Indonesian government
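For reference, the Svensson (NSS) zero-coupon yield at maturity tau is a four-factor extension of the Nelson-Siegel form; a minimal sketch follows (parameter names are generic, and the values used below are illustrative, not fitted to Indonesian data):

```python
import math

def nss_yield(tau, b0, b1, b2, b3, l1, l2):
    """Nelson-Siegel-Svensson zero-coupon yield at maturity tau (in years).
    b0 is the long-run level; b1, b2, b3 load on slope and curvature terms
    with decay parameters l1 and l2."""
    f1 = (1 - math.exp(-tau / l1)) / (tau / l1)
    f2 = f1 - math.exp(-tau / l1)
    f3 = (1 - math.exp(-tau / l2)) / (tau / l2) - math.exp(-tau / l2)
    return b0 + b1 * f1 + b2 * f2 + b3 * f3
```

Two sanity checks follow from the functional form: the yield tends to b0 + b1 as maturity goes to zero and to b0 as maturity grows large.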
Procedia PDF Downloads 244
18384 Compassion Fade: Effects of Mass Perception and Intertemporal Choice on Non-Volunteering Behavior
Authors: Mariel L. Alonzo, Patricia Mae T. Chi, Juliana Patrice P. Mayormita, Sanjana A. Sorio
Abstract:
Compassion fade proposes an inverse relationship between the magnitude of a stimulus and the compassion it elicits. This phenomenon is viewed within a framework that integrates a 3-Act Compassion structure with Latané and Darley's Unresponsive Bystander Model and Prospect Theory of decision-making under risk. Students (N=211) from Ateneo de Davao were sampled to examine the effects of mass perception (increasing number of needy persons) and intertemporal choice (soon versus later) on volunteering behavior. Collegiate classes in their natural setting were randomly assigned to five different treatment groups and were presented with audiovisual presentations featuring an increasing number of needy persons. The students were deceived to believe that two hypothetical feeding programs for Marawi refugees, taking place in 1 month and 6 months, needed volunteers for their preparatory phases. Results show a statistically significant (p=0.000; p=0.013) non-linear trend, consistent across both feeding programs. Mean volunteered time decreased as the number of identifiable victims increased from 0 to 47, and increased as the number progressed towards 267 non-identifiable victims. The highest interest was expressed when 0 needy people were shown, and the least for 47. Zero hours volunteered was consistently the mode and median in all treatments. There was no statistically significant temporal discounting effect.
Keywords: compassion, group perception, identifiable victim, intertemporal choice, prosocial behavior, unresponsive bystander
Procedia PDF Downloads 208
18383 A Runge Kutta Discontinuous Galerkin Method for Lagrangian Compressible Euler Equations in Two-Dimensions
Authors: Xijun Yu, Zhenzhen Li, Zupeng Jia
Abstract:
This paper presents a new cell-centered Lagrangian scheme for two-dimensional compressible flow. The new scheme uses a semi-Lagrangian form of the Euler equations. The system of equations is discretized by the Discontinuous Galerkin (DG) method using the Taylor basis in Eulerian space. The vertex velocities and the numerical fluxes through the cell interfaces are computed consistently by a nodal solver, and the mesh moves with the fluid flow. Time marching is implemented by a class of Runge-Kutta (RK) methods, with a WENO reconstruction used as a limiter for the RKDG method. The scheme conserves mass, momentum and total energy, maintains second-order accuracy, and is free of adjustable parameters. Results of several numerical tests are presented to demonstrate the accuracy and robustness of the scheme.
Keywords: cell-centered Lagrangian scheme, compressible Euler equations, RKDG method
Procedia PDF Downloads 546
18382 Contractor Selection by Using Analytical Network Process
Authors: Badr A. Al-Jehani
Abstract:
Nowadays, contractor selection is a critical activity for the project owner. Selecting the right contractor is essential for the success of the project, and this can happen only by using a proper selection method. Traditionally, the contractor is selected based on the offered bid price. This approach focuses only on the price factor and ignores other factors essential to the success of the project. In this research paper, the Analytic Network Process (ANP) method is used as a decision-tool model to select the most appropriate contractor. This decision-making method can help clients who work in the construction industry to identify contractors who are capable of delivering satisfactory outcomes. Moreover, this research paper provides a case study of selecting the proper contractor among three contractors using the ANP method. The case study identifies and computes the relative weights of the eight criteria and eleven sub-criteria using a questionnaire.
Keywords: contractor selection, project management, decision-making, bidding
Procedia PDF Downloads 88
18381 Preparation of Nanophotonics LiNbO3 Thin Films and Studying Their Morphological and Structural Properties by Sol-Gel Method for Waveguide Applications
Authors: A. Fakhri Makram, Marwa S. Alwazni, Al-Douri Yarub, Evan T. Salim, Hashim Uda, Chin C. Woei
Abstract:
Lithium niobate (LiNbO3) nanostructures were prepared on quartz substrates by the sol-gel method. They were deposited with different molarity concentrations and annealed at 500°C. The samples were characterized and analyzed by X-ray diffraction (XRD), scanning electron microscopy (SEM) and atomic force microscopy (AFM). The results showed that increasing the molarity concentration causes the structure to become more crystalline, regular and homogeneous, with well-distributed crystals, making it more suitable for optical waveguide applications.
Keywords: lithium niobate, morphological properties, thin film, Pechini method, XRD
Procedia PDF Downloads 446
18380 Multi-Fidelity Fluid-Structure Interaction Analysis of a Membrane Wing
Authors: M. Saeedi, R. Wuchner, K.-U. Bletzinger
Abstract:
In order to study the aerodynamic performance of a semi-flexible membrane wing, fluid-structure interaction simulations have been performed. The fluid problem has been modeled using two different approaches: the numerical solution of the Navier-Stokes equations and the vortex panel method. Nonlinear analysis of the structural problem is performed using the finite element method. A comparison between the two fluid solvers has been made. The aerodynamic performance of the wing is discussed in terms of its lift and drag coefficients, which are compared with those of an equivalent rigid wing.
Keywords: CFD, FSI, membrane wing, vortex panel method
Procedia PDF Downloads 486
18379 Application of Unmanned Aerial Vehicle in Urban Rail Transit Intelligent Inspection
Authors: Xinglu Nie, Feifei Tang, Chuntao Wei, Zhimin Ruan, Qianhong Zhu
Abstract:
The current manual inspection method cannot fully meet the security requirements of urban rail transit in China. In this paper, an intelligent inspection method using an unmanned aerial vehicle (UAV) is presented. A series of orthophotos of the monitored rail transit area was collected by UAV, image correction and registration were performed among multi-phase images, and change detection was then applied to identify engineering and human activities that may become potential threats to urban rail security. This method provides not only qualitative but also quantitative judgment of changes in the security control area, which improves the objectivity and efficiency of the patrol results. Line No. 6 of Chongqing Municipality was taken as an example to verify the validity of this method.
Keywords: rail transit, control of protected areas, intelligent inspection, UAV, change detection
Procedia PDF Downloads 370
18378 Correlation between Overweightness and the Extent of Coronary Atherosclerosis among the South Caspian Population
Authors: Maryam Nabati, Mahmood Moosazadeh, Ehsan Soroosh, Hanieh Shiraj, Mahnaneh Gholami, Ali Ghaemian
Abstract:
Background: Reported effects of obesity on the extent of angiographic coronary artery disease (CAD) have been inconsistent. The present study aimed to investigate the relationships between the indices of obesity and other anthropometric markers and the extent of CAD. Methods: This study was conducted on 1008 consecutive patients who underwent coronary angiography. Body mass index (BMI), waist circumference (WC), waist-to-hip ratio (WHR), and waist-to-height ratio (WHtR) were separately calculated for each patient. The extent, severity, and complexity of CAD were determined by the Gensini and SYNTAX scores. Results: There was a significant inverse correlation between the SYNTAX score and BMI (r = −0.110; P < 0.001), WC (r = −0.074; P = 0.018), and WHtR (r = −0.089; P = 0.005). Furthermore, a significant inverse correlation was observed between the Gensini score and BMI (r = −0.090; P = 0.004) and WHtR (r = −0.065; P = 0.041). However, multivariate linear regression analysis did not show any association between the SYNTAX and Gensini scores and the indices of obesity and overweight. On the other hand, patients with an unhealthy WC had a higher prevalence of diabetes mellitus (DM) (P = 0.004) and hypertension (HTN) (P < 0.001) compared to patients with healthy values. Coexistence of HTN and DM was more prevalent in subjects with an unhealthy WC and WHR than in those with healthy values (P = 0.002 and P = 0.032, respectively). Conclusion: It seems that the anthropometric indices of obesity are not predictors of the angiographic severity of CAD; however, they are associated with an increased risk of cardiovascular risk factors and a higher risk profile.
Keywords: body mass index, BMI, coronary artery disease, waist circumference
Procedia PDF Downloads 140
18377 Generalized Vortex Lattice Method for Predicting Characteristics of Wings with Flap and Aileron Deflection
Authors: Mondher Yahyaoui
Abstract:
A generalized vortex lattice method for complex lifting surfaces with flap and aileron deflection is formulated. The method is not restricted by the linearized-theory assumption and accounts for all standard geometric lifting-surface parameters: camber, taper, sweep, washout and dihedral, in addition to flap and aileron deflection. Thickness is not accounted for, since the physical lifting body is replaced by a lattice of panels located on the mean camber surface. This panel lattice setup and the treatment of different wake geometries are what distinguish the present work from the overwhelming majority of previous solutions based on the vortex lattice method. A MATLAB code implementing the proposed formulation is developed and validated by comparing our results to existing experimental and numerical ones, and good agreement is demonstrated. The code is then used to study the accuracy of the widely used classical vortex lattice method. It is shown that the classical approach gives good agreement in the clean configuration but is off by as much as 30% when a flap or aileron deflection of 30° is imposed; this discrepancy is mainly due to the linearized-theory assumption associated with the conventional method. A comparison of the effect of four different wake geometries on the values of the aerodynamic coefficients was also carried out, and it was found that the choice of wake shape had very little effect on the results.
Keywords: aileron deflection, camber-surface-bound vortices, classical VLM, generalized VLM, flap deflection
Procedia PDF Downloads 435
18376 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient encoding and decoding algorithms, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. Wavelet transforms are applied in various fields of science; some of their applications are denoising signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling-function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first is based on the multiplicative inverse in a finite field, while in the second, the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.
Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
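A loose sketch of the two redundancy rules named above, computed over a prime field GF(p) for simplicity rather than the binary extension fields a practical code would use; the field size and function names here are our assumptions, not the authors' exact construction:

```python
P = 2**13 - 1  # 8191, a Mersenne prime standing in for the field size (assumption)

def redundancy_inverse(x, p=P):
    """First construction: the redundancy part is the multiplicative
    inverse of the information part x in GF(p); requires x != 0 mod p."""
    return pow(x, -1, p)

def redundancy_cube(x, p=P):
    """Second construction: the redundancy part is the cube of the
    information part x, reduced in GF(p)."""
    return pow(x, 3, p)
```

Both maps are nonlinear in x, which is what spreads the error-masking probability uniformly over error configurations instead of concentrating it as a linear checksum would.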
Procedia PDF Downloads 489
18375 Bridging Stress Modeling of Composite Materials Reinforced by Fiber Using Discrete Element Method
Authors: Chong Wang, Kellem M. Soares, Luis E. Kosteski
Abstract:
The problem of toughening in brittle materials reinforced by fibers is complex, involving all the mechanical properties of the fibers, the matrix and the fiber/matrix interface, as well as the geometry of the fibers. An appropriate method for the simulation and analysis of toughening is therefore essential. In this work, we performed simulations and analysis of toughening in a brittle matrix reinforced by randomly distributed fibers by means of the discrete element method. First, we put forward a mechanical model of the contribution of random fibers to the toughening of the composite. Then, with numerical programming, we investigated the stress, damage and bridging force in the composite material when a crack appeared in the brittle matrix. From the results obtained, we conclude that: (i) fibers with high strength and low elastic modulus benefit toughening; (ii) fibers with a relatively high elastic modulus compared to the matrix may result in considerable matrix damage (spalling effect); and (iii) the use of high-strength synthetic fibers is a good option. The present work makes it possible to optimize these parameters in order to produce advanced ceramics with the desired performance. We believe a combination of the discrete element method (DEM) with the finite element method (FEM) can increase the versatility and efficiency of the software developed.
Keywords: bridging stress, discrete element method, fiber reinforced composites, toughening
Procedia PDF Downloads 445
18374 AI In Health and Wellbeing - A Seven-Step Engineering Method
Authors: Denis Özdemir, Max Senges
Abstract:
There are many examples of AI-supported apps for better health and wellbeing. Generally, these applications help people achieve their goals based on scientific research and input data, but they do not always explain how these three are related, e.g. by making implicit assumptions about goals that hold for many users but not for all. We present a seven-step method for designing health and wellbeing AIs covering goal setting, measurable results, real-time indicators, analytics, visual representations, communication, and feedback. It can guide engineers in developing apps, recommendation algorithms, and interfaces that support humans in their decision-making without patronizing them. To illustrate the method, we create a recommender AI for tiny wellbeing habits and run a small case study, including a survey. From the results, we infer how people perceive the relationship between themselves and the AI and to what extent it helps them achieve their goals. We review our seven-step engineering method and suggest modifications for the next iteration.
Keywords: recommender systems, natural language processing, health apps, engineering methods
Procedia PDF Downloads 165
18373 Using Artificial Intelligence Method to Explore the Important Factors in the Reuse of Telecare by the Elderly
Authors: Jui-Chen Huang
Abstract:
This research used an artificial intelligence method to explore elderly users' opinions on the reuse of telecare: the effects of service quality, satisfaction and customer perceived value on the intention to reuse, and the relationships among these factors. The study conducted a questionnaire survey of the elderly, obtaining a total of 124 valid copies of the questionnaire. It adopted a backpropagation network (BPN) as an effective and feasible analysis method that differs from the traditional approach. Two-thirds of the samples (82) were taken as training data, and one-third (42) as testing data. The training and testing RMSE (root mean square error) values are 0.022 and 0.009 in the BPN, respectively; as shown, the errors are acceptable. By contrast, the training and testing RMSE values are 0.100 and 0.099 in the regression model, respectively. In addition, the results showed that service quality has the greatest effect on the intention to reuse, followed by satisfaction and perceived value. The backpropagation network method thus performs better than regression analysis, and this result can be used as a reference for future research.
Keywords: artificial intelligence, backpropagation network (BPN), elderly, reuse, telecare
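The RMSE figures quoted above, used to compare the BPN against the regression model, follow the usual definition; a minimal sketch, for reference:

```python
import math

def rmse(predicted, actual):
    """Root mean square error between model outputs and target values."""
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)
```

A lower RMSE on held-out testing data (here 0.009 for the BPN versus 0.099 for regression) is what justifies the paper's claim that the network generalizes better.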
Procedia PDF Downloads 212
18372 Computing Customer Lifetime Value in E-Commerce Websites with Regard to Returned Orders and Payment Method
Authors: Morteza Giti
Abstract:
As online shopping becomes increasingly popular, computing customer lifetime value in order to know customers better is also gaining importance. Two distinct factors that can affect the value of a customer in the context of online shopping are the number of returned orders and the payment method. Returned orders are those which have been shipped but not collected by the customer and are returned to the store. Payment method refers to the way customers choose to pay for an order, of which there are usually two: pre-pay and cash-on-delivery. In this paper, a novel model called RFMSP is presented to calculate customer lifetime value taking these two parameters into account. The RFMSP model is based on the common RFM model with two extra parameters: the S represents the order status and the P indicates the payment method. As a case study for this model, the purchase history of customers in an online shop is used to compute customer lifetime value over a period of twenty months.
Keywords: RFMSP model, AHP, customer lifetime value, k-means clustering, e-commerce
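A hypothetical sketch of how the RFMSP idea could be scored: the classic R, F, M components extended with S (penalising returned, uncollected orders) and P (rewarding pre-payment over cash-on-delivery). The weights, scales and thresholds below are illustrative assumptions, not the paper's values:

```python
def rfmsp_score(recency_days, frequency, monetary, returned_ratio, prepaid):
    """Toy RFMSP score on a roughly 0-25 scale (all cutoffs are assumptions)."""
    r = 5 if recency_days <= 30 else 3 if recency_days <= 90 else 1
    f = min(5, frequency)                 # cap the frequency score at 5
    m = min(5, monetary / 100.0)          # 1 point per 100 currency units, capped
    s = 5 * (1.0 - returned_ratio)        # fewer returned orders -> higher score
    p = 5 if prepaid else 2               # pre-pay valued above cash-on-delivery
    return r + f + m + s + p
```

A recent, frequent, high-spending customer who pre-pays and never lets orders bounce scores near the top of the scale, while a stale cash-on-delivery customer with many returns scores near the bottom; the actual model would calibrate these components (e.g. via AHP weights) rather than sum them equally.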
Procedia PDF Downloads 320
18371 Solvent Extraction and Spectrophotometric Determination of Palladium(II) Using P-Methylphenyl Thiourea as a Complexing Agent
Authors: Shashikant R. Kuchekar, Somnath D. Bhumkar, Haribhau R. Aher, Bhaskar H. Zaware, Ponnadurai Ramasami
Abstract:
A precise, sensitive, rapid and selective method for the solvent extraction and spectrophotometric determination of palladium(II) using para-methylphenyl thiourea (PMPT) as an extractant is developed. Palladium(II) forms a yellow colored complex with PMPT which shows an absorption maximum at 300 nm. The colored complex obeys Beer's law up to 7.0 µg mL⁻¹ of palladium. The molar absorptivity and Sandell's sensitivity were found to be 8.486 x 10³ L mol⁻¹ cm⁻¹ and 0.0125 µg cm⁻², respectively. The optimum conditions for the extraction and determination of palladium have been established by monitoring the various experimental parameters. The precision of the method has been evaluated, and the relative standard deviation has been found to be less than 0.53%. The proposed method is free from interference from a large number of foreign ions, and has been successfully applied to the determination of palladium in an alloy and in synthetic mixtures corresponding to alloy samples.
Keywords: solvent extraction, PMPT, palladium(II), spectrophotometry
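As a back-of-the-envelope check of the reported figures, Beer's law (A = εbc) with the quoted molar absorptivity predicts the absorbance near the upper linearity limit; the 1 cm path length is an assumption on our part:

```python
MOLAR_ABSORPTIVITY = 8.486e3   # L mol^-1 cm^-1, as reported in the abstract
PD_MOLAR_MASS = 106.42         # g/mol, palladium

def absorbance(conc_ug_per_ml, path_cm=1.0):
    """Beer's law A = eps * b * c, with the mass concentration
    converted to molarity: ug/mL -> g/L -> mol/L."""
    c_mol_per_l = conc_ug_per_ml * 1e-3 / PD_MOLAR_MASS
    return MOLAR_ABSORPTIVITY * path_cm * c_mol_per_l
```

At the 7.0 µg mL⁻¹ Beer's-law limit, this predicts an absorbance of roughly 0.56, a comfortable value for a conventional spectrophotometer, which is consistent with the linearity claim.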
Procedia PDF Downloads 461
18370 Trace Analysis of Genotoxic Impurity Pyridine in Sitagliptin Drug Material Using UHPLC-MS
Authors: Bashar Al-Sabti, Jehad Harbali
Abstract:
Background: Pyridine is a reactive base that might be used in preparing sitagliptin. The International Agency for Research on Cancer classifies pyridine in group 2B, meaning that pyridine is possibly carcinogenic to humans. Therefore, pyridine should be monitored at the allowed limit in sitagliptin pharmaceutical ingredients. Objective: The aim of this study was to develop a novel ultra-high-performance liquid chromatography mass spectrometry (UHPLC-MS) method to estimate the quantity of the pyridine impurity in sitagliptin pharmaceutical ingredients. Methods: The separation was performed on a C8 Shim-pack column (150 mm x 4.6 mm, 5 µm) in reversed-phase mode using a water-methanol-acetonitrile mobile phase containing 4 mM ammonium acetate in gradient mode. Pyridine was detected by mass spectrometer using selected ion monitoring mode at m/z = 80. The flow rate of the method was 0.75 mL/min. Results: The method showed excellent sensitivity, with a quantitation limit of 1.5 ppm of pyridine relative to sitagliptin. The linearity of the method was excellent over the range of 1.5-22.5 ppm, with a correlation coefficient of 0.9996. Recovery values were between 93.59% and 103.55%. Conclusions: The results showed good linearity, precision, accuracy, sensitivity, selectivity, and robustness. The studied method was applied to test three batches of sitagliptin raw material. Highlights: This method is useful for monitoring pyridine in sitagliptin during its synthesis and for testing sitagliptin raw materials before using them in the production of pharmaceutical products.
Keywords: genotoxic impurity, pyridine, sitagliptin, UHPLC-MS
Procedia PDF Downloads 95
18369 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior
Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang
Abstract:
Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a liver segmentation method for a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching is a tracking method that operates by maximizing the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver's boundaries in regions with weak boundaries. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results. Compared with the original DRLS model, the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model against metrics comprising accuracy, sensitivity, and specificity.
Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method
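The Bhattacharyya similarity at the heart of the density-matching term is simple to state: for two discrete photometric distributions p and q it is Σ√(pᵢqᵢ), equal to 1 for identical distributions and 0 for non-overlapping ones. A minimal sketch (the histograms in the test are placeholders, not liver intensity models):

```python
import math

def bhattacharyya_coefficient(p, q):
    # Similarity between two discrete distributions (each sums to 1):
    # 1.0 for identical distributions, 0.0 for disjoint ones.
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def bhattacharyya_distance(p, q):
    # The associated distance, usable as a matching cost to minimize.
    return -math.log(bhattacharyya_coefficient(p, q))
```

In the segmentation setting, p would be the intensity histogram inside the evolving contour and q the model histogram; the contour evolution pushes the coefficient toward 1.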
Procedia PDF Downloads 313
18368 A Multi-Family Offline SPE LC-MS/MS Analytical Method for Anionic, Cationic and Non-ionic Surfactants in Surface Water
Authors: Laure Wiest, Barbara Giroud, Azziz Assoumani, Francois Lestremau, Emmanuelle Vulliet
Abstract:
Due to their production at high tonnages and their extensive use, surfactants are among the contaminants determined at the highest concentrations in wastewater. However, analytical methods and data regarding their occurrence in river water are scarce and concern only a few families, mainly anionic surfactants. The objective of this study was to develop an analytical method to extract and analyze a wide variety of surfactants in a minimum of steps, with a sensitivity compatible with the detection of ultra-traces in surface waters. 27 substances from 12 surfactant families, anionic, cationic, and non-ionic, were selected for method optimization. Different retention mechanisms for solid-phase extraction (SPE) were tested and compared in order to improve detection by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). The best results were finally obtained with a C18-grafted silica LC column and a hydrophilic-lipophilic balance (HLB) polymer cartridge, and the method developed allows the extraction of the three types of surfactants with satisfactory recoveries. The final analytical method comprises only one extraction and two LC injections. It was validated and applied to the quantification of surfactants in 36 river samples. The method's limits of quantification (LQ) and intra- and inter-day precision and accuracy were evaluated, and good performance was obtained for the 27 substances. As these compounds have many areas of application, contamination of instrument and method blanks was observed and taken into account in the determination of the LQ. Nevertheless, with LQs between 15 and 485 ng/L and accuracy over 80%, the method is suitable for monitoring surfactants in surface waters.
Application to French river samples revealed the presence of anionic, cationic, and non-ionic surfactants, with median concentrations ranging from 24 ng/L for octylphenol ethoxylates (OPEO) to 4.6 µg/L for linear alkylbenzenesulfonates (LAS). The analytical method developed in this work will therefore be useful for future monitoring of surfactants in waters. Moreover, since the method performs well for anionic, non-ionic, and cationic surfactants, it may easily be adapted to other surfactants.
Keywords: anionic surfactant, cationic surfactant, LC-MS/MS, non-ionic surfactant, SPE, surface water
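Two of the validation quantities above reduce to one-liners: recovery is measured-over-spiked, and a blank-contaminated LQ is commonly reported as the larger of the instrumental LQ and a multiple (often 10×) of the mean blank level. A sketch under that convention (the 10× factor is a common practice, not a detail stated in the abstract):

```python
def recovery_percent(measured, spiked):
    # Extraction recovery of a spiked sample, in percent.
    return 100.0 * measured / spiked

def blank_aware_lq(instrument_lq, blank_levels, factor=10.0):
    # Report the LQ as `factor` times the mean blank level when blank
    # contamination exceeds the instrumental LQ; otherwise keep the
    # instrumental value. Units are whatever the inputs use (e.g. ng/L).
    mean_blank = sum(blank_levels) / len(blank_levels)
    return max(instrument_lq, factor * mean_blank)
```

For a compound with clean blanks the instrumental LQ survives; for a ubiquitous surfactant the blank term dominates, which is consistent with the wide 15-485 ng/L LQ range reported.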
Procedia PDF Downloads 145
18367 Tax Treaties between Developed and Developing Countries: Withholding Taxes and Treaty Heterogeneity Content
Authors: Pranvera Shehaj
Abstract:
Unlike prior analyses of the withholding tax rates negotiated in tax treaties, this study examines treaty heterogeneity by investigating the impact of the residence country's double tax relief method and of tax-sparing agreements on the difference between developing countries' domestic withholding taxes on dividends, on one side, and treaty-negotiated withholding taxes at source on portfolio dividends, on the other. Using a dyadic panel dataset of asymmetric double tax treaties between 2005 and 2019, this study suggests, first, that the difference between domestic and negotiated WHTs on portfolio dividends is higher when the OECD member uses the credit method than when it uses the exemption method. Second, the results suggest that the inclusion of tax-sparing provisions eliminates the positive effect of the credit method at home on the difference between domestic and negotiated WHTs on portfolio dividends, incentivizing developing countries to negotiate higher withholding taxes.
Keywords: double tax treaties, asymmetric investments, withholding tax, dividends, double tax relief method, tax sparing
Procedia PDF Downloads 62
18366 Deciphering the Gut Microbiome's Role in Early-Life Immune Development
Authors: Xia Huo
Abstract:
Children are more vulnerable to environmental toxicants than adults, and their developing immune system is among the targets most sensitive to the toxicity of environmental toxicants. Studies have found that exposure to environmental toxicants is associated with impaired immune function in children, but only a few studies have focused on the relationship between environmental toxicant exposure and vaccine antibody potency and immunoglobulin (Ig) levels in children. These studies investigated the associations of exposure to polychlorinated biphenyls (PCBs), perfluorinated compounds (PFCs), heavy metals (Pb, Cd, As, Hg), and PM2.5 with serum-specific antibody concentrations and Ig levels against different vaccines, such as anti-Hib, tetanus, and diphtheria toxoid, and analyzed the possible mechanisms underlying exposure-related alterations of antibody titers and Ig levels. Results suggest that exposure to these toxicants is generally associated with decreased potency of antibodies produced by childhood immunizations and an overall deficiency in the protection the vaccines provide. Toxicant exposure is associated with vaccination failure, decreased antibody titers, and an increased risk of immune-related diseases in children through the alteration of specific immunoglobulin levels. Age, sex, nutritional status, and co-exposure may influence the effects of toxicants on immune function in children. Epidemiological evidence suggests that exposure-induced changes in the humoral immune-related tissue/cell/molecule response to vaccines may play predominant roles in the inverse associations between antibody responsiveness to vaccines and environmental toxicants. These results help inform better immunization policies for children under environmental toxicant burden.
Keywords: environmental toxicants, immunotoxicity, vaccination, antibodies, children's health
Procedia PDF Downloads 59
18365 Superconvergence of the Iterated Discrete Legendre Galerkin Method for Fredholm-Hammerstein Equations
Authors: Payel Das, Gnaneshwar Nelakanti
Abstract:
In this paper, we analyse the iterated discrete Legendre Galerkin method for Fredholm-Hammerstein integral equations with smooth kernels. Using a sufficiently accurate numerical quadrature rule, we obtain superconvergence rates for the iterated discrete Legendre Galerkin solutions in both the infinity and $L^2$ norms. Numerical examples are given to illustrate the theoretical results.
Keywords: Hammerstein integral equations, spectral method, discrete Galerkin, numerical quadrature, superconvergence
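A "sufficiently accurate numerical quadrature rule" in this context typically means a Gauss-type rule whose degree of exactness outpaces the approximation order; an n-point Gauss-Legendre rule, for instance, integrates polynomials of degree up to 2n − 1 exactly on [−1, 1]. A minimal illustration of that exactness (a generic quadrature demo, not the authors' specific discretization):

```python
import numpy as np

def gauss_legendre_integrate(f, n):
    # n-point Gauss-Legendre rule on [-1, 1]: exact for polynomials of
    # degree <= 2n - 1, which is what keeps the quadrature error from
    # polluting the Galerkin superconvergence rates.
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return float(np.dot(weights, f(nodes)))
```

A 3-point rule already integrates x⁴ exactly (∫₋₁¹ x⁴ dx = 2/5), while smooth non-polynomial integrands converge spectrally fast as n grows.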
Procedia PDF Downloads 470
18364 Numerical Solution of Two-Dimensional Solute Transport System Using Operational Matrices
Authors: Shubham Jaiswal
Abstract:
In this study, the numerical solution of a two-dimensional solute transport system in a homogeneous porous medium of finite length is obtained. The considered transport system has terms accounting for advection, dispersion, and first-order decay, with first-type boundary conditions. Initially, the aquifer is considered solute-free, and a constant input concentration is imposed at the inlet boundary. The solution describes the solute concentration in the rectangular inflow region of the homogeneous porous medium. The numerical solution is derived using the spectral collocation method. The numerical computations and graphical presentations show that the method is effective and reliable for solving the physical model with complicated boundary conditions, even in the presence of the reaction term.
Keywords: two-dimensional solute transport system, spectral collocation method, Chebyshev polynomials, Chebyshev differentiation matrix
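The Chebyshev differentiation matrix listed in the keywords is the workhorse of the collocation approach: applied to a vector of nodal values, it returns the nodal values of the derivative, exactly for polynomials up to degree N. A standard construction on the Gauss-Lobatto points x_j = cos(jπ/N) (a generic sketch of the operational-matrix idea, not the authors' exact matrices):

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix D and nodes x on the N+1
    # Gauss-Lobatto points x_j = cos(j*pi/N). (D @ u) approximates u'
    # at the nodes, exactly for polynomials of degree <= N.
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0                      # endpoint weights
    c *= (-1.0) ** np.arange(N + 1)         # alternating signs
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))             # negative-sum trick for diagonal
    return D, x
```

For the 2D transport system, the same matrix acts along each spatial direction (via Kronecker products), turning the advection-dispersion operator into plain matrix algebra.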
Procedia PDF Downloads 232
18363 Identification of Bayesian Network with Convolutional Neural Network
Authors: Mohamed Raouf Benmakrelouf, Wafa Karouche, Joseph Rynkiewicz
Abstract:
In this paper, we propose an alternative method to construct a Bayesian network (BN). This method relies on a convolutional neural network (CNN) classifier, which determines the edges of the network skeleton. We train a CNN on a normalized empirical probability density function (NEPDF) to predict causal interactions and relationships. We then search for the optimal Bayesian network structure for causal inference, undertaking a search for pair-wise causality that depends on the causal assumptions considered. In order to avoid unreasonable causal structures, we use a blacklist and a whitelist of causal directions. We tested the method on real data to assess the influence of education on the voting intention for the extreme right-wing party. We show that, with this method, we obtain a safer causal structure of variables (Bayesian network) and can identify a variable that satisfies the backdoor criterion.
Keywords: Bayesian network, structure learning, optimal search, convolutional neural network, causal inference
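The NEPDF input can be pictured as a small 2D image: a joint histogram of a variable pair, normalized so a CNN can score it like any other image. A minimal sketch of that preprocessing plus the blacklist/whitelist filtering step (the bin count, the 0.5 threshold, and the function names are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def nepdf(x, y, bins=16):
    # Joint histogram of a variable pair, normalized to a probability
    # mass and rescaled to [0, 1] like image intensities, to serve as
    # a CNN input for scoring the candidate edge x -> y.
    h, _, _ = np.histogram2d(x, y, bins=bins)
    h = h / h.sum()
    return h / h.max()

def filter_edges(scored_edges, whitelist=frozenset(), blacklist=frozenset()):
    # Apply causal-direction constraints: drop blacklisted directions,
    # always keep whitelisted ones, threshold the rest on the CNN score.
    kept = []
    for a, b, score in scored_edges:
        if (a, b) in blacklist:
            continue
        if (a, b) in whitelist or score >= 0.5:
            kept.append((a, b))
    return kept
```

The surviving directed edges then form the skeleton handed to the structure search.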
Procedia PDF Downloads 176