Search results for: weighted overlay method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19417

18727 A Family of Second Derivative Methods for Numerical Integration of Stiff Initial Value Problems in Ordinary Differential Equations

Authors: Luke Ukpebor, C. E. Abhulimen

Abstract:

Stiff initial value problems in ordinary differential equations are problems whose typical solutions decay rapidly and exponentially, making their numerical treatment very tedious. Conventional numerical integration solvers cannot cope effectively with stiff problems because they lack adequate stability characteristics. In this article, we developed a new family of four-step, second derivative, exponentially fitted methods of order six for the numerical integration of stiff initial value problems for general first order differential equations. In deriving our method, we broke the general multi-derivative multistep method down into predictor and corrector schemes possessing free parameters that allow automatic fitting to exponential functions. The stability analysis of the method was discussed, and the method was implemented on numerical examples. The results show that the method is A-stable and competes favorably with existing methods in terms of efficiency and accuracy.
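
The stability gap that motivates A-stable schemes can be seen in a minimal experiment (a generic illustration with first-order Euler steps, not the authors' sixth-order exponentially fitted method):

```python
# Illustration of stiffness on y' = lam*y, y(0) = 1, with lam = -50.
# Forward Euler is only stable for h < 2/|lam|; backward Euler, an
# A-stable method, remains stable for any step size.
lam = -50.0
h = 0.1   # deliberately larger than the explicit stability limit 2/|lam| = 0.04
y_fe, y_be = 1.0, 1.0
for _ in range(20):
    y_fe = y_fe + h * lam * y_fe     # forward Euler: y_{n+1} = y_n + h*f(y_n)
    y_be = y_be / (1.0 - h * lam)    # backward Euler: y_{n+1} = y_n + h*lam*y_{n+1}, solved for y_{n+1}
print(abs(y_fe) > 1e3)   # explicit solution blows up
print(0 < y_be < 1e-6)   # implicit solution decays like the true solution
```

Both lines print True: the explicit iterate grows by a factor of |1 + h*lam| = 4 per step, while the implicit one contracts by 1/6 per step, mirroring the true exponential decay.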

Keywords: A-stable, exponentially fitted, four-step, predictor-corrector, second derivative, stiff initial value problems

Procedia PDF Downloads 258
18726 A New Method to Reduce 5G Application Layer Payload Size

Authors: Gui Yang Wu, Bo Wang, Xin Wang

Abstract:

Nowadays, the 5G service-based interface architecture uses text-based payloads such as JSON to transfer business data between network functions, which has obvious advantages for internet services but causes unnecessarily large traffic. In this paper, a new 5G application payload size reduction method is presented. It provides a mechanism for network functions to negotiate a new capability when communication starts and specifies how 5G application data are reduced according to the information negotiated with the peer network function. Without losing the advantages of 5G text-based payloads, this method demonstrates excellent application payload size reduction and does not increase computing resource usage. Implementing this method does not impact any standards or specifications, nor does it change any encoding or decoding functionality. In a real 5G network, this method will contribute to network efficiency and ultimately save considerable computing resources.
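
One way such a negotiated reduction could work is a shared key dictionary agreed at session setup; the sketch below is purely illustrative (the field names and the dictionary mechanism are assumptions, not the paper's actual protocol):

```python
import json

# Hypothetical illustration: two network functions agree on a key dictionary
# when communication starts, then exchange JSON with shortened keys.
NEGOTIATED = {"subscriberIdentifier": "s", "servingNetworkName": "n", "authenticationType": "t"}
REVERSE = {v: k for k, v in NEGOTIATED.items()}

def shrink(payload: dict) -> str:
    """Serialize with negotiated short keys and compact separators."""
    return json.dumps({NEGOTIATED.get(k, k): v for k, v in payload.items()}, separators=(",", ":"))

def expand(wire: str) -> dict:
    """Restore the original long keys on the receiving side."""
    return {REVERSE.get(k, k): v for k, v in json.loads(wire).items()}

msg = {"subscriberIdentifier": "imsi-208930000000001", "authenticationType": "5G_AKA"}
wire = shrink(msg)
print(expand(wire) == msg)   # lossless round trip
print(len(wire) < len(json.dumps(msg, separators=(",", ":"))))   # smaller on the wire
```

The encoding stays valid JSON throughout, which is consistent with the paper's claim that no encoding or decoding functionality changes.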

Keywords: 5G, JSON, payload size, service-based interface

Procedia PDF Downloads 187
18725 Determination of Starting Design Parameters for Reactive-Dividing Wall Distillation Column Simulation Using a Modified Shortcut Design Method

Authors: Anthony P. Anies, Jose C. Muñoz

Abstract:

A new shortcut method for the design of reactive-dividing wall columns (RDWC) is proposed in this work. The RDWC is decomposed into its thermodynamically equivalent configuration, namely the Petlyuk column, which consists of a reactive prefractionator and an unreactive main fractionator. A modified FUGK (Fenske-Underwood-Gilliland-Kirkbride) shortcut distillation method, which incorporates the effect of reaction on the Underwood equations and the Gilliland correlation, is used to design the reactive prefractionator, while the conventional FUGK shortcut method is used to design the unreactive main fractionator. The shortcut method is applied to the synthesis of dimethyl ether (DME) through the liquid-phase dehydration of methanol, and the results were used as the starting design inputs for rigorous simulation in Aspen Plus V8.8. Mole purities of 99% DME in the distillate stream, 99% methanol in the side draw stream, and 99% water in the bottoms stream were obtained in the simulation, thereby making the proposed shortcut method applicable for the preliminary design of RDWC.
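
The "F" in the FUGK sequence is the Fenske equation for the minimum number of theoretical stages at total reflux; a minimal sketch for a binary split is shown below (the purities and relative volatility are illustrative, not the paper's DME system values):

```python
from math import log

def fenske_nmin(xd, xb, alpha):
    """Fenske minimum stages for a binary split: distillate light-key
    fraction xd, bottoms light-key fraction xb, relative volatility alpha."""
    return log((xd / (1 - xd)) * ((1 - xb) / xb)) / log(alpha)

# Illustrative sharp split at 99% purities with alpha = 2.5
n_min = fenske_nmin(xd=0.99, xb=0.01, alpha=2.5)
print(round(n_min, 1))  # roughly 10 stages for this split
```

The Underwood equations (minimum reflux) and Gilliland correlation (actual stages) would follow the same pattern; the modified method in the paper additionally folds the reaction term into those relations.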

Keywords: aspen plus, dimethyl ether, petlyuk column, reactive-dividing wall column, shortcut method, FUGK

Procedia PDF Downloads 194
18724 Parameter Estimation for the Mixture of Generalized Gamma Model

Authors: Wikanda Phaphan

Abstract:

The mixture generalized gamma distribution is a combination of two distributions, the generalized gamma distribution and the length-biased generalized gamma distribution, presented by Suksaengrakcharoen and Bodhisuwan in 2014. Its probability density function (pdf) is fairly complex, which creates problems in parameter estimation: the estimators cannot be calculated in closed form, so numerical estimation must be used. In this study, we present a new approach to parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. Data were generated by the acceptance-rejection method and used to estimate α, β, λ, and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. A Monte Carlo study was used to assess the estimators' performance: for sample sizes of 10, 30, and 100, the simulations were repeated 20 times in each case, and the estimators were evaluated by their mean squared errors and bias. The findings revealed that the EM algorithm produced estimates close to the true values, while the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods were less precise than those obtained via the EM algorithm.
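
The data-generation step named above, acceptance-rejection sampling, can be sketched in a few lines. This example targets a plain Gamma(2, 1) density on a truncated support for simplicity; the paper's mixture generalized gamma target is more involved:

```python
import math
import random

# Minimal acceptance-rejection sketch (illustrative target, not the paper's
# mixture generalized gamma): draw from an unnormalized density via a
# uniform proposal on (0, 10].
random.seed(0)

def target(x, alpha=2.0, lam=1.0):
    """Unnormalized Gamma(alpha, lam) density."""
    return x ** (alpha - 1) * math.exp(-lam * x)

M = 0.5   # upper bound on the target over (0, 10]: max of x*exp(-x) is 1/e ~ 0.368
samples = []
while len(samples) < 5000:
    x = random.uniform(0.0, 10.0)          # proposal draw
    if random.uniform(0.0, M) < target(x): # accept with probability target(x)/M
        samples.append(x)

mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to alpha/lam = 2 for the (barely) truncated gamma
```

The acceptance test is valid because M dominates the target everywhere on the proposal support; a tighter envelope raises the acceptance rate.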

Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method

Procedia PDF Downloads 220
18723 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of mean signals, extracted from ICA components corresponding to 15 well-known networks, to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-timing correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out of the time series. We applied independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with 21 components. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted in R on this dataset of 37 rows (subjects) and 15 features (mean signal per network).
The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable in both cases was the sensorimotor network I. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best network for discriminating between controls and early MS was the sensorimotor I. Similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
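
The evaluation protocol (75/25 split, RF and RBF-SVM, Gini-based importance ranking) can be sketched with scikit-learn on synthetic data of the same shape, 37 subjects by 15 network signals; the study used R and real rsFMRI features, so everything below is a stand-in:

```python
# Sketch of the split/train/rank protocol on synthetic data (37 x 15),
# not the study's rsFMRI dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))       # 15 mean network signals per subject
y = (X[:, 0] > 0).astype(int)       # plant feature 0 as the discriminant "network"
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print(rf.score(X_te, y_te), svm.score(X_te, y_te))   # held-out accuracies

# The Gini-based importance ranking should recover feature 0 as most predictive
print(int(np.argmax(rf.feature_importances_)))
```

In the study the analogous ranking (plus RFE for the SVM) singled out the sensorimotor network I as the top feature.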

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 241
18722 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required to solve the complex, global engineering problems frequently encountered in real life. The bigger and more complex a problem, the harder it is to solve; such problems are called NP-hard (nondeterministic polynomial time hard) in the literature. The main reasons metaheuristic algorithms are recommended for various problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications, and thanks to these features they can easily be embedded even in many hardware devices. Accordingly, this approach can also be used in trending application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach, which builds on chaos theory and helps the algorithm improve population diversity and convergence speed. It is based on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzee groups, attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the varied intelligence and sexual motivations of chimpanzees. However, the original algorithm has limited success in convergence rate and in escaping local optimum traps when solving high-dimensional problems.
Although ChOA and some of its variants employ strategies to overcome these problems, they are observed to be insufficient. Therefore, this study describes a newly expanded variant. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure addresses the slow convergence of ChOA and improves its accuracy on multidimensional problems, aiming at success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy of ChOA and addresses its slow convergence; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success on global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on 8 benchmark functions as well as 2 classical and constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison since its working model is similar. The obtained results show that the proposed algorithm performs better than or comparably to the compared algorithms.
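
The chaotic ingredient mentioned above typically means replacing pseudo-random draws with a chaotic map. A common choice is the logistic map; the sketch below uses it to spread an initial population over a search interval (the parameter values are illustrative, not taken from Ex-ChOA):

```python
# Chaos-based population initialization sketch: the logistic map with r = 4
# produces a well-spread, non-periodic sequence in (0, 1), which can improve
# population diversity compared to a plain uniform seed.
def logistic_map(x0, n, r=4.0):
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)   # chaotic regime for r = 4 and generic x0
        seq.append(x)
    return seq

# Map the chaotic sequence onto the search interval [lo, hi] to place agents.
lo, hi = -5.0, 5.0
positions = [lo + c * (hi - lo) for c in logistic_map(0.7, 10)]
print(all(lo <= p <= hi for p in positions))
```

In a full metaheuristic, the same map can also perturb position updates at each iteration to help agents escape local optima.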

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 77
18721 Application of an Optical Method Based on a Laser Device as Non-Destructive Testing for the Calculation of Mechanical Deformation

Authors: R. Daïra, V. Chalvidan

Abstract:

We present the speckle interferometry method for determining the deformation of a workpiece. This holographic imaging method uses a CCD camera for the simultaneous digital recording of two states, object and reference; the reconstruction is obtained numerically. This method has the advantage of being simpler than the methods currently available, and it does not suffer from the faults of in-line holographic configurations. Furthermore, it is entirely digital and avoids heavy analysis after recording the hologram. This work was carried out in the HOLO 3 laboratory (an optical metrology laboratory in Saint-Louis, France) and consists in qualitatively and quantitatively controlling the deformation of an object using a CCD camera connected to a computer equipped with fringe analysis software.
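
Fringe-analysis software of this kind commonly recovers phase by phase shifting; a standard four-step formula is sketched below on one simulated pixel (this is a generic textbook algorithm, not necessarily the one used by the HOLO 3 software):

```python
import math

# Four-step phase-shifting sketch: four intensity frames with pi/2 phase
# shifts recover the interference phase at each pixel via
#   phi = atan2(I4 - I2, I1 - I3).
def phase_from_frames(i1, i2, i3, i4):
    return math.atan2(i4 - i2, i1 - i3)

# Simulated pixel: background a, modulation b, true phase phi
a, b, phi = 2.0, 1.0, 0.8
frames = [a + b * math.cos(phi + k * math.pi / 2) for k in range(4)]
phase = phase_from_frames(*frames)
print(round(phase, 3))  # recovers phi = 0.8
```

Subtracting the phase maps of the object's two states yields the deformation-induced phase change, which converts to displacement through the optical geometry.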

Keywords: speckle, nondestructive testing, interferometry, image processing

Procedia PDF Downloads 497
18720 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach

Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani

Abstract:

This study focuses on the development of prediction models for ozone concentration time series, built on a chaotic approach. First, the chaotic nature of the time series is detected by means of the phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting. A traditional autoregressive linear model is also built for comparison, and an improvement to the local linear approximation method is introduced. The prediction models are applied to the hourly ozone time series observed at a benchmark station in Malaysia. Comparison of all models through the mean absolute error, root mean squared error, and correlation coefficient shows that the model with the improved prediction method is the best. Thus, the chaotic approach is a good basis for developing prediction models for ozone concentration time series.
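
The first step of such a chaotic approach is phase-space reconstruction by time-delay embedding; a minimal sketch follows (the delay and dimension are illustrative; in practice the Cao method chooses the embedding dimension):

```python
import math

# Time-delay embedding sketch: turn a scalar series into m-dimensional
# phase-space vectors [x(t), x(t+tau), ..., x(t+(m-1)*tau)].
def embed(series, m, tau):
    n = len(series) - (m - 1) * tau
    return [[series[i + j * tau] for j in range(m)] for i in range(n)]

hourly = [math.sin(0.3 * t) for t in range(100)]   # stand-in for hourly ozone data
vectors = embed(hourly, m=3, tau=2)
print(len(vectors), len(vectors[0]))  # 96 3
```

Local linear approximation then forecasts by fitting a linear map on the nearest neighbours of the current phase-space vector.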

Keywords: chaotic approach, phase space, Cao method, local linear approximation method

Procedia PDF Downloads 332
18719 Tumor Detection of Cerebral MRI by Multifractal Analysis

Authors: S. Oudjemia, F. Alim, S. Seddiki

Abstract:

This paper shows the application of multifractal analysis as an additional aid in cancer diagnosis. Medical image processing is an important discipline in which many methods seek solutions to real problems in medicine. In this work, we present the results of a multifractal analysis of brain MRI images, with the purpose of separating healthy from cancerous brain tissue. A nonlinear method based on the multifractal detrending moving average (MFDMA), a generalization of detrended fluctuation analysis (DFA), is used to detect abnormalities in these images. The proposed method separated the two types of brain tissue successfully. The choice of this nonlinear method is motivated by the complexity and irregularity of tumor tissue, which linear and classical nonlinear methods struggle to characterize completely. To demonstrate the performance of this method, we compared its results with those of the conventional box-counting method.
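
The comparison baseline, box counting, estimates a fractal dimension from how the number of occupied boxes scales with box size; a one-dimensional sketch is below (a real analysis would run on the 2D MRI slices):

```python
import math

# Box-counting sketch on a 1D point set: count occupied boxes at several
# scales and estimate the dimension as the slope of log(count) vs log(1/scale).
def box_count_dimension(points, scales):
    xs, ys = [], []
    for s in scales:
        boxes = {math.floor(p / s) for p in points}   # occupied boxes at scale s
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # least-squares slope
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# A densely filled interval has dimension close to 1
pts = [i / 1000.0 for i in range(1000)]
d = box_count_dimension(pts, scales=[0.2, 0.1, 0.05, 0.025])
print(round(d, 2))  # close to 1.0
```

Multifractal methods such as MFDMA generalize this single exponent to a full spectrum, which is why they characterize irregular tumor tissue more completely.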

Keywords: irregularity, nonlinearity, MRI brain images, multifractal analysis, brain tumor

Procedia PDF Downloads 443
18718 Effect of Sex and Breed on Live Weight of Adult Iranian Pigeons

Authors: Sepehr Moradi, Mehdi Asadi Rad

Abstract:

This study evaluates the live weight of adult pigeons with respect to sex, breed, their interaction, and some auxiliary variables in four breeds: Kabood, Tizpar, Parvazy, and Namebar. A random sample of 152 pigeons, comprising 76 male-female pairs of equal age, was studied. The birds were weighed on a scale with one-gram precision, and statistical software was used for the analysis. The mean live weights of adult males and females in the four breeds (Kabood, Tizpar, Parvazy, and Namebar, with 15, 20, 20, and 21 male and 20, 21, 18, and 17 female records, respectively) were 530±56, 388.75±32, 392±34, and 552±48 g for males, and 446±34, 342±32, 341±46, and 457±57 g for females. The difference in live weight between adult males and females was significant at the 1% level (P < 0.01). Differences in live weight among adult males across breeds were significant at the 5% level (P < 0.05). Differences in live weight among adult females between the Kabood, Parvazy, and Tizpar breeds were significant at the 5% level (P < 0.05), but the mean live weights of the Kabood and Namebar breeds, and of the Parvazy and Tizpar breeds, did not differ significantly. The results showed that the highest mean live weight belonged to Namebar males and the lowest to Parvazy females.

Keywords: Iranian Native Pigeons, adult weight, live weight, adult pigeons

Procedia PDF Downloads 202
18717 Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds

Authors: Hesheng Wang, Haoyu Wang, Chungang Zhuang

Abstract:

Estimating the 6D pose of objects is a core step for robot bin-picking tasks. The problem is that various objects are usually randomly stacked with heavy occlusion in real applications. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To solve the ambiguity of symmetric pose, we propose a labeling method to help the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.

Keywords: pose estimation, deep learning, point cloud, bin-picking, 3D computer vision

Procedia PDF Downloads 161
18716 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

The aerial photogrammetry of shallow-water bottoms has the potential to be an efficient high-resolution survey technique for shallow-water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from a systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate a correction factor, which is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of them: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term when calculating the correction factor, the presented method performs well at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited; according to our numerical experiment, it also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise. Overall, the accuracy of a refraction correction method depends on factors such as the location, image acquisition, and GPS measurement conditions, and the most effective method can be selected by statistical model selection (e.g., leave-one-out cross-validation).
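
The core of the empirical correction, a regression of measured true depth against SfM-MVS apparent depth, can be sketched as follows (the depth values are synthetic; a purely refraction-driven error would give a factor near the refractive index of water, 1.34):

```python
# Empirical correction-factor sketch: fit true = factor * apparent through
# the origin by least squares, then apply the factor to apparent depths.
apparent = [0.30, 0.52, 0.75, 0.98, 1.20]   # m, SfM-MVS apparent depths (too shallow)
true =     [0.40, 0.70, 1.00, 1.31, 1.62]   # m, field-measured depths (synthetic)

factor = sum(a * t for a, t in zip(apparent, true)) / sum(a * a for a in apparent)
corrected = [factor * a for a in apparent]
print(round(factor, 2))  # ~1.34 for this synthetic example
```

The paper's comparison methods correspond to factor = 1 (no correction), a fixed factor of 1.34, and a regression with an additional offset term.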

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 300
18715 Spectrophotometric Determination of Phenylephrine Hydrochloride by Coupling with Diazotized 2,4-Dinitroaniline

Authors: Sulaiman Gafar Muhamad

Abstract:

A rapid spectrophotometric method for the micro-determination of phenylephrine hydrochloride (PHE) has been developed. The proposed method involves the coupling of phenylephrine-HCl with diazotized 2,4-dinitroaniline in alkaline medium, measured at λmax 455 nm. Under the optimum conditions, Beer's law was obeyed in the range of 1.0-20 μg/ml of PHE with a molar absorptivity of 1.915 × 10⁴ L·mol⁻¹·cm⁻¹, a relative error of 0.015, and a relative standard deviation of 0.024%. The method has been applied successfully to estimate phenylephrine-HCl in pharmaceutical preparations (nose drops and syrup).
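
A quick Beer-Lambert check shows the reported sensitivity is consistent with convenient mid-scale absorbances over the stated range (this assumes a standard 1 cm cell and the literature molar mass of phenylephrine-HCl, about 203.67 g/mol; neither is stated in the abstract):

```python
# Beer-Lambert sanity check: A = epsilon * b * c, with c in mol/L.
epsilon = 1.915e4            # L mol^-1 cm^-1, from the abstract
b = 1.0                      # cm, assumed path length
mw = 203.67                  # g/mol, literature value for phenylephrine-HCl

conc_ug_ml = 10.0            # mid-range of the stated 1.0-20 ug/ml interval
c = conc_ug_ml * 1e-3 / mw   # 10 ug/ml = 0.010 g/L -> mol/L
absorbance = epsilon * b * c
print(round(absorbance, 2))  # ~0.94, a convenient mid-scale absorbance
```

At the range endpoints (1 and 20 μg/ml) the same calculation gives roughly 0.09 and 1.9 absorbance units.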

Keywords: diazo-coupling, 2, 4-dinitroaniline, phenylephrine-HCl, spectrophotometry

Procedia PDF Downloads 258
18714 Rational Probabilistic Method for Calculating Thermal Cracking Risk of Mass Concrete Structures

Authors: Naoyuki Sugihashi, Toshiharu Kishi

Abstract:

In Japan, the probability of occurrence of thermal cracks in mass concrete is evaluated with a cracking probability diagram that represents the relationship between the thermal cracking index and the probability of cracking in the actual structure. In this paper, we propose a method to calculate the cracking probability directly, following probabilistic theory, by modeling the variance of tensile stress and tensile strength. In this method, the relationships between the variances of tensile stress and tensile strength, the thermal cracking index, and the cracking probability are formulated and presented. In addition, the standard deviations of tensile stress and tensile strength were identified, and a method of calculating the cracking probability under typical controlled construction conditions was demonstrated.
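
The underlying calculation can be sketched numerically: if tensile stress S and tensile strength R are modeled as independent normal variables, the cracking probability is P(S > R) = Φ(−(μ_R − μ_S)/√(σ_S² + σ_R²)). The numbers below are illustrative, not the paper's calibrated values:

```python
import math

# Cracking probability sketch for normally distributed stress S and
# strength R: P(S > R) from the standard normal tail.
def cracking_probability(mu_stress, sd_stress, mu_strength, sd_strength):
    z = (mu_strength - mu_stress) / math.hypot(sd_stress, sd_strength)
    return 0.5 * math.erfc(z / math.sqrt(2))   # = 1 - Phi(z)

# Example with thermal cracking index (strength/stress) = 2.5/2.0 = 1.25
p = cracking_probability(mu_stress=2.0, sd_stress=0.3, mu_strength=2.5, sd_strength=0.4)
print(round(p, 3))  # ~0.159: probability that stress exceeds strength
```

Sweeping the cracking index while holding the standard deviations fixed reproduces the shape of a cracking probability diagram.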

Keywords: thermal crack control, mass concrete, thermal cracking probability, durability of concrete, calculating method of cracking probability

Procedia PDF Downloads 348
18713 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Real-estate value estimation, difficult for laypeople, is usually performed by specialists. This paper presents an automated estimation process, based on big data and machine-learning technology, that calculates the influence of building conditions on real-estate prices. The present study analyzed actual building sales data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. From that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for deriving machine-learning prototypes. The prediction model is formulated from previous examples; when new examples are applied, it analyses and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.

Keywords: apartment complex, big data, life-cycle building value analysis, machine learning

Procedia PDF Downloads 374
18712 A New Family of Integration Methods for Nonlinear Dynamic Analysis

Authors: Shuenn-Yih Chang, Chiu-LI Huang, Ngoc-Cuong Tran

Abstract:

A new family of structure-dependent integration methods, whose coefficients of the difference equation for the displacement increment are functions of the initial structural properties and the time step size, is proposed in this work. This family simultaneously combines controllable numerical dissipation, an explicit formulation, and unconditional stability. In general, its numerical dissipation can be continuously controlled by a parameter, and zero damping is achievable. In addition, it can apply high-frequency damping to suppress or even remove the spurious oscillations of high-frequency modes, whereas low-frequency modes are integrated very accurately owing to the almost-zero damping at those frequencies. It is shown herein that the proposed family can have exactly the same numerical properties as the HHT-α method for linear elastic systems, while still preserving the most important property of a structure-dependent integration method: an explicit formulation for each time step. Consequently, it can save huge computational effort in solving inertial problems compared to the HHT-α method. In fact, numerical experiments reveal that the CPU time consumed by the proposed family is only about 1.6% of that consumed by the HHT-α method for a 125-DOF system, reducing to about 0.16% for a 1000-DOF system. Apparently, the saving of computational effort is very significant.

Keywords: structure-dependent integration method, nonlinear dynamic analysis, unconditional stability, numerical dissipation, accuracy

Procedia PDF Downloads 641
18711 The Neuroscience Dimension of Juvenile Law Effectuates a Comprehensive Treatment of Youth in the Criminal System

Authors: Khushboo Shah

Abstract:

Categorical bans on the death penalty and life-without-parole sentences for juvenile offenders in a growing number of countries have established a new era in juvenile jurisprudence, brought about over the last ten years by the integration of growing knowledge in cognitive neuroscience and an appreciation of the inherent differences between adults and adolescents. This evolving understanding of being a child in the criminal system can be aptly reflected through policies that incorporate the mitigating traits of youth. First, the presentation will delineate the relevant structures in cognitive neuroscience, focusing in particular on the prefrontal cortex, the amygdala, and the basal ganglia. These key anatomical structures of the brain are linked to three mitigating adolescent traits, an underdeveloped sense of responsibility, an increased vulnerability to negative influences, and transitory personality traits, which establish why juveniles have lessened culpability. The discussion will detail how an underdeveloped prefrontal cortex results in the heightened emotional angst and high-energy, risky behavior characteristic of adolescence, and how the amygdala, the emotional center of the brain, governs emotional expression in ways that make teens susceptible to negative influences. Based on this greater understanding, policies should adequately reflect adolescent physiology and psychology in the criminal system. However, it is important that these views be appropriately weighted when considering the treatment of children in the law. To ensure this balance is appropriately struck, policies must incorporate the distinctive traits of youth in sentencing and legal considerations while refraining from the potential fallacy of absolving a juvenile offender of guilt and culpability.
Accordingly, three policies will demonstrate how these results can be achieved: (1) eliminate the housing of juvenile offenders in the adult prison system, (2) mandate fitness hearings for all transfers of juveniles to adult criminal court, and (3) use post-disposition review as a rehabilitation method for juvenile offenders. Ultimately, this interdisciplinary approach of science and law allows for a better understanding of adolescent psychological and social functioning and can effectuate better legal outcomes for juveniles tried as adults.

Keywords: criminal law, Juvenile Justice, interdisciplinary, neuroscience

Procedia PDF Downloads 329
18710 A Physical Treatment Method as a Prevention Method for Barium Sulfate Scaling

Authors: M. A. Salman, G. Al-Nuwaibit, M. Safar, M. Rughaibi, A. Al-Mesri

Abstract:

Barium sulfate (BaSO₄) is a hard scale that usually precipitates on equipment surfaces in many industrial systems, such as oil and gas production, desalination, and cooling and boiler operation. It is extremely resistant to both chemical and mechanical cleaning, making BaSO₄ a problematic and expensive scale. Although barium ions are present in most natural waters at very low concentrations, as low as 0.008 mg/l, they can cause scaling problems in the presence of high sulfate ion concentrations or when incompatible waters are mixed, as with oil produced water. The scaling potential of BaSO₄ was calculated for seawater at the intakes of seven desalination plants in Kuwait, for brine water, and for Kuwait oil produced water, and the best location with regard to barium sulfate scaling was then reported. Finally, a physical treatment method (magnetic treatment) and a chemical treatment method were applied to control BaSO₄ scaling, using saturated solutions at different operating temperatures, flow velocities, feed pH values, and magnetic field strengths. The results of the two methods are discussed, and the more economical one with reasonable performance, the physical treatment method, is recommended.

Keywords: magnetic field strength, flow velocity, retention time, barium sulfate

Procedia PDF Downloads 268
18709 Dynamic Correlations and Portfolio Optimization between Islamic and Conventional Equity Indexes: A Vine Copula-Based Approach

Authors: Imen Dhaou

Abstract:

This study examines the conditional Value at Risk by applying a GJR-EVT-copula model and finds the optimal portfolio for eight Dow Jones Islamic-conventional pairs. Our methodology consists of modeling the data with a bivariate GJR-GARCH model, from which we extract the filtered residuals, and then applying the peaks-over-threshold (POT) model to fit the residual tails in order to model the marginal distributions. We then use pair copulas to model the dependence structure of the optimal portfolio risk. Finally, with Monte Carlo simulations, we estimate the Value at Risk (VaR) and the conditional Value at Risk (CVaR). The empirical results show the VaR and CVaR values for an equally weighted portfolio of Dow Jones Islamic-conventional pairs. In sum, we found that the optimal investment concentrates on the Islamic-conventional US market index pair, which receives a high investment proportion, while all other index pairs receive low proportions. These results carry practical implications for portfolio managers and policymakers concerning optimal asset allocation, portfolio risk management, and the diversification benefits of these markets.
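
The final Monte Carlo step, estimating VaR and CVaR for an equally weighted pair, can be sketched with a simple return model (independent Gaussian returns stand in for the paper's GJR-GARCH-EVT margins joined by a pair copula; the return parameters are illustrative):

```python
import random
import statistics

# Monte Carlo VaR/CVaR sketch for an equally weighted two-asset portfolio.
random.seed(1)
n = 50_000
losses = []
for _ in range(n):
    r1 = random.gauss(0.0005, 0.010)        # simulated daily return, asset 1
    r2 = random.gauss(0.0004, 0.012)        # simulated daily return, asset 2
    losses.append(-(0.5 * r1 + 0.5 * r2))   # equally weighted portfolio loss

losses.sort()
alpha = 0.99
tail_start = int(alpha * n)
var = losses[tail_start]                        # 99% Value at Risk
cvar = statistics.mean(losses[tail_start:])     # expected loss beyond the VaR
print(var < cvar)   # CVaR is never below VaR
```

In the paper, the simulated pair draws come from the fitted copula and EVT margins rather than independent normals, which is what captures tail dependence between the Islamic and conventional indexes.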

Keywords: CVaR, Dow Jones Islamic index, GJR-GARCH-EVT-pair copula, portfolio optimization

Procedia PDF Downloads 256
18708 The Relationship between Risk and Capital: Evidence from Indian Commercial Banks

Authors: Seba Mohanty, Jitendra Mahakud

Abstract:

Capital ratio is one of the major indicators of the stability of commercial banks. Given its pervasive importance, regulators and policy makers have over the years focused on maintaining a particular level of capital ratio to minimize solvency and liquidation risk. In this context, it is important to identify the relationship between capital and risk and to find out the factors that determine the capital ratios of commercial banks. The study examines the relationship between capital and risk for the commercial banks operating in India. Other bank-specific variables, such as bank size, deposits, profitability, non-performing assets, bank liquidity, net interest margin, loan loss reserves, deposit variability, and regulatory pressure, are also considered in the analysis. The period of study is 1997-2015, i.e. the post-liberalization period. To identify the impact of the financial crisis and the implementation of Basel II on the capital ratio, we have divided the whole period into two sub-periods, 1997-2008 and 2008-2015. This study considers all three types of commercial banks, i.e. public sector, private sector, and foreign banks, that have continuous data for the whole period. The main sources of data are the Prowess database maintained by the Centre for Monitoring Indian Economy (CMIE) and Reserve Bank of India publications. We use a simultaneous equation model, and more specifically the Two-Stage Least Squares method, to find out the relationship between capital and risk. From the econometric analysis, we find that capital and risk affect each other simultaneously, and this is consistent across the time periods and across the types of banks. Moreover, regulation has a positive significant impact on the ratio of capital to risk-weighted assets, but no significant impact on the banks' risk-taking behaviour. Our empirical findings also suggest that size has a negative impact on capital and risk, indicating that larger banks increase their capital less than other banks, consistent with the too-big-to-fail hypothesis. This study contributes to the existing body of literature by establishing a strong relationship between capital and risk in an emerging economy, where the banking sector plays a major role in financial development. Further, this study may be considered a primary study for identifying the macroeconomic factors affecting risk and capital in India.
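The Two-Stage Least Squares idea can be sketched on synthetic data. The variable names (capital, risk, size) and the data-generating process below are purely illustrative, not the paper's specification or dataset:

```python
# Minimal numpy sketch of Two-Stage Least Squares for a system where risk is
# endogenous in the capital equation. Synthetic, illustrative data only.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

size = rng.normal(size=n)              # exogenous regressor (bank size)
z = rng.normal(size=n)                 # instrument for risk
u = rng.normal(size=n)                 # common shock: makes risk endogenous
risk = 0.8 * z + u + rng.normal(size=n)
capital = 1.5 * risk - 0.4 * size + u + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)

# Stage 1: project the endogenous regressor on instruments + exogenous vars
Z = np.column_stack([ones, z, size])
risk_hat = Z @ ols(Z, risk)

# Stage 2: regress capital on the fitted values from stage 1
X2 = np.column_stack([ones, risk_hat, size])
beta = ols(X2, capital)                # [const, risk coef, size coef]

# Naive OLS for comparison: biased because risk and the error share u
naive = ols(np.column_stack([ones, risk, size]), capital)
```

The 2SLS coefficient on risk recovers the true value (1.5 here), while the naive OLS estimate is pulled away by the shared shock, which is the motivation for the simultaneous-equations treatment.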

Keywords: capital, commercial bank, risk, simultaneous equation model

Procedia PDF Downloads 328
18707 Estimation and Validation of Free Lime Analysis of Clinker by Quantitative Phase Analysis Using X-Ray Diffraction

Authors: Suresh Palla, Kalpna Sharma, Gaurav Bhatnagar, S. K. Chaturvedi, B. N. Mohapatra

Abstract:

Determining the free lime content is especially important for judging the reactivity of the raw materials and the clinker quality. The free lime limit is not the same for all cements; it depends on several factors, especially the temperature reached during clinker burning and the grain size distribution of the cement after grinding. Estimation of free lime by the conventional method is influenced by the presence of portlandite and misrepresents the actual free lime content of the clinker under quality-control conditions. To verify product quality against standard specifications, a reliable, precise, and highly reproducible way to quantify the relative phase abundances in Portland cement clinker and Portland cements is X-ray diffraction (XRD) combined with the Rietveld method. In the present study, a methodology is proposed that uses XRD to validate the free lime results obtained by the conventional method. The XRD and TG/DTA results confirm the presence of portlandite in the clinker, supporting the decision on the free lime results obtained by the conventional method.
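The quantification step of Rietveld QPA converts refined scale factors into weight fractions via the relation W_i = S_i(ZMV)_i / Σ_j S_j(ZMV)_j. A hedged sketch; the phase list, scale factors, and crystal data below are invented for illustration, not refinement results from this study:

```python
# Sketch of the Rietveld quantitative-phase-analysis step: scale factors to
# weight fractions. All numbers are hypothetical; free lime is the CaO phase.
def weight_fractions(phases):
    """phases: name -> (scale S, formula units Z, formula mass M, cell volume V)."""
    szmv = {name: s * z * m * v for name, (s, z, m, v) in phases.items()}
    total = sum(szmv.values())
    return {name: val / total for name, val in szmv.items()}

clinker = {  # hypothetical refinement output
    "alite (C3S)":     (1.20e-4, 18, 228.32, 2190.0),
    "belite (C2S)":    (4.00e-4,  4, 172.24,  345.0),
    "free lime (CaO)": (9.00e-4,  4,  56.08,  111.3),
    "portlandite":     (3.00e-4,  1,  74.09,   54.5),
}
wf = weight_fractions(clinker)   # weight fraction per phase, summing to 1
```

With these illustrative inputs the free lime comes out at roughly 2 wt%, the order of magnitude typically of interest for clinker quality control.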

Keywords: free lime, quantitative phase analysis, conventional method, X-ray diffraction

Procedia PDF Downloads 138
18706 Using a Derivative-Free Method to Improve the Error Estimation of Numerical Quadrature

Authors: Chin-Yun Chen

Abstract:

Numerical integration is an essential tool for deriving different physical quantities in engineering and science. The effectiveness of a numerical integrator depends on different factors, of which the crucial one is error estimation. This work presents an error estimator that incorporates a derivative-free method to improve the performance of verified numerical quadrature.
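One widely used derivative-free error estimate, shown here as general background rather than the estimator developed in this work, compares a composite rule with its own refinement: for the trapezoidal rule, |T(h/2) − T(h)|/3 approximates the error of T(h/2) with no derivative information. A minimal sketch with an illustrative integrand:

```python
# Derivative-free (Richardson-style) error estimation for the composite
# trapezoidal rule: halve the step and compare. Integrand is illustrative.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

f = math.sin
a, b = 0.0, math.pi
exact = 2.0                                  # known value of the test integral

coarse = trapezoid(f, a, b, 64)
fine = trapezoid(f, a, b, 128)
err_est = abs(fine - coarse) / 3.0           # estimate of fine's error
err_true = abs(fine - exact)
```

Because the trapezoidal error is O(h²), halving h cuts it by a factor of four, which is exactly what the divisor 3 in the estimate exploits.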

Keywords: numerical quadrature, error estimation, derivative free method, interval computation

Procedia PDF Downloads 464
18705 A Concept Study to Assist Non-Profit Organizations to Better Target Developing Countries

Authors: Malek Makki

Abstract:

The main purpose of this research study is to assist non-profit organizations (NPOs) to better segment a group of least developed countries and to optimally target the neediest areas, so that the aid provided makes a positive and lasting difference. We applied international marketing and strategy approaches to segment a sub-group of candidates among the 151 countries identified by the UN G77 list, and we point out the priority areas. We use reliable and well-known economic, geographic, demographic, and behavioral criteria. These criteria can be objectively estimated and updated, so that follow-up can be performed to measure the outcomes of any program. We selected 12 socio-economic criteria that complement each other: GDP per capita, GDP growth, industry value added, exports per capita, fragile state index, corruption perceptions index, environmental protection index, ease of doing business index, global competitiveness index, Internet use, public spending on education, and employment rate. A weight was attributed to each variable to reflect the relative importance of each criterion within a country. Care was taken to collect the most recent available data from trusted, well-known international organizations (IMF, WB, WEF, and WTO). Construct equivalence was verified to compare the same variables across countries. The combination of all these weighted criteria provides a global index representing each country's level of development. An absolute index combining wars and risks was introduced to exclude or include a country on the basis of conflict and state collapse. The final step, applied to the included countries, consists of a benchmarking method to select the segment of countries and the percentile of each criterion. The results of this study led us to exclude 16 countries for risk and security reasons. We also excluded four countries because they lack reliable and complete data. The remaining countries were ranked by percentile of their global index, and we identified the neediest countries and the areas where aid is most required, helping any NPO prioritize where to implement its programs. This new concept is based on defined, actionable, accessible, and accurate variables by which NPOs can implement their programs, and it can be extended to for-profit companies carrying out their corporate social responsibility activities.
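The scoring step described above can be sketched as a weighted composite index: normalise each criterion across countries, flip the ones where lower values are better, weight, and sum. The countries, criteria values, and weights below are entirely fictitious:

```python
# Hedged sketch of a weighted composite development index using min-max
# normalisation. All data are invented for illustration.
def composite_index(data, weights, higher_is_better):
    names = list(next(iter(data.values())).keys())
    lo = {c: min(row[c] for row in data.values()) for c in names}
    hi = {c: max(row[c] for row in data.values()) for c in names}
    scores = {}
    for country, row in data.items():
        s = 0.0
        for c in names:
            norm = (row[c] - lo[c]) / (hi[c] - lo[c])   # min-max to [0, 1]
            if not higher_is_better[c]:
                norm = 1.0 - norm                        # flip "lower is better"
            s += weights[c] * norm
        scores[country] = s
    return scores

data = {  # fictitious countries and criterion values
    "A": {"gdp_pc":  900, "internet_use": 12, "risk_score": 70},
    "B": {"gdp_pc": 2500, "internet_use": 40, "risk_score": 45},
    "C": {"gdp_pc": 6000, "internet_use": 75, "risk_score": 30},
}
weights = {"gdp_pc": 0.5, "internet_use": 0.3, "risk_score": 0.2}
higher = {"gdp_pc": True, "internet_use": True, "risk_score": False}
scores = composite_index(data, weights, higher)
```

A higher score means a higher level of development, so NPOs would target the lowest-scoring countries first; percentile ranking then follows directly from sorting the scores.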

Keywords: developing countries, international marketing, non-profit organization, segmentation

Procedia PDF Downloads 305
18704 Comparison of Live Weight of Pure- and Mixed-Race Tizpar 30-Day Squabs

Authors: Sepehr Moradi, Mehdi Asadi Rad

Abstract:

The aim of this study is to evaluate and compare the live weight of pure- and mixed-race Tizpar 30-day pigeons with respect to sex, race, and some auxiliary variables. In this paper, 70 pigeons, forming 35 male-female pairs of equal age, are studied randomly. A natural incubation was carried out for each pair. All chicks produced were weighed at 30 days of age, before and after fasting, on a scale with one-gram precision. A covariance analysis was used since there were several auxiliary variables and unequal numbers of observations. SAS software was used for the statistical analysis. The mean live weight of the pure race (Tizpar-Tizpar), with 12 records, was 182.3±60.9 g, while the mixed races Tizpar-Kabood, Tizpar-Parvazy, Tizpar-Namebar, Kabood-Tizpar, Namebar-Tizpar, and Parvazy-Tizpar, with 10, 10, 8, 6, 12, and 12 records, weighed 114.3±71.6, 210.6±71.7, 353.2±86, 520.8±81.5, 288.3±65.6, and 382.6±70.4 g, respectively. The effects of sex, race, and some auxiliary variables were significant at the 1% level (P < 0.01). The difference in 30-day live weight between the Tizpar-Tizpar pure race and the Tizpar-Namebar and Parvazy-Tizpar mixed races was significant at the 5% level (P < 0.05), and with the Kabood-Tizpar mixed race at the 1% level (P < 0.01), but the differences with the Tizpar-Kabood, Namebar-Tizpar, and Tizpar-Parvazy mixed races were not significant. The results showed that the highest and lowest live weights belonged to Kabood-Tizpar and Tizpar-Kabood, respectively.
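The covariance-analysis idea, comparing group means of 30-day weight after adjusting for a continuous covariate, can be sketched as a dummy-variable regression. The data below are synthetic and the covariate is a hypothetical choice (dam weight); this is not the paper's SAS model or its records:

```python
# Hedged numpy sketch of analysis of covariance via dummy-variable regression:
# estimate the group difference in 30-day squab weight at a common covariate
# value. Synthetic, illustrative data only.
import numpy as np

rng = np.random.default_rng(1)
n_per = 12
groups = ["Tizpar-Tizpar", "Kabood-Tizpar"]

# Synthetic records: covariate (hypothetical dam weight, g) and squab weight
cov = rng.normal(300.0, 20.0, size=2 * n_per)
true_effect = {"Tizpar-Tizpar": 180.0, "Kabood-Tizpar": 520.0}
y = np.concatenate([
    true_effect[g] + 0.5 * (cov[i * n_per:(i + 1) * n_per] - 300.0)
    + rng.normal(0, 10.0, n_per)
    for i, g in enumerate(groups)
])

# Design matrix: intercept, group dummy (Kabood-Tizpar = 1), centred covariate
d = np.repeat([0.0, 1.0], n_per)
X = np.column_stack([np.ones(2 * n_per), d, cov - cov.mean()])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted_difference = beta[1]   # group gap at the mean covariate value
```

Adjusting for the covariate removes the part of the raw group difference that merely reflects unequal covariate distributions, which is why ANCOVA suits unbalanced designs like this one.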

Keywords: squabs, Tizpar race, 30-day live weight, pigeons

Procedia PDF Downloads 178
18703 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution

Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang

Abstract:

Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance under inlet distortions. In PCM calculation, the sub-compressors' outlet static pressure is assumed to be uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that corrects this error. The revised method employs the energy, momentum, and continuity equations to obtain the needed parameters, replacing the equal-static-pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted with the revised calculation method and used to evaluate the method's effectiveness. Validating the results against experimental data, it is found that, although small deviations occur, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method has great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.
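The flavour of replacing the equal-static-pressure assumption with conservation laws can be illustrated by a mixed-out average of two sub-compressor streams. The sketch below is a deliberately simplified incompressible, constant-area, constant-density case using continuity and momentum only (energy is trivial at constant density); the paper's compressible method also carries the full energy equation, and all numbers are invented:

```python
# Illustrative incompressible mixed-out average of two parallel streams over
# a shared constant-area duct, using continuity and momentum conservation.
RHO = 1.2   # kg/m^3, assumed constant density

def mixed_out(p1, v1, a1, p2, v2, a2):
    """Return mixed-out static pressure and velocity over area a1 + a2."""
    a = a1 + a2
    v_m = (v1 * a1 + v2 * a2) / a                  # continuity (constant rho)
    impulse = (p1 + RHO * v1**2) * a1 + (p2 + RHO * v2**2) * a2
    p_m = (impulse - RHO * v_m**2 * a) / a         # axial momentum balance
    return p_m, v_m

# Two hypothetical sub-compressor exit states (Pa, m/s, m^2)
p_m, v_m = mixed_out(101_000.0, 150.0, 0.5, 99_000.0, 120.0, 0.5)
```

Even this toy case shows the point: the common downstream static pressure emerges from the momentum balance instead of being imposed a priori on both sub-compressors.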

Keywords: parallel compressor model (pcm), revised calculation method, inlet distortion, outlet unequal pressure distribution

Procedia PDF Downloads 332
18702 Experimental Investigation of the Performance of the Anode Side of a PEM Fuel Cell Coated with YSZ+SDC by the Spin Method

Authors: Gürol Önal, Kevser Dinçer, Salih Yayla

Abstract:

In this study, the performance of a proton exchange membrane (PEM) fuel cell was experimentally investigated. The anode side of the PEM fuel cell was coated by the spin method using YSZ+SDC. A solution of 0.1 g yttria-stabilized zirconia (YSZ), 0.1 g samarium-doped ceria (SDC), and 10 mL methanol was prepared. This solution was drawn into a micro-pipette, and the anode side of the PEM fuel cell was then coated with YSZ+SDC by the spin method. In the experimental study, the current, voltage, and power performance before and after coating were recorded and compared. It was found that the efficiency of the PEM fuel cell increases after coating with YSZ+SDC.

Keywords: fuel cell, Polymer Electrolyte Membrane (PEM), membrane, spin method

Procedia PDF Downloads 562
18701 Analysis of CO₂ Capture Products from Carbon Capture and Utilization Plant

Authors: Bongjae Lee, Beom Goo Hwang, Hye Mi Park

Abstract:

CO₂ capture products manufactured in a Carbon Capture and Utilization (CCU) plant, which collects CO₂ directly from power plants, require accurate measurement of the amount of CO₂ captured. For this purpose, two weight loss tests were carried out, and one sample was analyzed using a carbon dioxide quantification device. First, ignition loss analysis was performed by measuring the weight of the sample at 550°C after the first conversion and then confirming the loss on ignition at 950°C. Second, in the thermogravimetric analysis, the run was divided into two sections, 40 to 500°C and 500 to 800°C, to determine the mass loss in each. The results of the ignition loss analysis and the thermogravimetric analysis were found to be almost identical. However, the temperature used in the ignition loss method, 950°C, was 150°C higher than the 800°C of the thermogravimetric method, so the measured weight loss was 3 to 4% higher by the ignition loss method. In addition, the tendency of the CO₂ content to increase with longer reaction times was confirmed by both methods. Third, the results of the wet titration method using the carbon dioxide quantification device were significantly lower than those of the weight loss methods. Based on the results obtained through these three analysis methods, we will establish a method to determine the exact amount of CO₂. Acknowledgements: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (No. 20152010201850).
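The arithmetic behind the thermogravimetric determination can be sketched as follows, assuming the 500-800°C mass loss is CO₂ released by carbonate decomposition (CaCO₃ → CaO + CO₂). The sample masses below are invented:

```python
# Hedged sketch: captured CO2 as weight percent from the 500-800 °C mass loss
# of a thermogravimetric run, assuming carbonate decomposition. Illustrative
# sample masses only.
M_CO2 = 44.01      # g/mol
M_CACO3 = 100.09   # g/mol

def co2_content(mass_500C_mg: float, mass_800C_mg: float) -> float:
    """Weight percent CO2, referred to the 500 °C sample mass."""
    loss = mass_500C_mg - mass_800C_mg
    return 100.0 * loss / mass_500C_mg

def carbonate_content(co2_wt_pct: float) -> float:
    """Equivalent CaCO3 weight percent for a given CO2 weight percent."""
    return co2_wt_pct * M_CACO3 / M_CO2

co2 = co2_content(50.00, 44.75)     # 5.25 mg lost over the window
caco3 = carbonate_content(co2)      # equivalent carbonate content
```

Comparing this TGA-derived CO₂ figure with the ignition loss and wet titration results is exactly the cross-check the abstract describes.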

Keywords: carbon capture and utilization, CCU, CO2, CO2 capture products, analysis method

Procedia PDF Downloads 218
18700 A Proposed Modification of the California Pipe Method for Inclined Pipes

Authors: Wojciech Dąbrowski, Joanna Bąk, Laurent Solliec

Abstract:

Technical and technological progress and the constant development of methods and devices in sanitary engineering are indispensable today. Sanitary engineering involves flow measurement for both water and wastewater, and precise measurement is pivotal for further actions such as monitoring. There are many flow measurement methods and techniques in sanitary engineering: weirs and flumes are well known and commonly used, but alternatives also exist, ranging from very simple methods to high-technology solutions. An old method combined with modern technique can be more useful than before. This paper describes a substitute method of flow gauging, the California pipe method, and proposes a modification of this method for use with an inclined pipe. Examining the possibility of improving and developing old methods is the direction of this investigation.
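For context, the classical California pipe (Vanleer) discharge relation for a horizontal pipe flowing partially full is commonly quoted as Q = 8.69·(1 − a/d)^1.88·d^2.48, with Q in ft³/s and both the inside diameter d and the air gap a (pipe crown to water surface at the outlet) in feet. A sketch of the original formula only; the paper's inclined-pipe modification is not reproduced here:

```python
# Classical California pipe (Vanleer) formula for a horizontal, partially
# full discharge pipe. US customary units as in the original relation.
def california_pipe_q(a_ft: float, d_ft: float) -> float:
    """Discharge in ft^3/s given air gap a and inside diameter d, in feet."""
    if not 0.0 <= a_ft <= d_ft:
        raise ValueError("air gap must lie between 0 and the pipe diameter")
    return 8.69 * (1.0 - a_ft / d_ft) ** 1.88 * d_ft ** 2.48

q = california_pipe_q(a_ft=0.25, d_ft=0.5)   # half-full 6-inch pipe
```

The appeal of the method is visible in the formula: a single depth reading at the pipe outlet yields the flow rate, with no weir or flume installation.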

Keywords: California pipe, sewerage, flow rate measurement, water, wastewater, improve, modification, hydraulic monitoring, stream

Procedia PDF Downloads 438
18699 Modeling and Tracking of Deformable Structures in Medical Images

Authors: Said Ettaieb, Kamel Hamrouni, Su Ruan

Abstract:

This paper presents a new method based on both the Active Shape Model (ASM) and a priori knowledge about spatio-temporal shape variation for tracking deformable structures in medical imaging. The main idea is to exploit the a priori knowledge of shape that exists in ASM and to introduce new knowledge about the shape variation over time. The aim is to define a new, more stable method allowing the reliable detection of structures whose shapes change considerably over time. This method can also be used for three-dimensional segmentation by replacing the temporal component with the third spatial axis (z). The proposed method is applied to the functional and morphological study of the cardiac pump. The functional aspect was studied through temporal sequences of scintigraphic images, and the morphology was studied through MRI volumes. The results obtained are encouraging and show the performance of the proposed method.
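The statistical shape model underlying ASM can be sketched in a few lines: stack aligned landmark shapes, compute the mean and the principal modes of variation, and generate a new shape as mean + P·b. The training shapes below are toy ellipses, not medical data, and Procrustes alignment is assumed already done:

```python
# Minimal numpy sketch of the point distribution model behind ASM.
# Toy, pre-aligned training shapes; illustrative only.
import numpy as np

rng = np.random.default_rng(7)
n_landmarks, n_shapes = 32, 50
t = np.linspace(0, 2 * np.pi, n_landmarks, endpoint=False)

# Toy training set: ellipses with randomly varying axis lengths,
# each shape stored as (x_1..x_n, y_1..y_n)
shapes = np.stack([
    np.concatenate([(1.0 + 0.2 * rng.normal()) * np.cos(t),
                    (0.5 + 0.1 * rng.normal()) * np.sin(t)])
    for _ in range(n_shapes)
])                                    # (n_shapes, 2 * n_landmarks)

mean = shapes.mean(axis=0)
cov = np.cov(shapes, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
P = eigvec[:, order[:2]]              # two main modes of variation
b = np.array([0.5, -0.5])             # shape parameters (kept within +-3 sd)
new_shape = mean + P @ b              # a plausible shape from the model
```

The paper's extension amounts to learning such modes over the spatio-temporal shape (or, for 3D segmentation, over the z axis) rather than over a single static contour.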

Keywords: active shape model, a priori knowledge, spatiotemporal shape variation, deformable structures, medical images

Procedia PDF Downloads 343
18698 Comparison of PbS/ZnS Quantum Dots Synthesis Methods

Authors: Mahbobeh Bozhmehrani, Afshin Farah Bakhsh

Abstract:

Nanoparticles with a PbS core of 12 nm and a shell of approximately 3 nm were synthesized at a PbS:ZnS ratio of 1.01:0.1 using mercaptopropionic acid as the stabilizing agent. The PbS/ZnS nanoparticles show a dramatic increase in photoluminescence intensity, confirming the confinement of the PbS core: the addition of the ZnS shell increases the quantum yield from 0.63 to 0.92. In this case, synthesis by the microwave method yields nanoparticles with better optical characteristics than those synthesized by the colloidal method.
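Quantum yields like the 0.63 → 0.92 figures above are typically measured relative to a reference fluorophore via QY = QY_ref · (I/I_ref) · (A_ref/A) · (n/n_ref)². A sketch of that standard calculation; the reference dye, measured intensities, and absorbances below are hypothetical, not this paper's data:

```python
# Hedged sketch of the standard relative quantum-yield calculation, where I
# is the integrated emission, A the absorbance at the excitation wavelength,
# and n the solvent refractive index. Illustrative numbers only.
def relative_qy(qy_ref, i_sample, i_ref, a_sample, a_ref, n_sample, n_ref):
    return qy_ref * (i_sample / i_ref) * (a_ref / a_sample) * (n_sample / n_ref) ** 2

# Hypothetical PbS/ZnS measurement against a reference dye of QY = 0.70,
# both in the same solvent (n cancels)
qy = relative_qy(0.70, i_sample=1.5e6, i_ref=1.2e6,
                 a_sample=0.095, a_ref=0.100,
                 n_sample=1.33, n_ref=1.33)
```

Keeping absorbances low and matched (here ~0.1) is the usual precaution against inner-filter effects in such measurements.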

Keywords: PbS/ZnS, quantum dots, colloidal method, microwave

Procedia PDF Downloads 287