Search results for: weighted rank regression
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4024

3724 Development of Generalized Correlation for Liquid Thermal Conductivity of N-Alkane and Olefin

Authors: A. Ishag Mohamed, A. A. Rabah

Abstract:

The objective of this research is to develop a generalized correlation for predicting the thermal conductivity of n-alkanes and alkenes. Research on the thermal conductivity of liquids is scarce, and few correlations are available in the open literature. The available experimental data covering the n-alkane and alkene groups were collected and assumed to correlate with temperature through the Filippov correlation. Nonparametric regression with the Grace algorithm was used to develop the generalized correlation model, and a spreadsheet program based on Microsoft Excel was used to plot the data and calculate the values of the coefficients. The results were compared with the data found in Perry's Chemical Engineers' Handbook. The experimental data correlated with temperature over the range 273.15 to 673.15 K, with R² = 0.99. The developed correlation reproduced experimental data that were not included in the regression with an absolute average percent deviation (AAPD) of less than 7%. The spreadsheet approach is therefore quite accurate and produces reliable data.
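
As a minimal illustration of the fitting and error metrics reported above, the sketch below fits a simple linear λ(T) model by least squares and computes the AAPD. The linear form and the data values are illustrative assumptions, not the paper's Filippov-based correlation or its measurements.

```python
# Hedged sketch: fit lambda(T) = a + b*T by ordinary least squares and
# compute the absolute average percent deviation (AAPD) used in the abstract.
# The linear form and the data below are illustrative stand-ins.

def fit_linear(T, lam):
    """Closed-form least squares for lam = a + b*T."""
    n = len(T)
    mT = sum(T) / n
    mL = sum(lam) / n
    b = sum((t - mT) * (l - mL) for t, l in zip(T, lam)) / sum((t - mT) ** 2 for t in T)
    a = mL - b * mT
    return a, b

def aapd(observed, predicted):
    """Absolute average percent deviation, as reported in the abstract."""
    return 100.0 * sum(abs(p - o) / o for o, p in zip(observed, predicted)) / len(observed)

# Synthetic thermal-conductivity values (W/m.K) over the 273.15-673.15 K range.
T = [273.15, 373.15, 473.15, 573.15, 673.15]
lam = [0.140, 0.128, 0.117, 0.105, 0.094]

a, b = fit_linear(T, lam)
pred = [a + b * t for t in T]
print(round(aapd(lam, pred), 3))
```

For nearly linear data such as these, the fitted slope is negative (conductivity falls with temperature) and the AAPD is well under 1%.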

Keywords: N-Alkanes, N-Alkenes, nonparametric, regression

Procedia PDF Downloads 646
3723 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Pólya were the first significant synthesis of the subject; their work presented fundamental ideas, results, and techniques and has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated through operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations, and dynamic inequalities of Hardy and Copson type have been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results involving the Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers; dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics.
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special higher-dimensional time-scale versions of the Hardy and Copson inequalities. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied to the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases, using concepts of time-scale calculus, which unifies and extends many problems from the theories of differential and difference equations, together with the chain rule, some properties of multiple integrals on time scales, Fubini's theorem, and Hölder's inequality.
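
For reference, the classical one-dimensional Hardy inequality that serves as the starting point for these time-scale generalizations can be stated as:

```latex
% Classical Hardy inequality (f >= 0, p > 1), the one-dimensional
% starting point for the time-scale and Steklov-operator versions above.
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\,dt \right)^p dx
\;\le\; \left( \frac{p}{p-1} \right)^p \int_0^\infty f(x)^p\,dx
```

The constant (p/(p-1))^p is sharp, which is why refinements typically proceed through weights or operators rather than through a smaller constant.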

Keywords: time scales, inequality of hardy, inequality of copson, steklov operator

Procedia PDF Downloads 79
3722 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure, and the objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities of two RNA structures allows a better understanding of them and the discovery of further relationships between them. Besides, identifying non-coding RNAs (RNAs that are not translated into proteins) is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention is given in the literature to the use of efficient RNA structure representations, and full structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm compared to other approaches. The CompPSA algorithm provides an accurate similarity measure between components and gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory.
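
The component-based comparison can be sketched as follows. Each structure is reduced to a list of components described by (position, full length, stem length) and compared by a weighted feature distance in O(N²) pairwise fashion; the feature weights and the greedy nearest-component matching here are illustrative assumptions, not the published CompPSA procedure.

```python
# Hedged sketch in the spirit of CompPSA: structures are lists of components
# (position, full_length, stem_length) compared by weighted features rather
# than base-pair details. Weights and matching rule are illustrative.

def component_distance(c1, c2, weights=(0.4, 0.3, 0.3)):
    """Weighted absolute difference over (position, full length, stem length)."""
    return sum(w * abs(a - b) for w, a, b in zip(weights, c1, c2))

def structure_distance(s1, s2):
    """O(N^2) pass: match each component of s1 to its closest in s2."""
    if not s1 or not s2:
        return float('inf')
    total = 0.0
    for c1 in s1:
        total += min(component_distance(c1, c2) for c2 in s2)
    return total / len(s1)

# Two toy structures: identical component lists give distance 0.
s_a = [(10, 24, 8), (50, 30, 10)]
s_b = [(10, 24, 8), (50, 30, 10)]
print(structure_distance(s_a, s_b))  # 0.0
```

Changing any feature of any component makes the distance strictly positive, so the measure discriminates at the component level.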

Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining

Procedia PDF Downloads 445
3721 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we explored their ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. The estimated total lesion load (ml) and the number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed with tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-timing correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance or the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with the number of components set to 21. Fifteen components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted in R on this dataset of 37 rows (subjects) and 15 features (mean signal in each network).
The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM to obtain a ranking of the most predictive variables. We then built two new classifiers on the most important features only and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable in both cases was the sensorimotor network I. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient was found to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the network that best discriminated between controls and early MS was sensorimotor I. Similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
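
The classification setup can be sketched in a few lines; the study used R, so the scikit-learn pipeline below is only a structural analogue on random stand-in data (37 subjects × 15 features, 75/25 split, RF with Gini importances, RBF-SVM), not the study's code or data.

```python
# Hedged sketch of the setup on synthetic data: feature 0 plays the role of
# the discriminant "sensorimotor I" network; everything else is noise.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((37, 15))       # 37 subjects x 15 "network" features
y = (X[:, 0] > 0).astype(int)           # labels driven by feature 0 only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)

importances = rf.feature_importances_   # Gini-based feature ranking
print(rf.score(X_te, y_te), svm.score(X_te, y_te))
```

Re-fitting on only the top-ranked feature, as the study did with the sensorimotor I network, follows by slicing `X[:, [importances.argmax()]]`.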

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 227
3720 An Alternative Framework of Multi-Resolution Nested Weighted Essentially Non-Oscillatory Schemes for Solving Euler Equations with Adaptive Order

Authors: Zhenming Wang, Jun Zhu, Yuchen Yang, Ning Zhao

Abstract:

In the present paper, an alternative framework is proposed for constructing a class of finite-difference multi-resolution nested weighted essentially non-oscillatory (WENO) schemes with increasingly higher orders of accuracy for solving the inviscid Euler equations. These WENO schemes first obtain a set of reconstruction polynomials from a hierarchy of nested central spatial stencils and then recursively achieve a higher-order approximation through the lower-order WENO schemes. The linear weights of such WENO schemes can be set to any positive numbers whose sum equals one; they do not pollute the optimal order of accuracy in smooth regions and simultaneously suppress spurious oscillations near discontinuities. Numerical results indicate that these alternative finite-difference multi-resolution nested WENO schemes of different accuracies are very robust, have low dissipation, and use as few reconstruction stencils as possible while maintaining the same efficiency, achieving the high-resolution property without any equivalent multi-resolution representation. Moreover, the finite-volume form of the schemes is easier to implement on unstructured grids.
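
The nonlinear weighting mechanism referred to above can be written in its classical WENO form (the multi-resolution variants change the stencil hierarchy but keep this normalization):

```latex
% Classical WENO nonlinear weights: d_k are the linear weights (any positive
% numbers summing to one, as the abstract notes), beta_k the smoothness
% indicators, and eps a small constant preventing division by zero.
\omega_k = \frac{\alpha_k}{\sum_{j} \alpha_j}, \qquad
\alpha_k = \frac{d_k}{(\varepsilon + \beta_k)^2}, \qquad
\sum_k d_k = 1,\quad d_k > 0
```

In smooth regions all β_k are comparable, so ω_k ≈ d_k and the optimal order is recovered; near a discontinuity the large β_k of the crossing stencil drives its ω_k toward zero, suppressing oscillations.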

Keywords: finite-difference, WENO schemes, high order, inviscid Euler equations, multi-resolution

Procedia PDF Downloads 131
3719 Response Surface Methodology for the Optimization of Paddy Husker by Medium Brown Rice Peeling Machine 6 Rubber Type

Authors: S. Bangphan, P. Bangphan, C. Ketsombun, T. Sammana

Abstract:

Response surface methodology (RSM) was employed to study the effects of three factors (rubber clearance, spindle speed, and rice moisture) in a brown rice peeling machine and to find the settings giving the optimal good-rice yield (99.67, average of three repeats). The optimized composition derived from the RSM regression was analyzed using regression analysis and analysis of variance (ANOVA). At a significance level of α = 0.05, the adjusted R² was 96.55% and the standard deviation was 1.05056. The independent variables were the initial rubber clearance, spindle speed, and rice moisture, and the investigated responses were the final rubber clearance, spindle speed, and rice moisture.
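
RSM typically fits a full second-order polynomial in the factors by least squares; the sketch below does exactly that on synthetic data (the factor ranges, coefficients, and yield values are illustrative assumptions, not the machine's measurements).

```python
# Hedged sketch of a second-order response-surface fit: yield modeled as a
# quadratic polynomial in (clearance, speed, moisture). Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(30, 3))        # clearance, speed, moisture (scaled)
f_true = lambda x: 99.0 + 0.5 * x[:, 0] - 0.3 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 2]
y = f_true(X)

# Design matrix for the full quadratic model: 1, x_i, x_i^2, x_i*x_j.
cols = [np.ones(len(X))]
cols += [X[:, i] for i in range(3)]
cols += [X[:, i] ** 2 for i in range(3)]
cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
D = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(D, y, rcond=None)
r2 = 1 - np.sum((D @ beta - y) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 4))
```

Because the synthetic response is itself quadratic and noiseless, the fit is essentially exact; with real machine data, the adjusted R² and ANOVA on `beta` decide which terms are retained.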

Keywords: brown rice, response surface methodology (RSM), peeling machine, optimization, paddy husker

Procedia PDF Downloads 560
3718 Diffusion Magnetic Resonance Imaging and Magnetic Resonance Spectroscopy in Detecting Malignancy in Maxillofacial Lesions

Authors: Mohamed Khalifa Zayet, Salma Belal Eiid, Mushira Mohamed Dahaba

Abstract:

Introduction: Malignant tumors may not be easily detected by traditional radiographic techniques, especially in an anatomically complex area like the maxillofacial region. At the same time, the advent of biological functional MRI was a significant footstep in the diagnostic imaging field. Objective: The purpose of this study was to define the malignant metabolic profile of maxillofacial lesions using diffusion MRI and magnetic resonance spectroscopy as adjunctive aids for diagnosing such lesions. Subjects and Methods: Twenty-one patients with twenty-two lesions were enrolled in this study. Both morphological and functional MRI scans were performed: T1- and T2-weighted images and diffusion-weighted MRI with four apparent diffusion coefficient (ADC) maps were constructed for analysis, and magnetic resonance spectroscopy with qualitative and semi-quantitative analyses of the choline and lactate peaks was applied. All patients then underwent incisional or excisional biopsy within two weeks of the MR scans. Results: Statistical analysis revealed that not all the parameters had the same diagnostic performance: lactate had the highest area under the curve (AUC), 0.9, while choline had the lowest, with insignificant diagnostic value. The best cut-off value suggested for lactate was 0.125; any lesion above this value is expected to be malignant, with 90% sensitivity and 83.3% specificity. Although the ADC maps had comparable AUCs, the statistical measure that had the final say was the interpretation of the likelihood ratios. As expected, lactate again showed the best combination of positive and negative likelihood ratios, whereas among the maps, the ADC map with b-values of 500 and 1000 showed the best realistic combination of likelihood ratios, although with lower sensitivity and specificity than lactate.
Conclusion: Diffusion-weighted imaging and magnetic resonance spectroscopy are state of the art in the diagnostic arena, and they have shown themselves to be key players in the differentiation of orofacial tumors. The complete biological profile of malignancy can be decoded as low ADC values, high choline, and/or high lactate, whereas that of benign entities can be translated as high ADC values, low choline, and no lactate.
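
The ADC values underlying the maps above come from a simple monoexponential decay model; the sketch below shows the two-point calculation, ADC = ln(S0/Sb)/b, with illustrative signal values rather than the study's measurements.

```python
# Hedged sketch: apparent diffusion coefficient from a two-point
# diffusion-weighted acquisition, assuming monoexponential decay
# S(b) = S0 * exp(-b * ADC). Signal values below are synthetic.
import math

def adc(s0, sb, b):
    """ADC in mm^2/s from signals at b = 0 and b (in s/mm^2)."""
    return math.log(s0 / sb) / b

s0 = 1000.0
b = 1000.0                        # s/mm^2, as in the 500/1000 b-value maps
sb = s0 * math.exp(-b * 1.0e-3)   # synthetic signal for ADC = 1.0e-3 mm^2/s
print(adc(s0, sb, b))  # ~0.001
```

Low ADC (restricted diffusion) is the malignant marker cited in the conclusion; benign lesions show higher values.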

Keywords: diffusion magnetic resonance imaging, magnetic resonance spectroscopy, malignant tumors, maxillofacial

Procedia PDF Downloads 160
3717 Impact of Climate on Sugarcane Yield Over Belagavi District, Karnataka Using Statistical Model

Authors: Girish Chavadappanavar

Abstract:

The impact of climate on agriculture could result in problems with food security and may threaten the livelihood activities on which much of the population depends. In the present study, a statistical yield forecast model was developed for sugarcane production in Belagavi district, Karnataka, using weather variables of the crop-growing season and past observed yield data for the period 1971 to 2010. The study shows that this type of statistical yield forecast model can forecast sugarcane yield 5 and even 10 weeks in advance of the harvest within an acceptable limit of error. The performance of the model in predicting yields at the district level for sugarcane is quite satisfactory for both validation (2007 and 2008) and forecasting (2009 and 2010). In addition, the climate variability of the area was studied by applying the Mann-Kendall rank test to the data series. The maximum and minimum temperatures were found to have significant but opposite trends (a decreasing trend in maximum and an increasing trend in minimum temperature), while the other three variables showed insignificant trends (rainfall and evening relative humidity increasing, morning relative humidity decreasing).
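
The Mann-Kendall trend statistic used for the climate series can be sketched directly: S sums the signs of all pairwise differences, with S > 0 suggesting an increasing trend and S < 0 a decreasing one. The temperature series below are synthetic illustrations, not the Belagavi data.

```python
# Hedged sketch of the Mann-Kendall S statistic (no ties handled here;
# the full test also computes a variance and a normalized Z score).

def mann_kendall_s(series):
    """Mann-Kendall S: sum of sign(x_j - x_i) over all pairs i < j."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

t_min = [14.1, 14.3, 14.2, 14.5, 14.6, 14.8]   # e.g. rising minimum temperature
t_max = [33.9, 33.7, 33.8, 33.5, 33.4, 33.2]   # e.g. falling maximum temperature
print(mann_kendall_s(t_min), mann_kendall_s(t_max))
```

Opposite signs of S for the two series reproduce, in miniature, the opposite temperature trends reported in the abstract.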

Keywords: climate impact, regression analysis, yield and forecast model, sugar models

Procedia PDF Downloads 59
3716 Classification Using Worldview-2 Imagery of Giant Panda Habitat in Wolong, Sichuan Province, China

Authors: Yunwei Tang, Linhai Jing, Hui Li, Qingjie Liu, Xiuxia Li, Qi Yan, Haifeng Ding

Abstract:

The giant panda (Ailuropoda melanoleuca) is an endangered species that lives mainly in central China, where bamboos act as the main food source of wild giant pandas. Knowledge of the spatial distribution of bamboos is therefore important for identifying giant panda habitat, and there have been ongoing studies mapping bamboos and other tree species using remote sensing. WorldView-2 (WV-2) is the first high-resolution commercial satellite with eight multi-spectral (MS) bands, and recent studies have demonstrated that WV-2 imagery has high potential for the classification of tree species. Advanced classification techniques are important for utilising high-spatial-resolution imagery, and it is generally agreed that object-based image analysis is more desirable than pixel-based analysis for processing such data. Classifiers that use spatial information combined with spectral information are known as contextual classifiers, and it has been suggested that they can achieve greater accuracy than non-contextual classifiers; thus, spatial correlation can be incorporated into classifiers to improve classification results. The study area is located at Wuyipeng in Wolong, Sichuan Province. The complex environment makes information extraction difficult, since bamboos are sparsely distributed, mixed with brush, and covered by other trees. Extensive fieldwork was carried out at Wuyipeng twice: first on 11th June 2014, to sample feature locations for geometric correction and to collect training samples for classification, and second on 11th September 2014, to test the classification results. In this study, spectral separability analysis was first performed to select appropriate MS bands for classification; the reflectance analysis also provided information for expanding the sample points under the circumstance of knowing only a few.
Then, a spatially weighted object-based k-nearest-neighbour (k-NN) classifier was applied to the selected MS bands to identify seven land cover types (bamboo, conifer, broadleaf, mixed forest, brush, bare land, and shadow), accounting for spatial correlation within classes using geostatistical modelling. The spatially weighted k-NN method was compared with three alternatives: the traditional k-NN classifier, the Support Vector Machine (SVM) method, and the Classification and Regression Tree (CART). Field validation showed that the classification result obtained using the spatially weighted k-NN method had the highest overall classification accuracy (77.61%) and Kappa coefficient (0.729); the producer's and user's accuracies reached 81.25% and 95.12% for the bamboo class, respectively, also higher than the other methods. Photos of tree crowns were taken at sample locations using a fisheye camera so the canopy density could be estimated. It was found that it is difficult to identify bamboo in areas with a large canopy density (over 0.70), whereas it is possible to extract bamboo in areas with a medium canopy density (from 0.2 to 0.7) and in sparse forest (canopy density less than 0.2). In summary, this study explores the ability of WV-2 imagery for bamboo extraction in a mountainous region of Sichuan. The study successfully identified the bamboo distribution, providing supporting knowledge for assessing the habitats of giant pandas.
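
The core of a weighted k-NN vote can be sketched in a few lines. The study derives its weights from geostatistical modelling of spatial correlation; the inverse-distance weighting below is a simpler stand-in, and the feature values and class names are purely illustrative.

```python
# Hedged sketch of a distance-weighted k-NN vote (a stand-in for the
# paper's geostatistically weighted variant). Synthetic 2-D features.
import math

def weighted_knn(train, labels, query, k=3):
    """Classify `query` by an inverse-distance-weighted vote of k neighbours."""
    nearest = sorted(
        (math.dist(x, query), y) for x, y in zip(train, labels)
    )[:k]
    votes = {}
    for d, y in nearest:
        votes[y] = votes.get(y, 0.0) + 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)

train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["bamboo", "bamboo", "conifer", "conifer"]
print(weighted_knn(train, labels, (0.15, 0.15)))
```

Replacing the `1/(d + eps)` weight with one derived from a fitted variogram would give the spatially weighted variant the study describes.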

Keywords: bamboo mapping, classification, geostatistics, k-NN, worldview-2

Procedia PDF Downloads 300
3715 The Relationship between EFL Learners' Self-Regulation and Willingness to Communicate

Authors: Mania Nosratinia, Zahra Deris

Abstract:

The purpose of the present study was to investigate the relationship between EFL learners' self-regulation (SR) and willingness to communicate (WTC). To this end, 520 male and female EFL learners, between 19 and 34 years old (Mage = 26), majoring in English Translation, English Language Teaching, and English Literature at Islamic Azad University, Fars Province, were randomly selected. They were given two questionnaires: the Self-Regulation Questionnaire devised by Brown, Miller, and Lawendowski (1999) and the Willingness to Communicate Scale devised by McCroskey and Baer (1985). Pertinent preliminary analyses were performed on the data to check the assumptions of normality, linearity, and homoscedasticity. Since the assumption of normality was violated, Spearman's rank-order correlation was employed to probe the relationship between SR and WTC. The results indicated a significant positive correlation between the two variables, ρ = .56, n = 520, p < .05, signifying a large effect size with a very narrow confidence interval (0.503-0.619). The results of the Kruskal-Wallis test indicated a statistically significant difference in WTC scores between the levels of SR, χ2(2) = 157.843, p < .001, with mean rank WTC scores of 128.13 for the low-SR level, 286.64 for the mid-SR level, and 341.12 for the high-SR level. A post-hoc comparison using the Dwass-Steel-Critchlow-Fligner test also indicated significant differences among the SR level groups on WTC scores. These findings may help EFL teachers, teacher trainers, and material developers gain a broader perspective on TEFL practice and take practical steps toward the attainment of the desired objectives and effective instruction.
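
Spearman's rank-order correlation, the study's main statistic, reduces to a simple formula when there are no ties: ρ = 1 − 6Σd²/(n(n² − 1)), where d is the rank difference per respondent. The scores below are synthetic illustrations, not the questionnaire data.

```python
# Hedged sketch of Spearman's rho via the no-ties formula.

def ranks(values):
    """1-based rank positions, assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

sr  = [52, 61, 70, 75, 83, 90]   # synthetic self-regulation scores
wtc = [40, 55, 58, 66, 72, 88]   # synthetic willingness-to-communicate scores
print(spearman_rho(sr, wtc))  # perfectly monotone ranks -> 1.0
```

Because rho depends only on ranks, it is the appropriate choice when, as here, the normality assumption is violated.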

Keywords: EFL learner, self-regulation, willingness to communicate, relationship

Procedia PDF Downloads 314
3714 On the Performance of Improvised Generalized M-Estimator in the Presence of High Leverage Collinearity Enhancing Observations

Authors: Habshah Midi, Mohammed A. Mohammed, Sohel Rana

Abstract:

Multicollinearity occurs when two or more independent variables in a multiple linear regression model are highly correlated. Ridge regression is the method commonly used to rectify this problem. However, ridge regression cannot handle multicollinearity that is caused by high leverage collinearity-enhancing observations (HLCEOs). Since high leverage points (HLPs) are responsible for inducing multicollinearity, their effect needs to be reduced by using a generalized M (GM) estimator. The existing GM6 estimator is based on the Minimum Volume Ellipsoid (MVE), which tends to swamp some low leverage points. Hence, an improvised GM (MGM) estimator is presented to improve the precision of the GM6 estimator. A numerical example and a simulation study are presented to show how HLPs can cause multicollinearity. The numerical results show that the MGM estimator is the most efficient method compared with some existing methods.
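
For context, the ridge estimator the abstract cites as the standard remedy has the closed form β = (XᵀX + kI)⁻¹Xᵀy; the sketch below applies it to a synthetic design with two nearly collinear predictors (the data are illustrative, not the paper's example).

```python
# Hedged sketch of ridge regression on nearly collinear synthetic data.
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.standard_normal(50)
x2 = x1 + 0.01 * rng.standard_normal(50)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + 0.5 * x2 + 0.1 * rng.standard_normal(50)

def ridge(X, y, k):
    """Closed-form ridge estimator: (X'X + k*I)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)     # k = 0 recovers ordinary least squares
beta_rdg = ridge(X, y, 1.0)     # k > 0 shrinks the inflated coefficients
print(np.round(beta_ols, 3), np.round(beta_rdg, 3))
```

Shrinkage reduces the coefficient norm that collinearity inflates; the paper's point is that this alone does not help when the collinearity itself is induced by high leverage observations, which is what the GM-type estimators downweight.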

Keywords: identification, high leverage points, multicollinearity, GM-estimator, DRGP, DFFITS

Procedia PDF Downloads 245
3713 Correlates of Cost Effectiveness Analysis of Rating Scale and Psycho-Productive Multiple Choice Test for Assessing Students' Performance in Rice Production in Secondary Schools in Ebonyi State, Nigeria

Authors: Ogbonnaya Elom, Francis N. Azunku, Ogochukwu Onah

Abstract:

This study was carried out to determine the correlates of a cost-effectiveness analysis of a rating scale and a psycho-productive multiple choice test for assessing students' performance in rice production. Four research questions were developed and answered, while one hypothesis was formulated and tested. Survey and correlation designs were adopted. The population of the study was 20,783, made up of 20,511 senior secondary (SSII) students and 272 teachers of agricultural science from 221 public secondary schools. Two schools, each with one intact class of 30 students, were purposively selected as the sample based on certain criteria. Four sets of instruments were used for data collection. One of the instruments, the rating scale, was subjected to face and content validation, while the other three were subjected to face validation only. The Cronbach alpha technique was utilized to determine the internal consistency of the rating scale items, which yielded a coefficient of 0.82, while the Kuder-Richardson (K-R 20) formula was used to determine the stability of the psycho-productive multiple choice test items, which yielded a coefficient of 0.80. Data were collected using a step-by-step approach and analyzed using percentages, weighted means, and the sign test to answer the research questions, while the hypothesis was tested using Spearman's rank-order correlation and the t-test statistic. The findings of the study revealed, among others, that the psycho-productive multiple choice test is more effective than the rating scale when both are applied to the two groups of students. It was recommended, among others, that external examination bodies should integrate the use of psycho-productive multiple choice tests into their examination policies and direct secondary schools to comply.
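
Cronbach's alpha, used above for the rating scale's internal consistency, is α = k/(k−1) × (1 − Σ item variances / variance of total scores). The sketch below computes it on synthetic responses (the items and ratings are illustrative, not the study's instrument).

```python
# Hedged sketch of Cronbach's alpha on synthetic rating-scale responses.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """`items` is a list of per-item score lists over the same respondents."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

# 3 items rated by 5 respondents (rows = items, columns = respondents).
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 4],
]
print(round(cronbach_alpha(items), 3))
```

K-R 20, used for the dichotomously scored multiple choice items, is the special case of this formula where each item variance is p(1−p).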

Keywords: correlates, cost-effectiveness, psycho-productive multiple-choice scale, rating scale

Procedia PDF Downloads 121
3712 Neural Network Modelling for Turkey Railway Load Carrying Demand

Authors: Humeyra Bolakar Tosun

Abstract:

The transport sector has an undisputed place in human life: people's need for transport access increases continuously with the growing population. Factors such as the extent of the rail network, urban transport planning, infrastructure improvements, and transportation management make it necessary to improve transportation services in Turkey, and in this context, domestic rail freight demand planning plays an important role. In this study, railway net freight demand was modeled by multiple regression and artificial neural network (ANN) methods, using variables commonly adopted in the literature, such as rail freight transport, railway line length, population, and energy consumption. The dependent variable (output) of the model was railway net freight demand, and six input variables were determined. The outcome values were extracted from the ANN model and compared with the regression model results. In the regression model, some parameters were considered determinative, and the coefficients of these determinants gave meaningful results. As a result, the ANN model was shown to be more successful than the traditional regression model.

Keywords: railway load carrying, neural network, modelling transport, transportation

Procedia PDF Downloads 135
3711 High-Dose-Rate Brachytherapy for Cervical Cancer: The Effect of Total Reference Air Kerma on the Results of Single-Channel and Tri-Channel Applicators

Authors: Hossain A., Miah S., Ray P. K.

Abstract:

Introduction: Single-channel and tri-channel applicators are used in the traditional treatment of cervical cancer. The main objectives of this retrospective study were the total reference air kerma (TRAK) and the treatment outcomes of high-dose-rate brachytherapy for cervical cancer using single-channel and tri-channel applicators. Material and Methods: A retrospective cohort study was conducted on patients in the radiotherapy division who received brachytherapy, chemotherapy, and external beam radiotherapy (EBRT) with single-channel and tri-channel applicators from 2016 to 2020. All brachytherapy parameters, including TRAK, were calculated in accordance with the international protocol. The Kaplan-Meier method with a log-rank test was used to analyze survival rates. Results and Discussion: With treatment times of 15.34 (10-20) days and 21.35 (6.5-28) days, the TRAK was 0.52 cGy.m² for the tri-channel applicator and 0.34 cGy.m² for the single-channel applicator. Based on TRAK, the Pearson correlations for the rectum, bladder, and tumor were 0.082, 0.009, and 0.032, respectively. The 1-specificity and sensitivity were 0.70 and 0.30, respectively, and the AUC was 0.71. The log-rank test showed a survival rate of 95% for tri-channel applicators and 85% for single-channel applicators (p = 0.565). Conclusions: The relationship between TRAK and treatment duration, together with the Pearson correlations for the tumor, rectum, and bladder, suggests that TRAK should be taken into account for the proper use of single-channel and tri-channel applicators.

Keywords: single-channel, tri-channel, high dose rate brachytherapy, cervical cancer

Procedia PDF Downloads 89
3710 Assessment of Pre-Processing Influence on Near-Infrared Spectra for Predicting the Mechanical Properties of Wood

Authors: Aasheesh Raturi, Vimal Kothiyal, P. D. Semalty

Abstract:

We studied the mechanical properties of Eucalyptus tereticornis using FT-NIR spectroscopy. First, the spectra were pre-processed to eliminate useless information; then, a prediction model was constructed by partial least squares (PLS) regression. To study the influence of pre-processing on the prediction of mechanical properties in the NIR analysis of wood samples, we applied various pre-treatment methods, such as straight-line subtraction, constant offset elimination, vector normalization, min-max normalization, multiplicative scatter correction, first derivative, and second derivative, as well as combinations such as first derivative + straight-line subtraction, first derivative + vector normalization, and first derivative + multiplicative scatter correction. By combining the pre-processing methods with different NIR regions, the RMSECV, RMSEP, and optimum factors/rank were obtained through the optimization process of model development, yielding more than 350 combinations. More than one pre-processing method gave good calibration/cross-validation and prediction/test models, but only the best ones are reported here. The results show that one can safely use the NIR region between 4000 and 7500 cm-1 with straight-line subtraction, constant offset elimination, first derivative, and second derivative pre-processing, which were found to be the most appropriate for model development.
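
Two of the simpler pre-treatments compared above can be sketched directly; the "spectrum" below is a synthetic stand-in, and real pipelines would apply these per-band to full spectra before the PLS step.

```python
# Hedged sketch of two spectral pre-treatments: constant offset elimination
# (subtract the spectrum minimum) and a first derivative by successive
# differences. Synthetic absorbance values.

def offset_eliminate(spec):
    """Constant offset elimination: shift so the minimum is zero."""
    m = min(spec)
    return [v - m for v in spec]

def first_derivative(spec):
    """First derivative approximated by successive differences."""
    return [b - a for a, b in zip(spec, spec[1:])]

spectrum = [0.52, 0.55, 0.61, 0.60, 0.58, 0.63]
print(offset_eliminate(spectrum))
print(first_derivative(spectrum))
```

The derivative removes baseline offsets and slopes entirely, which is why first-derivative combinations often pair well with scatter corrections in NIR work.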

Keywords: FT-NIR, mechanical properties, pre-processing, PLS

Procedia PDF Downloads 337
3709 Using the Bootstrap for Problems in Statistics

Authors: Brahim Boukabcha, Amar Rebbouh

Abstract:

The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article, we present a theoretical study of the different bootstrap methods and use the re-sampling technique in statistical inference to calculate the standard error of an estimator and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and to the Pareto model, obtaining the best approximations.
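
The re-sampling procedure described above can be sketched for the simplest estimator, the sample mean: draw many resamples with replacement, and read off the standard error and a percentile confidence interval from the bootstrap distribution. The data below are synthetic.

```python
# Hedged sketch of the bootstrap: standard error of the mean and a
# percentile confidence interval from resamples drawn with replacement.
import random
import statistics

def bootstrap_mean(data, n_boot=2000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    boots = sorted(
        statistics.fmean(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    se = statistics.stdev(boots)                     # bootstrap standard error
    lo = boots[int((alpha / 2) * n_boot)]            # percentile CI bounds
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return se, (lo, hi)

data = [2.1, 2.5, 3.0, 3.2, 3.8, 4.1, 4.4, 5.0]
se, (lo, hi) = bootstrap_mean(data)
print(round(se, 3), round(lo, 2), round(hi, 2))
```

The same loop works for any statistic (median, regression coefficient, Pareto tail index) by swapping the function applied to each resample.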

Keywords: bootstrap, standard error, bias, jackknife, mean, median, variance, confidence interval, regression models

Procedia PDF Downloads 371
3708 Enhancing Predictive Accuracy in Pharmaceutical Sales through an Ensemble Kernel Gaussian Process Regression Approach

Authors: Shahin Mirshekari, Mohammadreza Moradi, Hossein Jafari, Mehdi Jafari, Mohammad Ensaf

Abstract:

This research employs Gaussian Process Regression (GPR) with an ensemble kernel, integrating Exponential Squared, Revised Matern, and Rational Quadratic kernels to analyze pharmaceutical sales data. Bayesian optimization was used to identify optimal kernel weights: 0.76 for Exponential Squared, 0.21 for Revised Matern, and 0.13 for Rational Quadratic. The ensemble kernel demonstrated superior predictive accuracy, achieving an R² score near 1.0 and significantly lower MSE, MAE, and RMSE values. These findings highlight the efficacy of ensemble kernels in GPR for predictive analytics on complex pharmaceutical sales datasets.
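A minimal numpy sketch of GPR with a two-kernel ensemble is shown below (squared-exponential plus rational quadratic; the Revised Matern term, the fitted weights, and the Bayesian optimization step are omitted, and all hyperparameters and data here are invented):

```python
import numpy as np

def sq_exp(x1, x2, ell=1.0):
    # Squared-exponential kernel on 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def rational_quadratic(x1, x2, ell=1.0, alpha=1.0):
    d = x1[:, None] - x2[None, :]
    return (1.0 + d ** 2 / (2.0 * alpha * ell ** 2)) ** (-alpha)

def ensemble_kernel(x1, x2, w=(0.7, 0.3)):
    # Weighted sum of base kernels; any nonnegative combination of
    # valid kernels is itself a valid kernel.
    return w[0] * sq_exp(x1, x2) + w[1] * rational_quadratic(x1, x2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    # Posterior mean of a zero-mean GP with the ensemble kernel.
    K = ensemble_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    return ensemble_kernel(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 5.0, 20)
y = np.sin(x)
print(gp_predict(x, y, np.array([2.5])))  # close to sin(2.5)
```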

Keywords: Gaussian process regression, ensemble kernels, Bayesian optimization, pharmaceutical sales analysis, time series forecasting, data analysis

Procedia PDF Downloads 57
3707 A Meta Regression Analysis to Detect Price Premium Threshold for Eco-Labeled Seafood

Authors: Cristina Giosuè, Federica Biondo, Sergio Vitale

Abstract:

In recent years, consumers' awareness of environmental concerns has been increasing, and seafood eco-labels are considered a possible instrument to improve both seafood markets and sustainable fishing management. Accordingly, the aim of this study was to carry out a meta-analysis of consumers’ willingness to pay (WTP) for eco-labeled wild seafood, by means of a meta-regression. Only papers published in ISI journals were searched, on the “Web of Knowledge” and “SciVerse Scopus” platforms, using combinations of the following keywords: seafood, ecolabel, eco-label, willingness, WTP and premium. The dataset was built considering: paper’s and survey’s codes, year of publication, first author’s nationality, species’ taxa and family, sample size, survey’s continent and country, data collection (where and how), gender and age of consumers, brand, and ΔWTP. The analysis showed a clear interest in eco-labeled seafood, particularly in developed countries. In general, consumers declared a greater willingness to pay than the premium actually applied to eco-labeled products, with differences related to taxa and brand.

Keywords: eco-label, meta-regression, seafood, willingness to pay

Procedia PDF Downloads 112
3706 Effects of China's Urban Form on Urban Carbon Emission

Authors: Lu Lin

Abstract:

Urbanization has reshaped the physical environment, energy consumption, and carbon emissions of urban areas. China is a typical developing country undergoing rapid urbanization and is the world's largest carbon-emitting country. This study aims to explore the correlation between urban form and the carbon emissions caused by urban energy consumption in China. 287 provincial-level and prefecture-level cities are studied for 2000, 2005, and 2010. The compact ratio index, shape index, and fractal dimension index are used to quantify urban form. A geographically weighted regression (GWR) model is employed to explore the relationship between urban form, energy consumption, and related carbon emissions. The results show that the average compact ratio index decreased from 2000 to 2010, which indicates that cities in China sprawled. The average fractal dimension index increased by 3%, indicating that the spatial layouts of China's cities became more complicated. The results of the GWR model show that the shape index and fractal dimension index had no significant relationship with carbon emissions from urban energy consumption. However, compact urban form reduced carbon emissions. The findings of this study will help policy-makers pursue sustainable urban planning and reduce urban carbon emissions.
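The core of a GWR fit is a weighted least-squares regression at each location, with weights supplied by a distance kernel. Below is an illustrative numpy sketch (Gaussian kernel; the bandwidth and data are invented, not the study's):

```python
import numpy as np

def gwr_local_fit(coords, X, y, target, bandwidth=1.0):
    # Gaussian kernel weights decay with distance from the target
    # location; the local coefficients come from weighted least squares.
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xd = np.column_stack([np.ones(len(y)), X])
    XtW = Xd.T * w            # row-wise weighting, i.e. X^T W
    return np.linalg.solve(XtW @ Xd, XtW @ y)  # [intercept, slope, ...]

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))   # 50 cities on a 10x10 map
X = rng.normal(size=50)
y = 1.0 + 2.0 * X                           # a spatially constant relation
beta = gwr_local_fit(coords, X, y, target=np.array([5.0, 5.0]))
print(beta)  # recovers [1, 2] since the relation is the same everywhere
```

In a full GWR the fit is repeated at every location, so coefficients are allowed to vary over space.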

Keywords: carbon emission, GWR model, urban energy consumption, urban form

Procedia PDF Downloads 330
3705 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approaches for estimating bridge deterioration use Markov-chain models and regression analysis. Traditional Markov models have problems in estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods, so large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach provide similar estimates; however, the former provides results that are more conservative. That is, the Small Data Method provided slightly lower bridge condition ratings than the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for groups of bridges, and the best regression model was selected for each group. The results obtained were very similar to those obtained when using Markov chains; however, it is desirable to use more data for better results.
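In the traditional approach, transition probabilities are the row-normalised counts of observed condition-rating transitions, after which the condition distribution is propagated forward one inspection cycle at a time. A pure-Python sketch with invented counts (the Small Data Method itself is not reproduced here):

```python
def transition_matrix(counts):
    # counts[i][j]: number of bridges observed moving from condition i
    # to condition j. Row-normalising gives the maximum-likelihood
    # transition probabilities.
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

def propagate(state, P, years):
    # Multiply the condition distribution by P once per cycle.
    for _ in range(years):
        state = [sum(state[i] * P[i][j] for i in range(len(P)))
                 for j in range(len(P))]
    return state

counts = [[80, 20, 0],    # condition 9 -> {9, 8, 7}
          [0, 70, 30],    # condition 8 -> {8, 7}, deterioration only
          [0, 0, 100]]    # condition 7 is absorbing in this toy example
P = transition_matrix(counts)
print(propagate([1.0, 0.0, 0.0], P, 5))
```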

Keywords: concrete bridges, deterioration, Markov chains, probability matrix

Procedia PDF Downloads 329
3704 Rejection Sensitivity and Romantic Relationships: A Systematic Review and Meta-Analysis

Authors: Mandira Mishra, Mark Allen

Abstract:

This meta-analysis explored whether rejection sensitivity relates to facets of romantic relationships. A comprehensive literature search identified 60 studies (147 effect sizes; 16,955 participants) that met the inclusion criteria. Data were analysed using inverse-variance weighted random-effects meta-analysis. Mean effect sizes from 21 meta-analyses provided evidence that more rejection-sensitive individuals report lower levels of relationship satisfaction and relationship closeness, lower levels of perceived partner satisfaction, a greater likelihood of intimate partner violence (perpetration and victimization), higher levels of relationship concerns and relationship conflict, and higher levels of jealousy and self-silencing behaviours. There was also some evidence that rejection-sensitive individuals are more likely to engage in risky sexual behaviour and are more prone to sexual compulsivity. There was no evidence of publication bias, and the computed averages showed varying levels of heterogeneity. Random-effects meta-regression identified participant age and sex as important moderators of the pooled mean effects. These findings provide a foundation for the theoretical development of rejection sensitivity in romantic relationships and should be of interest to relationship and marriage counsellors and other relationship professionals.
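Inverse-variance weighted random-effects pooling can be sketched with the DerSimonian-Laird estimator of between-study variance. The abstract does not state which tau² estimator was used, so that choice is an assumption here, and the effect sizes below are invented:

```python
def random_effects_pool(effects, variances):
    # Fixed-effect (inverse-variance) pool first, to get Cochran's Q.
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fe = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fe) ** 2 for wi, e in zip(w, effects))
    # DerSimonian-Laird between-study variance, truncated at zero.
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight with tau2 added to each study's variance.
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

effects = [-0.30, -0.22, -0.41, -0.18]   # invented study-level effects
variances = [0.010, 0.015, 0.012, 0.020]
print(random_effects_pool(effects, variances))
```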

Keywords: intimate partner violence, relationship satisfaction, commitment, sexual orientation, risky sexual behaviour

Procedia PDF Downloads 70
3703 Brain Tumor Segmentation Based on Minimum Spanning Tree

Authors: Simeon Mayala, Ida Herdlevær, Jonas Bull Haugsøen, Shamundeeswari Anandan, Sonia Gavasso, Morten Brun

Abstract:

In this paper, we propose a minimum spanning tree-based method for segmenting brain tumors. The proposed method performs interactive segmentation based on the minimum spanning tree without parameter tuning. The steps involve preprocessing, building a graph, constructing a minimum spanning tree, and a newly implemented way of interactively segmenting the region of interest. In the preprocessing step, a Gaussian filter is applied to the 2D images to remove noise. Then, the pixel neighbor graph is weighted by intensity differences and the corresponding minimum spanning tree is constructed. The image is loaded in an interactive window for segmenting the tumor. The region of interest and the background are selected by clicking, which splits the minimum spanning tree into two trees: one representing the region of interest and the other the background. Finally, the segmentation given by the two trees is visualized. The proposed method was tested by segmenting two different 2D brain T1-weighted magnetic resonance image data sets. The comparison between our results and the gold standard segmentation confirmed the validity of the minimum spanning tree approach. The proposed method is simple to implement, and the results indicate that it is accurate and efficient.
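The graph-and-MST core of the method fits in a short pure-Python sketch. Here the interactive click-based split is replaced by simply cutting the heaviest MST edge, which separates the two most dissimilar regions of a toy image; this is an illustrative simplification, not the authors' tool:

```python
def mst_segment(image):
    # Build a 4-neighbor pixel graph weighted by intensity difference,
    # compute the MST with Kruskal's algorithm, then cut the heaviest
    # MST edge so the image splits into two segments.
    rows, cols = len(image), len(image[0])
    idx = lambda r, c: r * cols + c
    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                edges.append((abs(image[r][c] - image[r][c + 1]),
                              idx(r, c), idx(r, c + 1)))
            if r + 1 < rows:
                edges.append((abs(image[r][c] - image[r + 1][c]),
                              idx(r, c), idx(r + 1, c)))

    parent = list(range(rows * cols))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, a, b in sorted(edges):           # Kruskal: lightest edges first
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            mst.append((w, a, b))

    mst.remove(max(mst))                    # cut heaviest edge -> two trees
    parent = list(range(rows * cols))       # relabel the two components
    for w, a, b in mst:
        parent[find(a)] = find(b)
    return [[find(idx(r, c)) for c in range(cols)] for r in range(rows)]

labels = mst_segment([[0, 0, 100, 100],
                      [0, 0, 100, 100]])
print(labels)  # two distinct labels: left block vs right block
```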

Keywords: brain tumor, brain tumor segmentation, minimum spanning tree, segmentation, image processing

Procedia PDF Downloads 111
3702 Financial Liberalization and Allocation of Bank Credit in Malaysia

Authors: Chow Fah Yee, Eu Chye Tan

Abstract:

The main purpose of developing a modern and sophisticated financial system is to mobilize and allocate the country’s resources to productive uses and, in the process, contribute to economic growth. Financial liberalization, introduced in Malaysia in 1978, was said to be a step towards this goal. According to McKinnon and Shaw, the deregulation of a country’s financial system will create a more efficient and competitive, market-driven financial sector, with savings being channelled to the most productive users. This paper aims to assess whether financial liberalization resulted in bank credit being allocated to the more productive users in Malaysia by: firstly, using a Chi-square test to determine whether there exists a relationship between financial liberalization and bank lending in Malaysia; secondly, analyzing on a comparative basis the share of loans secured by 9 major economic sectors, using data on bank loans from 1975 to 2003; and lastly, using present value analysis and rank correlation to determine whether the recipients of bigger loans are the more efficient users. The Chi-square test confirmed the generally observed trend of an increase in bank credit with the adoption of financial liberalization. The comparative analysis of loans showed that the bulk of credit was allocated to service sectors, consumer loans and property-related sectors, at the expense of industry. The rank correlation analysis showed no relationship between sector productivity and the amount of loans obtained, implying that the sectors that received more loans were not the more efficient ones.
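The rank correlation between loan size and a productivity measure can be computed with Spearman's formula. A stdlib-only sketch that ignores ties; the sector figures below are invented, not the paper's data:

```python
def spearman_rho(x, y):
    # Rank both series, then apply 1 - 6*sum(d^2) / (n*(n^2-1)).
    # Tied values are not handled, for simplicity.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

loans = [120, 80, 300, 40, 150, 60, 210, 90, 30]          # loan shares
productivity = [1.1, 0.9, 0.7, 1.4, 1.0, 1.2, 0.6, 0.8, 1.3]
print(round(spearman_rho(loans, productivity), 3))
```

A rho near zero, as the paper reports, would indicate no monotone association between loan allocation and efficiency.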

Keywords: allocation of resources, bank credit, financial liberalization, economics

Procedia PDF Downloads 432
3701 Assessing Spatial Associations of Mortality Patterns in Municipalities of the Czech Republic

Authors: Jitka Rychtarikova

Abstract:

Regional differences in mortality in the Czech Republic (CR) may be moderate from a broader European perspective, but important discrepancies in life expectancy can be found between smaller territorial units. In this study, territorial units are based on the Administrative Districts of Municipalities with Extended Powers (MEP), a definition that came into force on January 1, 2003. There are 205 such units plus the city of Prague. The MEP is the smallest unit for which mortality patterns based on life tables can be investigated, and the Czech Statistical Office has been calculating such life tables (every five years) since 2004. MEP life tables for 2009-2013 for males and females allowed the investigation of three main life cycles, using the temporary life expectancies between the exact ages of 0 and 35 and of 35 and 65, and the life expectancy at exact age 65. The results showed regional survival inequalities primarily at adult and older ages. Consequently, only mortality indicators for the adult and elderly populations were related to unlinked data from the 2011 census for the same age groups. The most relevant socio-economic factors taken from the census are: having a partner, educational level, and the unemployment rate. The unemployment rate was measured for adults aged 35-64 completed years. Exploratory spatial data analysis methods were used to detect regional patterns in spatially contiguous MEP units. The presence of spatial non-stationarity (spatial autocorrelation) in mortality levels for male and female adults (35-64) and elderly males and females (65+) was tested using global Moran’s I. Spatial autocorrelation of mortality patterns was mapped using local Moran’s I, with the intention of depicting clusters of low or high mortality and spatial outliers for the two age groups (35-64 and 65+). The highest Moran’s I was observed for male temporary life expectancy between the exact ages of 35 and 65 (0.52), and the lowest for female life expectancy at 65 (0.26). Generally, men showed stronger spatial autocorrelation than women. The relationship between mortality indicators such as life expectancies and socio-economic factors, namely the percentage of males/females having a partner, the percentage of males/females with at least higher secondary education, and the percentage of unemployed males/females in the economically active population aged 35-64 years, was evaluated using multiple regression (OLS). The results were then compared to outputs from geographically weighted regression (GWR). In the Czech Republic, there are two broader territories, North-West Bohemia (NWB) and North Moravia (NM), in which excess mortality is well established. Results of the t-test of the spatial regression showed that for males aged 35-64 the association between mortality and unemployment (when adjusted for education and partnership) was stronger in NM than in NWB, while educational level affected the length of survival more in NWB. Geographic variation and relationships in the mortality of CR MEP will also be tested using the spatial Durbin approach. The calculations were conducted by means of ArcGIS 10.6 and SAS 9.4.
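Global Moran's I itself is a short computation: a cross-product of neighboring deviations scaled by the total weight. A pure-Python sketch with an invented four-unit chain of neighbors:

```python
def morans_i(values, weights):
    # values: n observations; weights: n x n spatial weight matrix.
    # I = (n / S0) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four districts on a line: the first two low-mortality, the last two high.
values = [1.0, 1.0, 10.0, 10.0]
weights = [[0, 1, 0, 0],
           [1, 0, 1, 0],
           [0, 1, 0, 1],
           [0, 0, 1, 0]]
print(morans_i(values, weights))  # positive: similar values cluster
```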

Keywords: Czech Republic, mortality, municipality, socio-economic factors, spatial analysis

Procedia PDF Downloads 110
3700 Identification and Control the Yaw Motion Dynamics of Open Frame Underwater Vehicle

Authors: Mirza Mohibulla Baig, Imil Hamda Imran, Tri Bagus Susilo, Sami El Ferik

Abstract:

The paper deals with system identification and control of a nonlinear model of a semi-autonomous unmanned underwater vehicle (UUV). The input-output data is first generated using the experimental values of the model parameters, and then this data is used to compute the estimated parameter values. In this study, we use the semi-autonomous UUV LAURS model, which was developed by the Sensors and Actuators Laboratory at the University of Sao Paulo. We applied three methods to identify the parameters: the integral method, which is a classical least squares method; recursive least squares; and weighted recursive least squares. We also apply three different inputs (a step input, a sine wave input, and a random input) with each identification method. After the identification stage, we investigate the control performance of the yaw motion of the nonlinear semi-autonomous UUV using a feedback linearization-based controller. In addition, we compare the performance of the controller with and without an integral part, along with state feedback. Finally, the disturbance rejection and resilience of the controller are tested. The results demonstrate the ability of the system to recover from such faults.
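The recursive least squares update used for parameter estimation can be sketched in a few lines of numpy; a forgetting factor lam < 1 gives the weighted variant. The regressor data below are invented for a toy system, not the LAURS model:

```python
import numpy as np

def rls_identify(phis, ys, lam=1.0, delta=1000.0):
    # Recursive least squares: the estimate theta is refined one
    # input/output sample at a time. lam < 1 adds exponential
    # forgetting, which yields weighted RLS.
    n = len(phis[0])
    theta = np.zeros(n)
    P = delta * np.eye(n)                    # large initial covariance
    for phi, y in zip(phis, ys):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (lam + phi @ P @ phi)  # gain
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

# Invented regressors for a noise-free toy system y = 2*u1 + 3*u2.
phis = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
ys = [2.0, 3.0, 5.0, 7.0, 8.0]
print(rls_identify(phis, ys))  # approaches [2, 3]
```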

Keywords: system identification, underwater vehicle, integral method, recursive least squares, weighted recursive least squares, feedback linearization, integral error

Procedia PDF Downloads 524
3699 Comparative Diagnostic Performance of Diffusion-Weighted Imaging Combined with Microcalcifications on Mammography for Discriminating Malignant from Benign BI-RADS 4 Lesions with the Kaiser Score

Authors: Wangxu Xia

Abstract:

BACKGROUND: BI-RADS 4 lesions raise the possibility of malignancy and warrant further clinical and radiologic work-up. This study aimed to evaluate the performance of diffusion-weighted imaging (DWI) and microcalcifications on mammography for predicting malignancy of BI-RADS 4 lesions. In addition, the predictive performance of DWI combined with microcalcifications was compared with the Kaiser score. METHODS: Between January 2021 and June 2023, 144 patients with 178 BI-RADS 4 lesions who underwent conventional MRI, DWI, and mammography were included. The lesions were dichotomized into benign or malignant according to the pathological results from core needle biopsy or surgical mastectomy. DWI was performed with b values of 0 and 800 s/mm2 and analyzed using the apparent diffusion coefficient, and a Kaiser score > 4 was considered to suggest malignancy. The diagnostic performance of each test was evaluated with the receiver operating characteristic (ROC) curve. RESULTS: The area under the curve (AUC) for DWI was significantly higher than that of mammography (0.86 vs 0.71, P<0.001), but was comparable with that of the Kaiser score (0.86 vs 0.84, P=0.58). However, the AUC for DWI combined with mammography was significantly higher than that of the Kaiser score (0.93 vs 0.84, P=0.007). The sensitivity for discriminating malignant from benign BI-RADS 4 lesions was highest (89%) for the Kaiser score, but the highest specificity (83%) was achieved with DWI combined with mammography. CONCLUSION: DWI combined with microcalcifications on mammography could discriminate malignant BI-RADS 4 lesions from benign ones with a high AUC and specificity; however, the Kaiser score had better sensitivity.
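The AUC values compared above have a direct probabilistic reading: the chance that a randomly chosen malignant lesion scores higher than a benign one. A stdlib sketch of that rank-based (Mann-Whitney) computation, with invented scores:

```python
def auc(pos_scores, neg_scores):
    # Probability that a positive (malignant) case outranks a negative
    # (benign) one; ties count 0.5. Equals the area under the ROC curve.
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            wins += 1.0 if p > q else 0.5 if p == q else 0.0
    return wins / (len(pos_scores) * len(neg_scores))

malignant = [0.91, 0.85, 0.77, 0.66, 0.58]   # invented model scores
benign = [0.72, 0.48, 0.41, 0.33, 0.29, 0.12]
print(round(auc(malignant, benign), 3))  # 0.933
```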

Keywords: MRI, DWI, mammography, breast disease

Procedia PDF Downloads 47
3698 A New Method to Estimate the Low Income Proportion: Monte Carlo Simulations

Authors: Encarnación Álvarez, Rosa M. García-Fernández, Juan F. Muñoz

Abstract:

Estimation of a proportion has many applications in economics and social studies. A common application is the estimation of the low income proportion, which gives the proportion of people in a population classified as poor. In this paper, we present this poverty indicator and propose the logistic regression estimator for the problem of estimating the low income proportion. Various sampling designs are presented. Using a real data set obtained from the European Survey on Income and Living Conditions, Monte Carlo simulation studies are carried out to analyze the empirical performance of the logistic regression estimator under the various sampling designs considered in this paper. Results derived from the Monte Carlo simulation studies indicate that the logistic regression estimator can be more accurate than the customary estimator under these sampling designs. The stratified sampling design can also provide more accurate results.
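The simulation loop behind such a study is straightforward to sketch. The snippet below Monte-Carlos only the customary estimator under simple random sampling; the logistic regression estimator and the paper's complex sampling designs are omitted, and the population parameters are invented:

```python
import random
import statistics

def low_income_proportion(sample, poverty_line):
    # Customary estimator: the sample share of incomes below the line.
    return sum(1 for y in sample if y < poverty_line) / len(sample)

def monte_carlo(population, line, n, reps=1000, seed=1):
    # Repeatedly draw simple random samples and summarise the estimator.
    rng = random.Random(seed)
    est = [low_income_proportion(rng.sample(population, n), line)
           for _ in range(reps)]
    return statistics.fmean(est), statistics.stdev(est)

gen = random.Random(0)
population = [gen.lognormvariate(10.0, 0.5) for _ in range(10000)]
line = 0.6 * statistics.median(population)   # a common poverty-line choice
true_p = low_income_proportion(population, line)
mean_est, sd_est = monte_carlo(population, line, n=200)
print(round(true_p, 3), round(mean_est, 3), round(sd_est, 3))
```

Comparing estimators then amounts to comparing their simulated means (bias) and standard deviations (precision) against `true_p`.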

Keywords: poverty line, risk of poverty, auxiliary variable, ratio method

Procedia PDF Downloads 444
3697 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers

Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus

Abstract:

Design optimization of heat exchangers is a very complicated task that has traditionally been carried out through a trial-and-error procedure. To overcome the difficulties of conventional design approaches, especially when a large number of variables, constraints and objectives are involved, a new method is presented in this study, based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted sum approach, and a novel constraint handling strategy. Since conventional constraint handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced which is both extremely user-friendly and effective. A case study from industry has been investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find the near-Pareto-optimal front with higher accuracy than the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulty of a trial-and-error process for setting the penalty parameters is eliminated in this algorithm.
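Weighted-sum scalarization plus a basic PSO loop can be sketched as below. The feasibility-based ranking strategy and the heat-exchanger objectives are not reproduced; the objectives and all coefficients are generic textbook values:

```python
import random

def weighted_sum_pso(f1, f2, w=0.5, dim=2, swarm=20, iters=200, seed=3):
    # Scalarize the two objectives with a weighted sum, then run a
    # plain global-best PSO on the scalar cost.
    rng = random.Random(seed)

    def cost(x):
        return w * f1(x) + (1.0 - w) * f2(x)

    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest

# Two invented conflicting objectives; equal weights -> compromise near 0.
f1 = lambda x: sum((xi - 1.0) ** 2 for xi in x)
f2 = lambda x: sum((xi + 1.0) ** 2 for xi in x)
print([round(b, 3) for b in weighted_sum_pso(f1, f2)])
```

Sweeping the weight w between 0 and 1 traces out an approximation of the Pareto front, one scalarized run per weight.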

Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling

Procedia PDF Downloads 545
3696 Analysis of Basic Science Curriculum as Correlates of Secondary School Students' Achievement in Science Test in Oyo State

Authors: Olubiyi Johnson Ezekiel

Abstract:

The basic science curriculum is an ongoing effort to develop students' potential in a holistic and integrated manner, producing individuals who are intellectually, spiritually, emotionally and physically balanced and harmonious. The main focus of this study is to determine the relationship between students’ achievement in the junior school certificate examination (JSCE) and the senior school basic science achievement test (SSBSAT) on the basis of all the components of basic science. The study employed a descriptive survey research design and utilized junior school certificate examination and senior school basic science achievement test (r = .87) scores as instruments. The data collected were subjected to Pearson product-moment correlation, Spearman rank correlation, regression analysis and analysis of variance. The findings revealed that the mean effects of achievement in all the components of basic science on the SSBSAT are significantly different from zero. Based on the findings, it was concluded that the relationship between students’ achievement in the JSCE and the SSBSAT was weak, and that to achieve a unit increase in students’ achievement in the SSBSAT when other subjects are held constant, learning would have to increase in physics by 0.081 units, in chemistry by 0.072 units, in biology by 0.025 units, and in general knowledge by 0.097 units. It was recommended, among others, that the general knowledge aspect of basic science be included in either the physics or chemistry aspect of basic science.

Keywords: basic science curriculum, students’ achievement, science test, secondary school students

Procedia PDF Downloads 434
3695 An Overbooking Model for Car Rental Service with Different Types of Cars

Authors: Naragain Phumchusri, Kittitach Pongpairoj

Abstract:

Overbooking is a very useful revenue management technique that can help reduce the costs caused by either undersales or oversales. In this paper, we propose an overbooking model for a car rental service with two types of cars that minimizes total cost. With two types of cars, there is the possibility of upgrading from the lower type to the upper type, which makes the model more complex than the single-type scenario. We have found that convexity can be proved in this case. A sensitivity analysis of the parameters is conducted to observe the effects of the relevant parameters on the optimal solution. A model simplification is proposed using multiple linear regression analysis, which can help estimate the optimal overbooking level using appropriate independent variables. The results show that the overbooking level from the multiple linear regression model is relatively close to the optimal solution (with an adjusted R-squared value of at least 72.8%). To evaluate the performance of the proposed model, the total cost was compared with the case where the decision maker uses a naïve method to set the overbooking level. It was found that the total cost from the optimal solution is, on average, only 0.5 to 1 percent lower than the cost from the regression model, while it is approximately 67% lower than the cost obtained by the naïve method. This indicates that our proposed simplification method using regression analysis performs effectively in estimating the overbooking level.

Keywords: overbooking, car rental industry, revenue management, stochastic model

Procedia PDF Downloads 158