Search results for: regression coefficient
1096 Mathematical Modeling of Wind Energy System for Designing Fault Tolerant Control
Authors: Patil Ashwini, Archana Thosar
Abstract:
This paper addresses a mathematical model of a wind energy system useful for designing fault tolerant control. To meet power demand, large-capacity wind energy systems are vital. These systems are installed offshore, where unplanned service is very costly. Whenever a fault occurs between two planned services, the system may stop working abruptly, which might even lead to complete failure of the system. To enhance reliability and availability and to reduce the maintenance cost of wind turbines, fault tolerant control systems are essential. Designing any control system requires an appropriate mathematical model. In this paper, the two-mass model is modified by considering frequent mechanical faults such as misalignments in the drive train and faults in gears and bearings. These faults are subject to a wear process and cause frictional losses. The paper incorporates these faults into the mathematics of the wind energy system. Further, the work is extended to study how variations of the parameters, namely the generator inertia constant, spring constant, viscous friction coefficient and gear ratio, affect the pole-zero plot, which is related to the physical design of the wind turbine. The behavior of the wind turbine during drive-train faults is simulated and briefly discussed.
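For orientation, a commonly used form of the two-mass drive-train model referred to above is sketched here; the notation (J_r, J_g, K_ls, B_ls, n_g) is generic textbook notation and not taken from the paper itself.

```latex
\begin{aligned}
J_r \,\dot{\omega}_r &= T_a - T_{ls}, \\
J_g \,\dot{\omega}_g &= T_{hs} - T_{em}, \\
T_{ls} &= K_{ls}\,(\theta_r - \theta_{ls}) + B_{ls}\,(\omega_r - \omega_{ls}), \\
n_g &= \frac{T_{ls}}{T_{hs}} = \frac{\omega_g}{\omega_{ls}},
\end{aligned}
```

where J_r and J_g are the rotor and generator inertias, T_a the aerodynamic torque, T_em the generator torque, K_ls the shaft stiffness (spring constant), B_ls the viscous friction coefficient and n_g the gear ratio. Drive-train faults of the kind discussed above are typically represented by perturbing K_ls, B_ls or n_g.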
Keywords: Mathematical model of wind energy system, stability analysis, shaft stiffness, viscous friction coefficient, gear ratio, generator inertia, fault tolerant control.
1095 Energy Efficient Construction and the Seismic Resistance of Passive Houses
Authors: Vojko Kilar, Boris Azinović, David Koren
Abstract:
Recently, an increasing trend of passive and low-energy buildings spreading from non-earthquake-prone to earthquake-prone regions has raised the question of the seismic safety of such buildings. The paper describes the most commonly used thermal insulating materials and the special details which could be critical from the point of view of earthquake resistance. The most critical cases appeared to be buildings founded on an RC foundation slab lying on a thermal insulation (TI) layer made of extruded polystyrene (XPS). It was pointed out that in such cases the seismic response of these buildings might differ from the response of their fixed-base counterparts. The main parameters that need special designers’ attention are: the building’s lateral top displacement, the ductility demand of the superstructure, the foundation friction coefficient demand, the maximum compressive stress in the TI layer and the percentage of the uplifted foundation. The analyses have shown that potentially negative influences of inserting the TI under the foundation slab can be expected only for slender high-rise buildings subjected to severe earthquakes. The opposite was demonstrated for the foundation friction coefficient demand, which could exceed the capacity value even in the case of low-rise buildings subjected to moderate earthquakes. Some suggestions to prevent horizontal shifts are also given.
Keywords: Earthquake Response, Extruded Polystyrene (XPS), Low-Energy Buildings, Foundations on Thermal Insulation Layer.
1094 Monitoring Blood Pressure Using Regression Techniques
Authors: Qasem Qananwah, Ahmad Dagamseh, Hiam AlQuran, Khalid Shaker Ibrahim
Abstract:
Blood pressure gives physicians deep insight into the cardiovascular system, and determining an individual's blood pressure is a standard clinical procedure for assessing cardiovascular problems. Conventional techniques to measure blood pressure (e.g. the cuff method) allow only a limited number of readings over a given period (e.g. every 5-10 minutes). Additionally, these systems disturb blood flow, impeding continuous blood pressure monitoring, especially in emergency cases or critically ill persons. In this paper, the most important statistical features of photoplethysmogram (PPG) signals were extracted to estimate blood pressure noninvasively. PPG signals from more than 40 subjects were measured and analyzed, and 12 features were extracted. The features were fed to principal component analysis (PCA) to find the most important independent features having the highest correlation with blood pressure. The results show that the mean of the stiffness index and the standard deviation of the beat-to-beat heart rate were the most important features. A model representing these features for Systolic Blood Pressure (SBP) and Diastolic Blood Pressure (DBP) was obtained using a statistical regression technique. Surface fitting was used to best fit the series of data, and the results show that the error in estimating the SBP is 4.95% and in estimating the DBP is 3.99%.
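A minimal sketch of the kind of pipeline described above (PCA over PPG-derived features followed by a regression model for blood pressure) is shown below; the data and feature count are placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Placeholder data: 40 subjects x 12 PPG-derived features (e.g. stiffness
# index, beat-to-beat heart-rate statistics), with measured SBP targets (mmHg).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))
y_sbp = 110 + 20 * rng.random(40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y_sbp, test_size=0.25, random_state=0)

# PCA keeps the components that carry most of the variance in the features.
pca = PCA(n_components=2).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

# Regression from the retained components to blood pressure.
model = LinearRegression().fit(Z_tr, y_tr)
error_pct = 100 * np.mean(np.abs(model.predict(Z_te) - y_te) / y_te)
print(f"Mean absolute percentage error (SBP): {error_pct:.2f}%")
```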
Keywords: Blood pressure, noninvasive optical system, PCA, continuous monitoring.
1093 EZW Coding System with Artificial Neural Networks
Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar
Abstract:
Image compression plays a vital role in today's communication. The limitation in allocated bandwidth leads to slower communication, so to improve the rate of transmission within the limited bandwidth the image data must be compressed before transmission. There are basically two types of compression: 1) lossy compression and 2) lossless compression. Although lossy compression gives more compression than lossless compression, the accuracy of retrieval is lower for lossy compression. The JPEG and JPEG2000 image compression systems use Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance compared to existing wavelet transforms. For further improvement of compression applications, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis of different images is proposed, with an analysis of the EZW coding system combined with the error backpropagation algorithm. The implementation and analysis show approximately 30% more accuracy in the retrieved image compared to the existing EZW coding system.
Keywords: Accuracy, Compression, EZW, JPEG2000, Performance.
1092 Space-Time Variation in Rainfall and Runoff: Upper Betwa Catchment
Authors: Ritu Ahlawat
Abstract:
Among all geo-hydrological relationships, the rainfall-runoff relationship is of utmost importance in any hydrological investigation and water resource planning. Spatial variation and the lag time involved in obtaining areal estimates for the basin as a whole can affect the parameterization in the design stage as well as in the planning stage. In conventional hydrological processing of data, the spatial aspect is either ignored or interpolated at sub-basin level. Temporal variation, when analysed for different stages, can provide clues about its spatial effectiveness. The interplay of space-time variation at pixel level can provide a better understanding of basin parameters. The sustenance of design structures for different return periods and their spatial auto-correlations should be studied at different geographical scales for better management and planning of water resources. In order to understand the relative effect of spatio-temporal variation in the hydrological data network, a detailed geo-hydrological analysis of the Betwa river catchment, falling in the Lower Yamuna Basin, is presented in this paper. Moreover, exact estimates of the availability of water in the Betwa river catchment, especially in the wake of the recent Betwa-Ken linkage project, need thorough scientific investigation for better planning. Therefore, an attempt in this direction is made here to analyse the existing hydrological and meteorological data with the help of SPSS, GIS and MS-EXCEL software. A comparison of spatial and temporal correlations at sub-catchment level for the upper Betwa reaches has been made to demonstrate the representativeness of rain gauges. First, flows at different locations are used to derive correlation and regression coefficients. Then, long-term normal water yield estimates based on pixel-wise regression coefficients of the rainfall-runoff relationship have been mapped. The areal values obtained from these maps can definitely improve upon estimates based on point-based extrapolations or areal interpolations.
Keywords: Catchment's runoff estimates, influence area regional regression coefficients, runoff yield series,
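As a hedged illustration of the per-pixel (or per-gauge) rainfall-runoff regression described above, the sketch below fits a regression coefficient and uses it for a long-term yield estimate; the rainfall and runoff series are synthetic stand-ins, not Betwa catchment data.

```python
import numpy as np

# Synthetic annual rainfall (mm) and runoff (mm) series for one pixel/gauge.
rainfall = np.array([850., 920., 780., 1010., 640., 890., 950., 720.])
runoff = 0.45 * rainfall - 120 + np.random.default_rng(1).normal(0, 15, rainfall.size)

# Least-squares regression coefficient (slope), intercept and correlation.
slope, intercept = np.polyfit(rainfall, runoff, deg=1)
r = np.corrcoef(rainfall, runoff)[0, 1]
print(f"runoff ~ {slope:.3f} * rainfall + {intercept:.1f}   (r = {r:.3f})")

# Long-term normal water yield estimate from the fitted relationship.
normal_rainfall = rainfall.mean()
print(f"Estimated normal water yield: {slope * normal_rainfall + intercept:.1f} mm")
```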
1091 The Influences of Accountants’ Potential Performance on Their Working Process: Government Savings Bank, Northeast, Thailand
Authors: Prateep Wajeetongratana
Abstract:
The purpose of this research was to study the influence of accountants’ potential performance on their working process, in a case study of Government Savings Banks in the northeast of Thailand. The independent variables included accounting knowledge, accounting skill, accounting value, accounting ethics, and accounting attitude, while the dependent variable was the success of the working process. A total of 155 accountants working for the Government Savings Banks were selected by random sampling. A questionnaire was used as the tool for collecting data. The statistics used in this research included percentage, mean, and multiple regression analysis.
The findings revealed that the majority of accountants were female, aged between 35 and 40 years old. Most of the respondents had an undergraduate degree and ten years of experience. Moreover, the factors of accounting knowledge, accounting skill, accounting value, accounting ethics and accounting attitude were rated at a high level. The regression analysis of the observed data revealed a causal relationship, with the observed variables explaining at least 51 percent of the success in the accountants’ working process.
Keywords: Influence, Potential Performance, Success, Working Process.
1090 Comparison of Machine Learning Techniques for Single Imputation on Audiograms
Authors: Sarah Beaver, Renee Bryce
Abstract:
Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K Nearest Neighbors (KNN), and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data contain patients who had, or were candidates for, cochlear implants. Accuracy is compared across two different nested cross-validation k values. Over 4000 audiograms from 800 unique patients were used. Additionally, training on combined left- and right-ear audiograms is compared with training on single-ear audiograms. In terms of Root Mean Square Error (RMSE), the best Random Forest models range from 4.74 to 6.37, with R2 values ranging from .91 to .96. The best KNN models range from 5.00 to 7.72 RMSE, with R2 values ranging from .89 to .95. Overall, the best imputation models achieved R2 between .89 and .96 and RMSE values of less than 8 dB. We also show that classification predictive models performed better (a two percent increase in accuracy) with our imputation models than with constant imputation.
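The sketch below illustrates the general idea (a Random Forest regressor imputing one missing audiogram frequency from the others, scored with RMSE and R2); the audiograms are synthetic and the frequency layout is only an assumption, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

freqs = [125, 250, 500, 1000, 2000, 4000, 8000]   # Hz, assumed column layout
rng = np.random.default_rng(0)
# Synthetic audiograms (dB HL): a sloping hearing loss plus noise, one row per ear.
base = rng.uniform(20, 70, size=(800, 1))
slope = rng.uniform(0, 5, size=(800, 1))
X_full = base + slope * np.arange(len(freqs)) + rng.normal(0, 3, (800, len(freqs)))

# Impute the 4000 Hz column (index 5) from the remaining frequencies.
target_idx = 5
X = np.delete(X_full, target_idx, axis=1)
y = X_full[:, target_idx]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE = {rmse:.2f} dB, R2 = {r2_score(y_te, pred):.3f}")
```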
Keywords: Machine Learning, audiograms, data imputations, single imputations.
1089 Analytical Modelling of Surface Roughness during Compacted Graphite Iron Milling Using Ceramic Inserts
Authors: S. Karabulut, A. Güllü, A. Güldas, R. Gürbüz
Abstract:
This study investigates the effects of the lead angle and chip thickness variation on surface roughness during the machining of compacted graphite iron using ceramic cutting tools under dry cutting conditions. Analytical models were developed for predicting the surface roughness values of the specimens after the face milling process. Experimental data were collected and imported to the artificial neural network model. A multilayer perceptron model was used with the backpropagation algorithm, employing the input parameters of lead angle, cutting speed and feed rate in connection with chip thickness. Furthermore, analysis of variance was employed to determine the effects of the cutting parameters on surface roughness. An artificial neural network and regression analysis were used to predict surface roughness. The predicted values were compared with the collected experimental data, and the corresponding percentage error was computed. The analysis results revealed that the lead angle is the dominant factor affecting surface roughness. Experimental results indicated an improvement in the surface roughness value as the lead angle decreased from 88° to 45°.
Keywords: CGI, milling, surface roughness, ANN, regression, modeling, analysis.
1088 A Study on the Differential Diagnostic Model for Newborn Hearing Loss Screening
Authors: Chun-Lang Chang
Abstract:
According to statistics, the prevalence of congenital hearing loss in Taiwan is approximately six per thousand; furthermore, one in a thousand infants has severe hearing impairment. Hearing ability during infancy has a significant impact on the development of a child's oral expression, language maturity, cognitive performance, educational ability and social behavior later in life. Although most children born with hearing impairment have sensorineural hearing loss, almost every child retains at least some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in time, in addition to hearing and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who are not diagnosed in time, and are thus unable to begin hearing and speech rehabilitation promptly, might lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of both mental and physical functions, intelligence, and social adaptability. Not only does this result in irreparable regret for the hearing-impaired child over a lifetime, it also creates a heavy burden for the family and society. Therefore, it is necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss, so that early interventions can be provided in time and waste of medical resources eliminated. This study uses information from the neonatal database of the case hospital as the subjects, adopting two different analysis approaches: using the support vector machine (SVM) directly for model prediction, and using logistic regression to screen the factors prior to SVM prediction, and then comparing the results. The results indicate that prediction accuracy is as high as 96.43% when the factors are screened and selected through logistic regression. Hence, the model constructed in this study provides real help in clinical diagnosis for physicians and is genuinely beneficial for early interventions in newborn hearing impairment.
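A hedged sketch of the two-stage approach described above (logistic regression used to screen factors, followed by an SVM classifier on the retained ones); the features and outcome below are synthetic placeholders, not the case hospital's variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder screening data: 10 candidate factors, binary hearing-loss outcome.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: logistic regression screens factors by coefficient magnitude.
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
keep = np.argsort(np.abs(logit.coef_[0]))[-4:]     # retain the 4 strongest factors

# Stage 2: SVM trained only on the screened factors.
svm = SVC(kernel="rbf").fit(X_tr[:, keep], y_tr)
print(f"Screened factor indices: {sorted(keep.tolist())}")
print(f"SVM accuracy on screened factors: {svm.score(X_te[:, keep], y_te):.3f}")
```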
Keywords: Data mining, Hearing impairment, Logistic regression analysis, Support vector machines
1087 Deterioration Assessment Models for Water Pipelines
Authors: L. Parvizsedghy, I. Gkountis, A. Senouci, T. Zayed, M. Alsharqawi, H. El Chanati, M. El-Abbasy, F. Mosleh
Abstract:
The aging and deterioration of water pipelines in cities worldwide result in more frequent water main breaks, water service disruptions, and flooding damage. Therefore, there is an urgent need to undertake proper maintenance procedures to avoid breaks and disastrous failures. However, due to budget limitations, the maintenance of water pipeline networks needs to be prioritized through efficient deterioration assessment models. Previous studies focused on the development of structural or physical deterioration assessment models, which require expensive inspection data. This paper, in contrast, aims at developing deterioration assessment models for water pipelines using statistical techniques. Several deterioration models were developed based on pipeline size, material type, and soil type using linear regression analysis. The categorical nature of some variables affecting pipeline deterioration was accounted for by developing several categorical models. The developed models were validated with an average validity percentage greater than 95%. Moreover, sensitivity analysis carried out across different classifications showed that pipe age is more important than the other factors. The developed models will be helpful for water municipalities and asset managers to assess the condition of their pipes and prioritize them for maintenance and inspection purposes.
Keywords: Water pipelines, deterioration assessment models, regression analysis.
1086 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients Cohorts: A Case Study in Scotland
Authors: Sotirios Raptis
Abstract:
Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and also suffer from unplanned spending that has been negatively impacted by the global financial crisis. Data-driven approaches can help improve policies and plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses service packing using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (class) using ML methods such as Classification and Regression Trees (CART), Random Forests (RF), and Logistic Regression (LGR). The Chi-squared and Student's t significance tests are applied to data over a 39-year span for which records exist for services delivered in Scotland. The demands are associated using probabilities and form parts of statistical hypotheses. These hypotheses, in their null form, assume that the target demand is statistically dependent on other services' demands, and this linking is checked against the data. In addition, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target demand to independent demands, thus forming groups of services. Statistical tests confirmed the ML coupling, made the prediction statistically meaningful and proved that a target service can be matched reliably to other services, while ML showed that such marked relationships can also be linear. Zero padding was used for records of missing years and illustrated such relationships better, both for limited periods and for the entire span, offering long-term data visualizations; limited-year periods explained how well patient numbers can be related over short periods of time, or how they change over time, as opposed to behaviours across more years. The prediction performance of the associations was measured using metrics such as the Receiver Operating Characteristic (ROC), Area Under the Curve (AUC) and Accuracy (ACC), as well as the Chi-squared and Student's tests. Co-plots and comparison tables for the RF, CART, and LGR methods, the p-values from the tests and Information Exchange (IE/MIE) measures are provided, showing the relative performance of the ML methods and of the statistical tests, as well as the behaviour under different learning ratios. The impact of k-nearest neighbours classification (k-NN), Cross-Correlation (CC) and C-Means (CM) first groupings was also studied over limited years and for the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC of 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by zero padding, by irregularities in the data, or by outliers. On average, three linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, low birth weights, alcoholism, drug abuse, and emergency admissions.
The work found that different HSc services can be well packed as plans of limited duration across various service sectors and learning configurations, as confirmed using statistical hypotheses.
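As an illustration only (synthetic yearly demand counts, not the Scottish records), the following sketch pairs a chi-squared association test with a logistic regression predictor of a target service's demand class, mirroring the test-then-predict pairing described above.

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
years = 39
# Synthetic yearly demands for two candidate services and one target service.
cand = rng.poisson(lam=[200, 150], size=(years, 2))
target = 0.6 * cand[:, 0] + 0.3 * cand[:, 1] + rng.normal(0, 10, years)

# Binarise demands into low/high classes around their medians.
target_cls = (target > np.median(target)).astype(int)
cand_cls = (cand > np.median(cand, axis=0)).astype(int)

# Chi-squared test of association between the target class and candidate 0.
table = np.array([[np.sum((target_cls == i) & (cand_cls[:, 0] == j))
                   for j in (0, 1)] for i in (0, 1)])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p-value = {p:.4f}")

# Logistic regression predictor of the target class from the candidate demands;
# in the paper the p-value of the association is compared against the 0.05 level
# before services are packed together.
lgr = LogisticRegression().fit(cand, target_cls)
print(f"Logistic regression accuracy: {lgr.score(cand, target_cls):.3f}")
```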
Keywords: Class, cohorts, data frames, grouping, prediction, probabilities, services.
1085 English Language Learning Strategies Used by University Students: A Case Study of English and Business English Major at Suan Sunandha Rajabhat in Bangkok
Authors: Pranee Pathomchaiwat
Abstract:
The purposes of this research are 1) to study the English language learning strategies used by fourth-year students majoring in English and Business English, 2) to study the English language learning strategies that have an effect on English learning achievement, and 3) to compare the English language learning strategies used by students majoring in English and Business English. The population and sample comprise 139 university students of Suan Sunandha Rajabhat University. The research instruments are a language learning strategies questionnaire, constructed by the researcher and improved on by three experts, and the transcripts that show the results of English learning achievement. The questionnaire covers 1) language practice strategy, 2) memory strategy, 3) communication strategy, 4) making an intelligent guess or compensation strategy, 5) self-discipline in learning management strategy, 6) affective strategy, 7) self-monitoring strategy, and 8) self-study skill strategy. Statistics used in the study are mean, standard deviation, t-test, one-way ANOVA, Pearson product-moment correlation coefficient and regression analysis. The findings reveal that the English language learning strategies most frequently used by the students are the affective strategy, the making an intelligent guess or compensation strategy, the self-study skill strategy and the self-monitoring strategy, respectively. The making an intelligent guess or compensation strategy had the most significant effect on English learning achievement. The English language learning strategies were mostly used by the Business English major students and moderately used by the English major students; their uses of language practice strategies differed significantly at the 0.05 level, and their uses of communication strategies differed significantly at the 0.01 level. In addition, it was found that the poor and fair students most frequently used the affective strategy, while the good students most frequently used the making an intelligent guess or compensation strategy.
Keywords: English language, language learning strategies, English learning achievement, students majoring in English, Business English
1084 Prediction of Watermelon Consumer Acceptability based on Vibration Response Spectrum
Authors: R.Abbaszadeh, A.Rajabipour, M.Delshad, M.J.Mahjub, H.Ahmadi
Abstract:
It is difficult to judge watermelon ripeness from outward characteristics such as size or external color. In this paper, a nondestructive method was studied to determine watermelon (Crimson Sweet) quality. The responses of samples to excitation vibrations were detected using laser Doppler vibrometry (LDV) technology. The phase shift between input and output vibrations was extracted over the whole frequency range, and the first and second resonance frequencies were derived from the frequency response spectra. After the nondestructive tests, the watermelons were sensory evaluated, and the samples were graded on a ripeness scale based on overall acceptability (the total traits desired by consumers). Regression models were developed to predict quality using the obtained results and sample mass. The determination coefficients of the calibration and cross-validation models were 0.89 and 0.71, respectively. This study demonstrated the feasibility of using information derived from vibration response curves for predicting fruit quality. The vibration response of watermelon using the LDV method is measured without direct contact; it is accurate and timely, which could be a significant advantage for classifying watermelons based on consumer opinions.
Keywords: Laser Doppler vibrometry, Phase shift, Overall acceptability, Regression model, Resonance frequency, Watermelon
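A hedged sketch of the regression step (predicting panel acceptability from vibration-derived features and mass); the feature values and the simple linear form are invented placeholders, not the measured LDV data or the paper's fitted model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 40
# Placeholder features: first/second resonance frequency (Hz), phase shift (deg), mass (kg).
X = np.column_stack([rng.uniform(60, 140, n), rng.uniform(150, 300, n),
                     rng.uniform(10, 80, n), rng.uniform(3, 9, n)])
# Placeholder sensory score (overall acceptability, arbitrary 0-10 scale).
y = 0.03 * X[:, 0] - 0.01 * X[:, 2] + 0.4 * X[:, 3] + rng.normal(0, 0.3, n)

model = LinearRegression().fit(X, y)
print(f"Calibration R2: {model.score(X, y):.2f}")
print(f"Cross-validation R2: {cross_val_score(model, X, y, cv=5).mean():.2f}")
```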
1083 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography
Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi
Abstract:
Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography that produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient and optical flux are processed by the standard regularization technique called Levenberg-Marquardt regularization. The reconstruction algorithms Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. The parameters signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing the images are analyzed to assess performance.
Keywords: Diffuse optical tomography, ill-posedness, Levenberg Marquardt method, Split Bregman, the Gradient projection for sparse reconstruction.
1082 Using Artificial Neural Network to Predict Collisions on Horizontal Tangents of 3D Two-Lane Highways
Authors: Omer F. Cansiz, Said M. Easa
Abstract:
The purpose of this study is to predict collision frequency on horizontal tangents combined with vertical curves using artificial neural network (ANN) methods. The proposed ANN models are compared with existing regression models. First, the variables that affect collision frequency were investigated. It was found that only the annual average daily traffic, section length, access density, the rate of vertical curvature, and the smaller curve radius before and after the tangent were statistically significant for the related combinations. Second, three statistical models (negative binomial, zero-inflated Poisson and zero-inflated negative binomial) were developed using the significant variables for three alignment combinations. Third, ANN models were developed by applying the same variables for each combination. The results clearly show that the ANN models have lower mean square error values than the statistical models. Similarly, the AIC values of the ANN models are smaller than those of the regression models for all the combinations. Consequently, the ANN models have better statistical performance than the regression models for estimating collision frequency. The ANN models presented in this paper are recommended for evaluating the safety impacts of 3D alignment elements on horizontal tangents.
Keywords: Collision frequency, horizontal tangent, 3D two-lane highway, negative binomial, zero inflated Poisson, artificial neural network.
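A minimal statsmodels sketch of one of the statistical baselines mentioned above (a negative binomial model of collision frequency); the traffic and geometry variables and coefficients are synthetic stand-ins, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
aadt = rng.uniform(2000, 20000, n)          # annual average daily traffic
length = rng.uniform(0.5, 5.0, n)           # section length, km
access = rng.uniform(0, 10, n)              # access density
mu = np.exp(-2 + 0.0001 * aadt + 0.3 * length + 0.05 * access)
collisions = rng.poisson(mu)                # synthetic collision counts

# Negative binomial GLM of collision counts on the candidate predictors.
X = sm.add_constant(np.column_stack([aadt, length, access]))
nb = sm.GLM(collisions, X, family=sm.families.NegativeBinomial()).fit()
print(nb.summary())
```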
1081 Graphic Analysis of Genotype by Environment Interaction for Maize Hybrid Yield Using Site Regression Stability Model
Authors: Saeed Safari Dolatabad, Rajab Choukan
Abstract:
Selection of maize (Zea mays) hybrids with wide adaptability across diverse farming environments is important prior to recommending them, in order to achieve a high rate of hybrid adoption. Grain yield of 14 maize hybrids, tested in a randomized complete-block design with four replicates across 22 environments in Iran, was analyzed using the site regression (SREG) stability model. The biplot technique facilitates a visual evaluation of superior genotypes, which is useful for cultivar recommendation and mega-environment identification. The objectives of this study were (i) to identify suitable hybrids with both high mean performance and high stability and (ii) to determine mega-environments for maize production in Iran. Biplot analysis identified two mega-environments in this study. The first mega-environment included KRM, KSH, MGN, DZF A, KRJ, DRB, DZF B, SHZ B, and KHM, where hybrid G10 was the best performing hybrid. The second mega-environment included ESF B, ESF A, and SHZ A, where hybrid G4 was the best hybrid. According to the ideal-hybrid biplot, hybrid G10 was better than all other hybrids, followed by the G1 and G3 hybrids. These hybrids were identified as the best hybrids, having high grain yield and high yield stability. GGE biplot analysis provided a framework for identifying the target testing locations that discriminate genotypes that are high yielding and stable.
Keywords: Zea mays L, GGE biplot, Multi-environment trials, Yield stability.
1080 Scour Depth Prediction around Bridge Piers Using Neuro-Fuzzy and Neural Network Approaches
Authors: H. Bonakdari, I. Ebtehaj
Abstract:
The prediction of scour depth around bridge piers is frequently considered in river engineering, and its estimation is one of the key aspects of efficient and optimum bridge structure design. In this study, scour depth around bridge piers is estimated using two methods, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN). The effective parameters in scour depth prediction are determined for the ANN and ANFIS methods via dimensional analysis, and subsequently the scour depth is predicted. The methods' performances are compared with the nonlinear regression (NLR) method. The results show that both methods presented in this study outperform existing methods. Moreover, using the ratio of pier length to flow depth, the ratio of the median particle diameter to flow depth, the ratio of pier width to flow depth, the Froude number and the standard deviation of bed grain size as parameters leads to optimal performance in scour depth estimation.
Keywords: Adaptive neuro-fuzzy inference system, ANFIS, artificial neural network, ANN, bridge pier, scour depth, nonlinear regression, NLR.
1079 Determining the Maximum Lateral Displacement Due to Severe Earthquakes without Using Nonlinear Analysis
Authors: Mussa Mahmoudi
Abstract:
For seismic design, it is important to estimate the maximum lateral displacement (inelastic displacement) of structures due to severe earthquakes, for several reasons. Seismic design provisions estimate the maximum roof and storey drifts occurring in major earthquakes by amplifying the drifts obtained from elastic analysis of the structure subjected to the seismic design load with a coefficient named the "displacement amplification factor", which is greater than one. This coefficient depends on various parameters, such as the ductility and overstrength factors. The present research aims to evaluate the value of the displacement amplification factor in seismic design codes and then tries to propose a value for estimating the maximum lateral structural displacement from severe earthquakes without using nonlinear analysis. Since, in seismic codes, the displacement amplification is related to the "force reduction factor", this relation has been adopted in the current study. Two methodologies are applied to evaluate the value of the displacement amplification factor and its relation with the force reduction factor. In the first methodology, which is applied to all structures, the ratio of the displacement amplification and force reduction factors is determined directly. In the second methodology, which is applicable only to R/C moment-resisting frames, the ratio is obtained by calculating both factors separately. The results of the two methodologies are alike and estimate the ratio of the two factors to be from 1 to 1.2. The results indicate that the ratio of the displacement amplification factor to the force reduction factor differs from the values proposed by seismic provisions such as NEHRP, IBC and the Iranian seismic code (Standard No. 2800).
Keywords: Displacement amplification factor, Ductility factor, Force reduction factor, Maximum lateral displacement.
1078 Analytical Authentication of Butter Using Fourier Transform Infrared Spectroscopy Coupled with Chemometrics
Authors: M. Bodner, M. Scampicchio
Abstract:
Fourier Transform Infrared (FT-IR) spectroscopy coupled with chemometrics was used to distinguish between butter samples and non-butter samples, and quantification of the margarine content in adulterated butter samples was also investigated. The fingerprint region (1400-800 cm⁻¹) was used to develop unsupervised pattern recognition (Principal Component Analysis, PCA), supervised modeling (Soft Independent Modelling of Class Analogy, SIMCA), classification (Partial Least Squares Discriminant Analysis, PLS-DA) and regression (Partial Least Squares Regression, PLS-R) models. PCA of the fingerprint region shows clustering of the two sample types. All samples were classified into their rightful class by the SIMCA approach; however, nine adulterated samples (between 1% and 30% w/w of margarine) were classified as belonging both to the butter class and to the non-butter class. In the two-class PLS-DA model (R2 = 0.73, RMSEP, Root Mean Square Error of Prediction = 0.26% w/w), sensitivity was 71.4% and the Positive Predictive Value (PPV) 100%. Its threshold was calculated at 7% w/w of margarine in adulterated butter samples. Finally, a PLS-R model (R2 = 0.84, RMSEP = 16.54%) was developed. PLS-DA was a suitable classification tool and PLS-R a proper quantification approach. The results demonstrate that FT-IR spectroscopy combined with PLS-R can be used as a rapid, simple and safe method to distinguish pure butter samples from adulterated ones and to determine the degree of margarine adulteration in butter samples.
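A hedged scikit-learn sketch of the PLS-R quantification step (predicting the percentage of margarine from fingerprint-region spectra); the spectra below are synthetic, not FT-IR measurements, and the band position is arbitrary.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 300            # stand-in for the 1400-800 cm-1 region
adulteration = rng.uniform(0, 30, n_samples)  # % margarine in butter (w/w)
# Synthetic spectra: an adulteration-dependent band plus baseline noise.
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 150) / 10) ** 2)
X = adulteration[:, None] * band[None, :] + rng.normal(0, 0.5, (n_samples, n_wavenumbers))

X_tr, X_te, y_tr, y_te = train_test_split(X, adulteration, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
rmsep = mean_squared_error(y_te, pred) ** 0.5
print(f"R2 = {pls.score(X_te, y_te):.2f}, RMSEP = {rmsep:.2f} % w/w")
```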
Keywords: Adulterated butter, margarine, PCA, PLS-DA, PLS-R, SIMCA.
1077 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals
Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty
Abstract:
A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs and the steep computational cost of evaluating these integrals poses a major numerical challenge in efficient implementation of quantum chemical software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest approach was used to identify promising features using a recursive feature elimination approach, which performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.
1076 The Relationship between Inventory Management and Profitability: A Comparative Research on Turkish Firms Operated in Weaving Industry, Eatables Industry, Wholesale and Retail Industry
Authors: G. Sekeroglu, M. Altan
Abstract:
Working capital comprises all of a firm's current assets. Inventories, as one of the elements of working capital, are very important among current assets, because profitability, an indicator of a firm's financial success, is achieved with minimum cost and optimum inventory quantity. This study therefore comparatively investigates the effect of inventory management on the profitability of Turkish firms that operated in the weaving industry, the eatables industry, and the wholesale and retail industry between 2003 and 2012. The research data consist of profitability ratios and inventory turnover ratios calculated from the balance sheets and income statements of firms listed on Borsa Istanbul (BIST). The relationship between inventories and profitability is investigated using SPSS 20 software with regression and correlation analysis. The results from the three industry groups are interpreted comparatively. Accordingly, a positive relationship between inventory management and profitability was found in the eatables industry, whereas no relationship between inventory management and profitability was found in the weaving industry or in the wholesale and retail industry.
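As a hedged illustration of the correlation and regression step described above (the ratios below are made-up figures, not BIST data):

```python
import numpy as np
from scipy import stats

# Placeholder firm-level data: inventory turnover ratio and return on assets (%).
inventory_turnover = np.array([4.2, 6.1, 3.8, 7.5, 5.0, 8.2, 2.9, 6.8])
profitability = np.array([3.1, 5.0, 2.8, 6.2, 4.1, 6.9, 2.0, 5.6])

r, p_value = stats.pearsonr(inventory_turnover, profitability)
slope, intercept, *_ = stats.linregress(inventory_turnover, profitability)
print(f"Pearson r = {r:.2f} (p = {p_value:.4f})")
print(f"profitability ~ {slope:.2f} * turnover + {intercept:.2f}")
```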
Keywords: Profitability, regression analysis, inventory management, working capital.
1075 Blood Glucose Level Measurement from Breath Analysis
Authors: Tayyab Hassan, Talha Rehman, Qasim Abdul Aziz, Ahmad Salman
Abstract:
Constant monitoring of the blood glucose level is necessary for maintaining the health of patients and for alerting medical specialists to take preemptive measures before the onset of any complication resulting from diabetes. Current clinical monitoring of blood glucose uses repeated invasive methods, which are uncomfortable and may result in infections in diabetic patients. Several attempts have been made to develop non-invasive techniques for blood glucose measurement. In this regard, the existing methods are not reliable and are less accurate, and other approaches claiming high accuracy have not been tested on extended datasets, so their results are not statistically significant. It is a well-known fact that the acetone concentration in breath has a direct relation with the blood glucose level. In this paper, we have developed a first-of-its-kind, reliable and highly accurate breath analyzer for non-invasive blood glucose measurement. The acetone concentration in breath was measured using an MQ 138 sensor in samples collected from local hospitals in Pakistan, involving one hundred patients. The blood glucose levels of these patients were determined using the conventional invasive clinical method. We propose a linear regression model that is trained to map the breath acetone level to the collected blood glucose level, achieving high accuracy.
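A minimal sketch of the mapping described above (linear regression from breath acetone readings to clinically measured glucose); the sensor values below are placeholders, not the collected patient data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 100
# Placeholder MQ-138 readings (acetone-related signal) and reference glucose (mg/dL).
acetone = rng.uniform(0.5, 3.0, n_patients)
glucose = 60 + 55 * acetone + rng.normal(0, 12, n_patients)

X_tr, X_te, y_tr, y_te = train_test_split(acetone.reshape(-1, 1), glucose,
                                          test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
mape = 100 * np.mean(np.abs(model.predict(X_te) - y_te) / y_te)
print(f"glucose ~ {model.coef_[0]:.1f} * acetone + {model.intercept_:.1f}")
print(f"Mean absolute percentage error: {mape:.1f}%")
```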
Keywords: Blood glucose level, breath acetone concentration, diabetes, linear regression.
1074 Reducing Pressure Drop in Microscale Channel Using Constructal Theory
Authors: K. X. Cheng, A. L. Goh, K. T. Ooi
Abstract:
The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to tap the microscale heat transfer effects in macro geometries while overcoming cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state single-phase liquid flow conditions. However, while heat transfer coefficient values above 10 kW/m²·K were achieved, the heat transfer enhancement was accompanied by an undesirable increase in pressure drop. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance of the Tree insert and the Constructal fin (Cfin) insert were studied using experimental methods, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve at least comparable increments in both heat transfer coefficient and pressure drop, if not a higher increment in the former parameter. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement reduced to less than 5 percent at high Reynolds numbers, while the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert has better heat transfer performance in the low Reynolds number region. More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the enhancement of heat transfer was more than 30 percent, with a 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, comparable increments in both heat transfer coefficient and pressure drop were observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the increment in pressure drop relative to the heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore a potential future study for achieving a bigger stride in energy efficiency at much lower cost.
Keywords: Constructal theory, enhanced heat transfer, microchannel, pressure drop.
1073 Studying the Effects of Economic and Financial Development as well as Institutional Quality on Environmental Destruction in the Upper-Middle Income Countries
Authors: Morteza Raei Dehaghi, Seyed Mohammad Mirhashemi
Abstract:
The current study explored the effect of economic development, financial development and institutional quality on environmental destruction in upper-middle income countries during the period 1999-2011. The dependent variable is the logarithm of carbon dioxide emissions, which can be considered an index of destruction or quality of the environment given its effects on the environment. Financial development and institutional development variables, as well as some control variables, were considered. In order to study cross-sectional correlation among the countries under study, the Pesaran and Friz test was used. Since the test results show cross-sectional correlation among the countries under study, the seemingly unrelated regression method was utilized for model estimation. The results disclosed that the Kuznets environmental curve hypothesis is confirmed in upper-middle income countries, and also that financial development and institutional quality have a significant effect on environmental quality. The results of this study can be considered by policy makers in countries in different income groups to achieve growth accompanied by improved environmental quality.
Keywords: Economic Development, Environmental Destruction, Financial Development, Institutional Development, Seemingly Unrelated Regression.
1072 Modeling and Optimization of Abrasive Waterjet Parameters using Regression Analysis
Authors: Farhad Kolahan, A. Hamid Khajavi
Abstract:
Abrasive waterjet cutting is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data has been used to evaluate the effects of various parameter settings in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. Depth of cut, as one of the most important output characteristics, has been evaluated for different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between the input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on the process response outputs are also shown graphically. The proposed model is then embedded into a simulated annealing algorithm to optimize the process parameters. The optimization is carried out for any desired value of depth of cut; the objective is to determine proper levels of the process parameters in order to obtain a certain depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.
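A hedged sketch of the optimization idea (a regression surrogate for depth of cut embedded in a simulated annealing search over the process parameters); the surrogate coefficients, bounds and target value are invented for illustration and are not the paper's fitted model.

```python
import math
import random

random.seed(0)

def depth_of_cut(pressure, traverse, abrasive):
    """Illustrative regression surrogate (made-up coefficients), not the fitted model."""
    return 0.02 * pressure - 0.8 * traverse + 0.5 * abrasive

TARGET = 6.0                                   # desired depth of cut, mm
bounds = {"pressure": (100, 400),              # MPa
          "traverse": (1, 10),                 # mm/s
          "abrasive": (2, 10)}                 # g/s

def random_point():
    return {k: random.uniform(*b) for k, b in bounds.items()}

def cost(p):
    return abs(depth_of_cut(p["pressure"], p["traverse"], p["abrasive"]) - TARGET)

current = random_point()
best = dict(current)
temperature = 1.0
for step in range(5000):
    # Propose a small random move within the parameter bounds.
    candidate = {k: min(max(v + random.gauss(0, 0.05 * (bounds[k][1] - bounds[k][0])),
                            bounds[k][0]), bounds[k][1])
                 for k, v in current.items()}
    delta = cost(candidate) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = candidate
        if cost(current) < cost(best):
            best = dict(current)
    temperature *= 0.999                        # cooling schedule

print({k: round(v, 2) for k, v in best.items()}, "-> depth",
      round(depth_of_cut(best["pressure"], best["traverse"], best["abrasive"]), 2), "mm")
```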
Keywords: AWJ cutting, Mathematical modeling, Simulated Annealing, Optimization
1071 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination
Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan
Abstract:
The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute the Likelihood Ratio (LR) for quantifying the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, wherein a set of assumptions and methods for a given data set is made. It is therefore important to know how repeatable and reproducible the estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as not to obtain an incorrect estimate that would be used to deliver a wrong judgment in a court of law. The estimation of LR is fundamentally a Bayesian concept, and we used two LR estimators, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Although LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
Keywords: Logistic Regression (LoR), Kernel Density Estimator (KDE), Handwriting, Confidence Interval, Repeatability, Reproducibility.
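A hedged sketch of the two LR estimators named above, applied to synthetic similarity scores (stand-ins for handwriting comparison scores, not the paper's dataset); the score model and threshold are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic comparison scores: same-writer pairs score higher than different-writer pairs.
same = rng.normal(0.7, 0.1, 200)
diff = rng.normal(0.4, 0.1, 200)
score = 0.62                                    # score of a questioned comparison

# KDE-based LR: ratio of the two score densities at the observed score.
lr_kde = gaussian_kde(same)(score)[0] / gaussian_kde(diff)(score)[0]

# Logistic-regression-based LR: posterior odds from the fitted model, divided
# by the prior odds implied by the (balanced) training data.
X = np.concatenate([same, diff]).reshape(-1, 1)
y = np.concatenate([np.ones_like(same), np.zeros_like(diff)])
p = LogisticRegression().fit(X, y).predict_proba([[score]])[0, 1]
lr_lor = (p / (1 - p)) / (len(same) / len(diff))

print(f"LR (KDE) = {lr_kde:.2f}, LR (logistic regression) = {lr_lor:.2f}")
```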
1070 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to authority problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. In this paper, we discuss two approaches to image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with the two approaches separately for the embedding process on the cover image transform. Both PSO and GA use a correlation coefficient to detect the high-energy coefficient for the watermark bit in the original image and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. From the experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations and a 3×3 block. According to the results, a small block size can affect the quality of PSO/GA-based image watermarking because a small block size can increase the search area of the watermarking image. Better PSO results were obtained when using a swarm size of 100.
Keywords: Image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform.
1069 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation
Authors: H. Khanfari, M. Johari Fard
Abstract:
Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause a variety of properties throughout the reservoir. A good estimation of reservoir heterogeneity, defined as the quality of variation in rock properties with location in a reservoir or formation, can help in modeling the reservoir and thus offer a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties on the basis of the similarities of the environments in which different beds were deposited. To illustrate the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both of which are reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum engineering from that point of view. In this paper, we correlated the statistics-based Lorenz method with a petroleum concept, i.e. the Kozeny-Carman equation, and derived the straight-line Lorenz plot for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in southern Iran and discussed each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
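A hedged numerical sketch of the Lorenz-coefficient calculation mentioned above (cumulative flow capacity versus cumulative storage capacity, with L taken as twice the area between the Lorenz curve and the homogeneous 45-degree line); the permeability, porosity and thickness values are illustrative, not field data.

```python
import numpy as np

# Illustrative layer data: permeability (mD), porosity (fraction), thickness (m).
k = np.array([250., 80., 15., 5., 120., 40.])
phi = np.array([0.22, 0.18, 0.10, 0.08, 0.20, 0.15])
h = np.ones_like(k)

# Sort layers by decreasing k/phi, then accumulate flow (k*h) and storage (phi*h) capacity.
order = np.argsort(-(k / phi))
flow = np.cumsum((k * h)[order]) / np.sum(k * h)
storage = np.cumsum((phi * h)[order]) / np.sum(phi * h)
flow = np.insert(flow, 0, 0.0)
storage = np.insert(storage, 0, 0.0)

# Lorenz coefficient: twice the area between the curve and the 45-degree line.
area_under_curve = np.sum(0.5 * (flow[1:] + flow[:-1]) * np.diff(storage))
lorenz = 2 * (area_under_curve - 0.5)
print(f"Lorenz coefficient = {lorenz:.3f}  (0 = homogeneous, approaching 1 = highly heterogeneous)")
```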
Keywords: Carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L).
1068 New Classes of Salagean type Meromorphic Harmonic Functions
Authors: Hakan Bostancı, Metin Öztürk
Abstract:
In this paper, a necessary and sufficient coefficient condition is given for functions in a class of complex-valued meromorphic harmonic univalent functions of the form f = h + ḡ defined using the Salagean operator. Furthermore, distortion theorems, extreme points, a convolution condition and convex combinations for this family of meromorphic harmonic functions are obtained.
Keywords: Harmonic mappings, Meromorphic functions, Salagean operator.
1067 Hybrid Rocket Motor Performance Parameters: Theoretical and Experimental Evaluation
Authors: A. El-S. Makled, M. K. Al-Tamimi
Abstract:
A mathematical model to predict the performance parameters (thrust, chamber pressure, fuel mass flow rate, mixture ratio, and regression rate during the firing time) of a hybrid rocket motor (HRM) is evaluated. The internal ballistic (IB) hybrid combustion model assumes that the solid fuel surface regression rate is controlled only by heat transfer (convective and radiative) from the flame zone to the solid fuel burning surface. A laboratory HRM was designed, manufactured, and tested for low-thrust space missions (10-15 N) and for validating the mathematical model (computer program). The materials selected for this experimental work are polymethyl methacrylate (PMMA) and polyethylene (PE) as solid fuel grains and gaseous oxygen (GO2) as the oxidizer. The variation of the various operational parameters with time is determined systematically and experimentally in firings of up to 20 seconds, and an average combustion efficiency of 95% of theory is achieved, which was the goal of these experiments. The comparison between the recorded firing data and the predicted analytical parameters shows good agreement, with an error that does not exceed 4.5% over the whole firing time. The current mathematical (computer) code can be used as a powerful tool for the analytical design of HRM parameters.
Keywords: Hybrid combustion, internal ballistics, hybrid rocket motor, performance parameters.