Search results for: regression coefficient
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1616

1076 A Study on the Differential Diagnostic Model for Newborn Hearing Loss Screening

Authors: Chun-Lang Chang

Abstract:

According to statistics, the prevalence of congenital hearing loss in Taiwan is approximately six per thousand; furthermore, one per thousand of infants have severe hearing impairment. Hearing ability during infancy has a significant impact on children's later development of oral expression, language maturity, cognitive performance, educational ability, and social behavior. Although most children born with hearing impairment have sensorineural hearing loss, almost every child retains at least some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in a timely manner, together with auditory and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who are not diagnosed, and thus unable to begin hearing and speech rehabilitation in a timely manner, may lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of mental and physical functions, intelligence, and social adaptability. Not only does this result in an irreparable, lifelong regret for the hearing-impaired child, but it also creates a heavy burden for the family and society. Therefore, it is necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss, so that early interventions can be provided in time and waste of medical resources eliminated. This study uses the neonatal database of the case hospital and compares two analysis approaches: using a support vector machine (SVM) directly for model prediction, and using logistic regression to screen factors before the SVM prediction. The results indicate that prediction accuracy is as high as 96.43% when the factors are screened and selected through logistic regression. Hence, the model constructed in this study can provide real help to physicians in clinical diagnosis and genuinely benefit early intervention for newborn hearing impairment.
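
A minimal sketch of the two-stage approach described above, assuming scikit-learn and synthetic data in place of the hospital's neonatal records (all settings below are illustrative, not the authors'):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the hospital's neonatal screening records.
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: logistic regression screens the factors; stage 2: SVM predicts.
model = Pipeline([
    ("scale", StandardScaler()),
    ("screen", SelectFromModel(LogisticRegression(penalty="l1",
                                                  solver="liblinear"))),
    ("svm", SVC(kernel="rbf")),
])
model.fit(X_tr, y_tr)
print("test accuracy: %.4f" % model.score(X_te, y_te))
```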

Keywords: Data mining, Hearing impairment, Logistic regression analysis, Support vector machines

1075 Deterioration Assessment Models for Water Pipelines

Authors: L. Parvizsedghy, I. Gkountis, A. Senouci, T. Zayed, M. Alsharqawi, H. El Chanati, M. El-Abbasy, F. Mosleh

Abstract:

The aging and deterioration of water pipelines in cities worldwide result in more frequent water main breaks, water service disruptions, and flooding damage. There is therefore an urgent need for proper maintenance procedures to avoid breaks and disastrous failures. However, due to budget limitations, maintenance of water pipeline networks needs to be prioritized through efficient deterioration assessment models. Previous studies focused on developing structural or physical deterioration assessment models, which require expensive inspection data. This paper, by contrast, aims at developing deterioration assessment models for water pipelines using statistical techniques. Several deterioration models were developed based on pipeline size, material type, and soil type using linear regression analysis. The categorical nature of some variables affecting pipeline deterioration was handled by developing several categorical models. The developed models were validated with an average validity percentage greater than 95%. Moreover, sensitivity analysis was carried out against different classifications, and it showed pipe age to be more important than the other factors. The developed models will help water municipalities and asset managers assess the condition of their pipes and prioritize them for maintenance and inspection purposes.
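
A minimal sketch of such a categorical linear regression model, assuming pandas and scikit-learn; the column names and values are hypothetical stand-ins for the paper's pipeline variables:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical pipe records; the condition score is the target variable.
df = pd.DataFrame({
    "age":       [12, 35, 50, 8, 27, 44],
    "diameter":  [150, 200, 150, 300, 200, 100],
    "material":  ["PVC", "cast_iron", "cast_iron", "PVC", "steel", "cast_iron"],
    "soil":      ["clay", "sand", "clay", "sand", "clay", "sand"],
    "condition": [9.1, 5.2, 3.0, 9.5, 6.4, 3.8],
})

# One-hot encode the categorical variables (material and soil type).
X = pd.get_dummies(df[["age", "diameter", "material", "soil"]])
model = LinearRegression().fit(X, df["condition"])
print(dict(zip(X.columns, model.coef_.round(3))))
```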

Keywords: Water pipelines, deterioration assessment models, regression analysis.

1074 EZW Coding System with Artificial Neural Networks

Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar

Abstract:

Image compression plays a vital role in today's communication. Limited allocated bandwidth leads to slower communication, so to increase the transmission rate within the available bandwidth, image data must be compressed before transmission. There are basically two types of compression: lossy and lossless. Although lossy compression achieves higher compression ratios than lossless compression, its retrieval accuracy is lower. The JPEG and JPEG 2000 image compression systems use Huffman coding for image compression. The JPEG 2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance than existing wavelet coders. Other coding methods have recently been suggested to further improve compression. An ANN-based approach is one such method: artificial neural networks have been applied to many problems in image processing and have demonstrated superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis over different images is presented for the EZW coding system combined with the error backpropagation algorithm. The implementation and analysis show approximately 30% higher accuracy in the retrieved image compared with the existing EZW coding system.

Keywords: Accuracy, Compression, EZW, JPEG2000, Performance.

1073 Prediction of Watermelon Consumer Acceptability based on Vibration Response Spectrum

Authors: R.Abbaszadeh, A.Rajabipour, M.Delshad, M.J.Mahjub, H.Ahmadi

Abstract:

It is difficult to judge watermelon ripeness from outward characteristics such as size or external color. In this paper, a nondestructive method was studied to determine watermelon (Crimson Sweet) quality. The responses of samples to excitation vibrations were detected using laser Doppler vibrometry (LDV) technology. The phase shift between input and output vibrations was extracted over the whole frequency range, and the first and second resonance frequencies were derived from the frequency response spectra. After the nondestructive tests, the watermelons were sensory evaluated and graded along a ripeness scale based on overall acceptability (the total traits desired by consumers). Regression models were developed to predict quality from the obtained results and the sample mass. The determination coefficients of the calibration and cross-validation models were 0.89 and 0.71, respectively. This study demonstrated the feasibility of using information derived from vibration response curves to predict fruit quality. The vibration response of watermelon using the LDV method is measured without direct contact and is accurate and fast, which could be a significant advantage for classifying watermelons based on consumer opinions.
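
A minimal sketch of the calibration/cross-validation step, assuming scikit-learn; the resonance-frequency features and acceptability scores below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical features: first/second resonance frequency (Hz) and mass (kg).
X = np.array([[112, 248, 4.1], [105, 232, 4.6], [118, 260, 3.8],
              [ 98, 220, 5.0], [109, 241, 4.3], [101, 226, 4.8],
              [115, 254, 4.0], [ 96, 215, 5.2]])
y = np.array([7.8, 6.1, 8.4, 5.2, 7.0, 5.6, 8.1, 4.9])  # acceptability scores

model = LinearRegression().fit(X, y)
print("calibration R^2: %.2f" % model.score(X, y))

# Leave-one-out cross-validation, mirroring the paper's validation step.
y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("cross-validation R^2: %.2f" % r2_score(y, y_cv))
```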

Keywords: Laser Doppler vibrometry, Phase shift, Overall acceptability, Regression model, Resonance frequency, Watermelon

1072 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patient Cohorts: A Case Study in Scotland

Authors: Sotirios Raptis

Abstract:

Health and Social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and they also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven approaches can help improve policies and plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. This paper discusses packing services together using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (class) using ML methods such as Classification and Regression Trees (CART), Random Forests (RF), and Logistic Regression (LGR). The significance tests, Chi-Squared and Student's t-test, are applied to data spanning 39 years of services delivered in Scotland. The demands are associated using probabilities and form parts of statistical hypotheses whose null part assumes that the target demand is statistically dependent on other services' demands; this linking is checked against the data. In addition, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target demand to independent demands, thus forming groups of services. The statistical tests confirmed the ML coupling, made the predictions statistically meaningful, and proved that a target service can be matched reliably to other services, while ML showed that such marked relationships can also be linear. Zero padding was used for missing yearly records and better illustrated such relationships, both for limited periods and for the entire span, offering long-term data visualizations, while limited-year periods showed how well patient numbers can be related over short periods of time or how they change over time, as opposed to behaviours across more years. The prediction performance of the associations was measured using metrics such as the Receiver Operating Characteristic (ROC), Area Under Curve (AUC) and Accuracy (ACC), as well as the Chi-Squared and Student's tests. Co-plots and comparison tables for the RF, CART, and LGR methods, together with the p-values from the tests and Information Exchange (IE/MIE) measures, are provided, showing the relative performance of the ML methods and of the statistical tests, as well as the behaviour under different learning ratios. The impact of k-nearest-neighbours classification (k-NN), Cross-Correlation (CC) and C-Means (CM) first groupings was also studied over limited years and over the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC = 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by zero padding, by irregularities in the data, or by outliers. On average, 3 linear predictors were sufficient; LGR competed well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, low birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be packed well into plans of limited duration across various service sectors and learning configurations, as confirmed by statistical hypothesis testing.
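
A minimal sketch of pairing a significance test with ML demand-class prediction, assuming scipy/scikit-learn and synthetic demand series in place of the Scottish records:

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical yearly demands for a target service and three other services.
other = rng.poisson(100, size=(200, 3)).astype(float)
target = other @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 5, 200)
target_class = (target > np.median(target)).astype(int)   # demand range class

# Significance test: is the target class associated with another demand?
table = np.histogram2d(target_class, other[:, 0] > other[:, 0].mean(),
                       bins=2)[0]
print("chi-squared p-value: %.4f" % chi2_contingency(table)[1])

# ML coupling: predict the target class from the other services' demands.
X_tr, X_te, y_tr, y_te = train_test_split(other, target_class, random_state=0)
for name, clf in [("CART", DecisionTreeClassifier(random_state=0)),
                  ("RF", RandomForestClassifier(random_state=0)),
                  ("LGR", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print("%s AUC: %.3f" % (name, auc))
```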

Keywords: Class, cohorts, data frames, grouping, prediction, probabilities, services.

1071 English Language Learning Strategies Used by University Students: A Case Study of English and Business English Major at Suan Sunandha Rajabhat in Bangkok

Authors: Pranee Pathomchaiwat

Abstract:

The purposes of this research are 1) to study the English language learning strategies used by fourth-year students majoring in English and Business English, 2) to study the English language learning strategies which have an effect on English learning achievement, and 3) to compare the English language learning strategies used by the students majoring in English and in Business English. The population and sample comprised 139 university students of Suan Sunandha Rajabhat University. The research instruments are a language learning strategies questionnaire, constructed by the researcher and refined by three experts, and the transcripts showing English learning achievement. The questionnaire covers 1) Language Practice Strategy, 2) Memory Strategy, 3) Communication Strategy, 4) Making an Intelligent Guess or Compensation Strategy, 5) Self-discipline in Learning Management Strategy, 6) Affective Strategy, 7) Self-Monitoring Strategy, and 8) Self-study Skill Strategy. Statistics used in the study are mean, standard deviation, t-test, one-way ANOVA, Pearson product-moment correlation coefficient, and regression analysis. The findings reveal that the English language learning strategies most frequently used by the students are the affective strategy, the making an intelligent guess or compensation strategy, the self-study skill strategy, and the self-monitoring strategy, respectively. The making an intelligent guess or compensation strategy had the most significant effect on English learning achievement. The English language learning strategies were highly used by the Business English major students and moderately used by the English major students. Their uses of language practice strategies differed significantly at the 0.05 level, and their uses of communication strategies differed significantly at the 0.01 level. In addition, the poor and fair students most frequently used the affective strategy, while the good ones most frequently used the making an intelligent guess or compensation strategy.

Keywords: English language, language learning strategies, English learning achievement, students majoring in English, Business English

1070 Using Artificial Neural Network to Predict Collisions on Horizontal Tangents of 3D Two-Lane Highways

Authors: Omer F. Cansiz, Said M. Easa

Abstract:

The purpose of this study is to predict collision frequency on horizontal tangents combined with vertical curves using artificial neural network (ANN) methods. The proposed ANN models are compared with existing regression models. First, the variables that affect collision frequency were investigated. Only the annual average daily traffic, section length, access density, rate of vertical curvature, and the smaller of the curve radii before and after the tangent were found statistically significant for the related combinations. Second, three statistical models (negative binomial, zero-inflated Poisson, and zero-inflated negative binomial) were developed using the significant variables for three alignment combinations. Third, ANN models were developed by applying the same variables for each combination. The results clearly show that the ANN models have lower mean square error values than the statistical models. Similarly, the AIC values of the ANN models are smaller than those of the regression models for all combinations. Consequently, the ANN models have better statistical performance than the statistical models for estimating collision frequency. The ANN models presented in this paper are recommended for evaluating the safety impacts of 3D alignment elements on horizontal tangents.
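
A minimal sketch of one of the three count models, the negative binomial, assuming statsmodels and synthetic collision data (the predictors and coefficients are illustrative, not the study's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
# Hypothetical predictors: AADT, section length (km), access density (pts/km).
X = np.column_stack([rng.uniform(2000, 20000, n),
                     rng.uniform(0.5, 5.0, n),
                     rng.uniform(0.0, 10.0, n)])
mu = np.exp(-3 + 5e-5 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * X[:, 2])
y = rng.poisson(mu * rng.gamma(2.0, 0.5, n))  # overdispersed collision counts

# Negative binomial model, one of the paper's three count models.
nb = sm.NegativeBinomial(y, sm.add_constant(X)).fit(disp=0)
print(nb.params.round(5))
print("AIC:", round(nb.aic, 1))
```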

Keywords: Collision frequency, horizontal tangent, 3D two-lane highway, negative binomial, zero inflated Poisson, artificial neural network.

1069 Graphic Analysis of Genotype by Environment Interaction for Maize Hybrid Yield Using Site Regression Stability Model

Authors: Saeed Safari Dolatabad, Rajab Choukan

Abstract:

Selection of maize (Zea mays) hybrids with wide adaptability across diverse farming environments is important prior to recommending them, in order to achieve a high rate of hybrid adoption. The grain yield of 14 maize hybrids, tested in a randomized complete-block design with four replicates across 22 environments in Iran, was analyzed using the site regression (SREG) stability model. The biplot technique facilitates a visual evaluation of superior genotypes, which is useful for cultivar recommendation and mega-environment identification. The objectives of this study were (i) to identify suitable hybrids with both high mean performance and high stability, and (ii) to determine mega-environments for maize production in Iran. Biplot analysis identified two mega-environments in this study. The first mega-environment included KRM, KSH, MGN, DZF A, KRJ, DRB, DZF B, SHZ B, and KHM, where hybrid G10 was the best performer. The second mega-environment included ESF B, ESF A, and SHZ A, where hybrid G4 was the best. According to the ideal-hybrid biplot, hybrid G10 was better than all other hybrids, followed by G1 and G3. These hybrids were identified as the best hybrids, having both high grain yield and high yield stability. GGE biplot analysis provided a framework for identifying target testing locations that discriminate genotypes which are high yielding and stable.
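
A minimal sketch of the SREG/GGE decomposition behind such biplots, assuming NumPy and a random stand-in for the 14x22 yield matrix: environment-centering followed by singular value decomposition gives the genotype and environment biplot coordinates.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical genotype-by-environment yield matrix (14 hybrids x 22 sites).
Y = rng.normal(8.0, 1.0, size=(14, 22))

# SREG/GGE: center by environment means, then decompose with SVD.
G = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(G, full_matrices=False)

geno_scores = U[:, :2] * s[:2]   # genotype markers (biplot coordinates)
env_scores = Vt[:2].T            # environment markers
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print("PC1+PC2 explain %.0f%% of G+GE variation" % (100 * explained))
```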

Keywords: Zea mays L., GGE biplot, Multi-environment trials, Yield stability.

1068 Scour Depth Prediction around Bridge Piers Using Neuro-Fuzzy and Neural Network Approaches

Authors: H. Bonakdari, I. Ebtehaj

Abstract:

The prediction of scour depth around bridge piers is frequently considered in river engineering, and its estimation is one of the key aspects of efficient and optimal bridge structure design. In this study, scour depth around bridge piers is estimated using two methods, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN). The parameters effective in scour depth prediction are determined via dimensional analysis, and scour depth is subsequently predicted using the ANN and ANFIS methods. The methods' performances are compared with the nonlinear regression (NLR) method. The results show that both methods presented in this study outperform existing methods. Moreover, using the ratio of pier length to flow depth, the ratio of median particle diameter to flow depth, the ratio of pier width to flow depth, the Froude number, and the standard deviation of bed grain size as parameters leads to optimal performance in scour depth estimation.
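
A minimal sketch contrasting an NLR power-law fit with an ANN on the same dimensionless inputs, assuming scipy/scikit-learn; the two parameters and all data are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 200
# Hypothetical dimensionless inputs: pier width/flow depth and Froude number.
b_h = rng.uniform(0.2, 2.0, n)
fr = rng.uniform(0.1, 0.8, n)
ds_h = 1.2 * b_h**0.6 * fr**0.4 * np.exp(rng.normal(0, 0.05, n))  # scour/depth

# NLR benchmark: power-law form fitted by nonlinear least squares.
f = lambda X, a, p, q: a * X[0]**p * X[1]**q
X = np.vstack([b_h, fr])
(a, p, q), _ = curve_fit(f, X, ds_h, p0=(1.0, 0.5, 0.5))
print("NLR fit: a=%.2f p=%.2f q=%.2f" % (a, p, q))

# ANN alternative trained on the same dimensionless parameters.
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
ann.fit(X.T, ds_h)
print("ANN R^2: %.3f" % ann.score(X.T, ds_h))
```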

Keywords: Adaptive neuro-fuzzy inference system, ANFIS, artificial neural network, ANN, bridge pier, scour depth, nonlinear regression, NLR.

1067 Analytical Authentication of Butter Using Fourier Transform Infrared Spectroscopy Coupled with Chemometrics

Authors: M. Bodner, M. Scampicchio

Abstract:

Fourier Transform Infrared (FT-IR) spectroscopy coupled with chemometrics was used to distinguish butter samples from non-butter samples; further, quantification of the margarine content in adulterated butter samples was investigated. The fingerprint region (1400-800 cm–1) was used to develop unsupervised pattern recognition (Principal Component Analysis, PCA), supervised modeling (Soft Independent Modelling by Class Analogy, SIMCA), classification (Partial Least Squares Discriminant Analysis, PLS-DA) and regression (Partial Least Squares Regression, PLS-R) models. PCA of the fingerprint region shows a clustering of the two sample types. All samples were classified into their rightful class by the SIMCA approach; however, nine adulterated samples (between 1% and 30% w/w of margarine) were classified as belonging to both the butter class and the non-butter one. In the two-class PLS-DA model (R2 = 0.73, RMSEP, Root Mean Square Error of Prediction = 0.26% w/w), sensitivity was 71.4% and the Positive Predictive Value (PPV) 100%. Its threshold was calculated at 7% w/w of margarine in adulterated butter samples. Finally, a PLS-R model (R2 = 0.84, RMSEP = 16.54%) was developed. PLS-DA proved a suitable classification tool and PLS-R a proper quantification approach. The results demonstrate that FT-IR spectroscopy combined with PLS-R can be used as a rapid, simple and safe method to distinguish pure butter samples from adulterated ones and to determine the degree of margarine adulteration in butter samples.
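
A minimal sketch of the PLS-R quantification step, assuming scikit-learn and toy spectra in place of the FT-IR fingerprint region:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n_samples, n_wavenumbers = 60, 300   # stand-in for the 1400-800 cm-1 region
margarine = rng.uniform(0, 30, n_samples)                   # % w/w adulterant
spectra = (np.outer(margarine, rng.normal(0, 1, n_wavenumbers))
           + rng.normal(0, 5, (n_samples, n_wavenumbers)))  # toy spectra

# PLS-R: predict margarine content from the fingerprint region.
pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, margarine, cv=10).ravel()
rmsep = mean_squared_error(margarine, pred) ** 0.5
print("RMSEP: %.2f %% w/w" % rmsep)
```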

Keywords: Adulterated butter, margarine, PCA, PLS-DA, PLS-R, SIMCA.

1066 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography

Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi

Abstract:

Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes light propagation in a biological medium; the inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, the optical properties of the soft tissue, such as the absorption coefficient, scattering coefficient, and optical flux, are processed with the standard regularization technique called Levenberg-Marquardt regularization. The reconstruction algorithms Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) are used to reconstruct the image of human soft tissue for tumour detection. Of these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing the images are analyzed to compare performance.

Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, gradient projection for sparse reconstruction.

1065 The Relationship between Inventory Management and Profitability: A Comparative Research on Turkish Firms Operated in Weaving Industry, Eatables Industry, Wholesale and Retail Industry

Authors: G. Sekeroglu, M. Altan

Abstract:

Working capital comprises all of a firm's current assets. Inventories, one of the elements of working capital, are especially important among current assets, because profitability, an indicator of a firm's financial success, depends on achieving minimum cost and optimum inventory quantity. This study therefore comparatively investigates the effect of inventory management on the profitability of Turkish firms operating in the weaving industry, the eatables industry, and the wholesale and retail industry between 2003 and 2012. The research data consist of profitability ratios and inventory turnover ratios calculated from the balance sheets and income statements of firms listed on Borsa Istanbul (BIST). The relationship between inventories and profitability is investigated with regression and correlation analysis using SPSS 20 software, and the results for the three industries are interpreted comparatively. Accordingly, a positive relationship between inventory management and profitability was determined in the eatables industry. However, no relationship was found between inventory management and profitability in the weaving industry or in the wholesale and retail industry.

Keywords: Profitability, regression analysis, inventory management, working capital.

1064 Determining the Maximum Lateral Displacement Due to Severe Earthquakes without Using Nonlinear Analysis

Authors: Mussa Mahmoudi

Abstract:

For seismic design, it is important to estimate the maximum lateral displacement (inelastic displacement) of structures due to severe earthquakes, for several reasons. Seismic design provisions estimate the maximum roof and storey drifts occurring in major earthquakes by amplifying the drifts obtained from elastic analysis under the seismic design load with a coefficient named the "displacement amplification factor", which is greater than one. This coefficient depends on various parameters, such as the ductility and overstrength factors. The present research evaluates the value of the displacement amplification factor in seismic design codes and then proposes a value for estimating the maximum lateral structural displacement due to severe earthquakes without using nonlinear analysis. Since seismic codes relate the displacement amplification factor to the "force reduction factor", this relationship is adopted in the current study. Two methodologies are applied to evaluate the displacement amplification factor and its relation with the force reduction factor. In the first methodology, which applies to all structures, the ratio of the displacement amplification and force reduction factors is determined directly, whereas in the second methodology, applicable only to R/C moment-resisting frames, the ratio is obtained by calculating both factors separately. The results of the two methodologies agree, estimating the ratio of the two factors at 1 to 1.2. The results indicate that this ratio differs from those proposed by seismic provisions such as NEHRP, IBC, and the Iranian seismic code (Standard No. 2800).

Keywords: Displacement amplification factor, Ductility factor, Force reduction factor, Maximum lateral displacement.

1063 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemistry software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals, which are approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest with recursive feature elimination was used to identify promising features; it performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature-masking approach to compress the input vector, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results compared to a single network.
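
A minimal sketch of the two-stage scheme, assuming scikit-learn and random stand-in features: recursive feature elimination with a random forest for the coefficient sign, then a median-fused ensemble of two-hidden-layer networks for the magnitude.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n, d = 400, 30
X = rng.normal(size=(n, d))            # stand-in moment features
coef = X[:, :5] @ rng.normal(size=5)   # hypothetical integral coefficients

# Stage 1: random forest + recursive feature elimination learns the sign.
sign_model = RFE(RandomForestClassifier(random_state=0),
                 n_features_to_select=10)
sign_model.fit(X, (coef > 0).astype(int))
print("kept features:", np.flatnonzero(sign_model.support_))

# Stage 2: ensemble of two-hidden-layer networks for the magnitude,
# fused with a median rule.
nets = [MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000,
                     random_state=s).fit(X, np.abs(coef)) for s in range(5)]
magnitude = np.median([net.predict(X) for net in nets], axis=0)
print("median-fused MAE: %.3f" % np.abs(magnitude - np.abs(coef)).mean())
```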

Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.

1062 Blood Glucose Level Measurement from Breath Analysis

Authors: Tayyab Hassan, Talha Rehman, Qasim Abdul Aziz, Ahmad Salman

Abstract:

Constant monitoring of blood glucose level is necessary to maintain the health of diabetic patients and to alert medical specialists to take preemptive measures before the onset of complications. Current clinical monitoring of blood glucose relies on repeated invasive methods, which are uncomfortable and may result in infections in diabetic patients. Several attempts have been made to develop non-invasive techniques for blood glucose measurement, but the existing methods are not reliable and are less accurate. Other approaches claiming high accuracy have not been tested on extended datasets, and thus their results are not statistically significant. It is a well-known fact that the acetone concentration in breath has a direct relation with blood glucose level. In this paper, we develop a reliable and accurate breath analyzer for non-invasive blood glucose measurement. The acetone concentration in breath was measured using an MQ-138 sensor on samples collected from local hospitals in Pakistan, involving one hundred patients, whose blood glucose levels were determined using the conventional invasive clinical method. We propose a linear regression model trained to map breath acetone level to the measured blood glucose level, achieving high accuracy.
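
A minimal sketch of the final mapping step, assuming scikit-learn; the acetone/glucose pairs below are invented, not the study's measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical paired measurements: breath acetone (ppm) vs. blood glucose
# (mg/dL) obtained with the conventional invasive method.
acetone = np.array([[0.8], [1.1], [1.5], [2.0], [2.6], [3.1], [3.9], [4.4]])
glucose = np.array([95, 110, 128, 150, 172, 195, 230, 251])

model = LinearRegression().fit(acetone, glucose)
print("glucose = %.1f * acetone + %.1f" % (model.coef_[0], model.intercept_))
print("predicted for 2.2 ppm: %.1f mg/dL" % model.predict([[2.2]])[0])
```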

Keywords: Blood glucose level, breath acetone concentration, diabetes, linear regression.

1061 Studying the Effects of Economic and Financial Development as well as Institutional Quality on Environmental Destruction in the Upper-Middle Income Countries

Authors: Morteza Raei Dehaghi, Seyed Mohammad Mirhashemi

Abstract:

The current study explored the effect of economic development, financial development, and institutional quality on environmental destruction in upper-middle-income countries during the period 1999-2011. The dependent variable is the logarithm of carbon dioxide emissions, which can be considered an index of environmental destruction or quality given its effects on the environment. Financial development and institutional quality variables, as well as some control variables, were considered. In order to study cross-sectional correlation among the countries under study, the Pesaran and Friz tests were used. Since the results of both tests showed cross-sectional correlation among the countries, the seemingly unrelated regression method was utilized for model estimation. The results disclosed that the environmental Kuznets curve hypothesis is confirmed in upper-middle-income countries, and also that financial development and institutional quality have a significant effect on environmental quality. The results of this study can be considered by policy makers in countries of different income groups seeking growth accompanied by improved environmental quality.

Keywords: Economic Development, Environmental Destruction, Financial Development, Institutional Development, Seemingly Unrelated Regression.

1060 Modeling and Optimization of Abrasive Waterjet Parameters using Regression Analysis

Authors: Farhad Kolahan, A. Hamid Khajavi

Abstract:

Abrasive waterjet is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data has been used to evaluate the effects of various parameter settings in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure, and abrasive flow rate. Depth of cut, one of the most important output characteristics, has been evaluated for different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters, and the adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on the process response outputs are also shown graphically. The proposed model is then embedded into a simulated annealing algorithm to optimize the process parameters, determining proper levels of the process parameters for any desired depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.
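
A minimal sketch of embedding a fitted regression model in a global annealing search, assuming scikit-learn and scipy's dual_annealing in place of the authors' own simulated annealing implementation; all ranges and data are illustrative:

```python
import numpy as np
from scipy.optimize import dual_annealing
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(6)
# Hypothetical experiments: nozzle diameter (mm), traverse rate (mm/s),
# pressure (MPa), abrasive flow (g/s) -> depth of cut (mm).
X = rng.uniform([0.8, 1.0, 100.0, 2.0], [1.2, 5.0, 300.0, 8.0], size=(40, 4))
depth = 0.02 * X[:, 2] * X[:, 3] / (X[:, 1] + 1.0) + rng.normal(0, 0.3, 40)

# Second-order regression model capturing pairwise parameter effects.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), depth)

# Simulated annealing: search for settings that reach a desired depth of cut.
target = 5.0  # mm
obj = lambda x: (model.predict(poly.transform([x]))[0] - target) ** 2
res = dual_annealing(obj, bounds=[(0.8, 1.2), (1, 5), (100, 300), (2, 8)])
print("suggested settings:", res.x.round(2))
```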

Keywords: AWJ cutting, Mathematical modeling, Simulated Annealing, Optimization

1059 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute the Likelihood Ratio (LR) to quantify the strength of evidence under two competing hypotheses, namely the prosecution and defense hypotheses, for which a set of assumptions and methods is made for a given dataset. It is therefore important to know how repeatable and reproducible the estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented, so as to avoid an incorrect estimate being used to deliver a wrong judgment in a court of law. The estimate of LR is fundamentally a Bayesian concept, and we used two LR estimators, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months, to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, based on a handwriting dataset, show that the LR has different confidence intervals, implying that the LR cannot be estimated with the same certainty everywhere. Although LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when computing the strength of evidence in forensic handwriting examination.
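
A minimal sketch of the two LR estimators applied to one comparison score, assuming scikit-learn and synthetic same-writer/different-writer score distributions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(7)
# Hypothetical similarity scores: same-writer vs. different-writer pairs.
same = rng.normal(0.8, 0.10, 300)[:, None]
diff = rng.normal(0.4, 0.15, 300)[:, None]
score = np.array([[0.7]])  # new comparison to evaluate

# KDE estimator: LR = p(score | same writer) / p(score | different writer).
kde_s = KernelDensity(bandwidth=0.05).fit(same)
kde_d = KernelDensity(bandwidth=0.05).fit(diff)
lr_kde = np.exp(kde_s.score_samples(score) - kde_d.score_samples(score))[0]

# LoR estimator: posterior odds from logistic regression (equal priors here).
X = np.vstack([same, diff])
y = np.r_[np.ones(300), np.zeros(300)]
p = LogisticRegression().fit(X, y).predict_proba(score)[0, 1]
lr_lor = p / (1 - p)
print("LR (KDE): %.2f, LR (LoR): %.2f" % (lr_kde, lr_lor))
```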

Keywords: Logistic Regression (LoR), Kernel Density Estimator (KDE), Handwriting, Confidence Interval, Repeatability, Reproducibility.

1058 Reducing Pressure Drop in Microscale Channel Using Constructal Theory

Authors: K. X. Cheng, A. L. Goh, K. T. Ooi

Abstract:

The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to tap the microscale heat transfer effects into macro geometries, overcoming the cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state single-phase liquid flow conditions. However, while heat transfer coefficient values of above 10 kW/m2·K were achieved, the heat transfer enhancement was accompanied by undesirable pressure drop increment. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance for the Tree insert and Constructal fin (Cfin) insert were studied using experimental methods, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve at least comparable increment in both heat transfer coefficient and pressure drop, if not higher increment in the former parameter. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement reduced to less than 5 percent at high Reynolds numbers. On the other hand, the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert has better heat transfer performance in the low Reynolds number region. More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the enhancement of heat transfer was more than 30 percent, with 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, comparable increment in both heat transfer coefficient and pressure drop was observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the increment in pressure drop relative to heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore the potential future study in achieving a bigger stride in energy efficiency at much lower costs.

Keywords: Constructal theory, enhanced heat transfer, microchannel, pressure drop.

1057 Hybrid Rocket Motor Performance Parameters: Theoretical and Experimental Evaluation

Authors: A. El-S. Makled, M. K. Al-Tamimi

Abstract:

A mathematical model to predict the performance parameters (thrust, chamber pressure, fuel mass flow rate, mixture ratio, and regression rate during firing time) of a hybrid rocket motor (HRM) is evaluated. The internal ballistic (IB) hybrid combustion model assumes that the solid fuel surface regression rate is controlled only by heat transfer (convective and radiative) from the flame zone to the burning fuel surface. A laboratory HRM was designed, manufactured, and tested for low-thrust space missions (10-15 N) and for validating the mathematical model (computer program). The polymer materials and gaseous oxidizer selected for this experimental work are polymethyl methacrylate (PMMA) and polyethylene (PE) as solid fuel grains, and gaseous oxygen (GO2) as the oxidizer. The variation of the operational parameters with time was determined systematically and experimentally in firings of up to 20 seconds, and an average combustion efficiency of 95% of theory was achieved, which was the goal of these experiments. The comparison between the recorded firing data and the predicted analytical parameters shows good agreement, with an error that does not exceed 4.5% over the whole firing time. The current mathematical (computer) code can be used as a powerful tool for the analytical design of HRM parameters.

Keywords: Hybrid combustion, internal ballistics, hybrid rocket motor, performance parameters.

1056 Climatic Factors Affecting Influenza Cases in Southern Thailand

Authors: S. Youthao, M. Jaroensutasinee, K. Jaroensutasinee

Abstract:

This study investigated climatic factors associated with influenza cases in Southern Thailand. The main aim was to use regression analysis to investigate possible causal relationships between climatic factors and influenza variability on the Andaman Sea and Gulf of Thailand coasts. Southern Thailand had the highest influenza incidence among Thailand's four regions (north, northeast, central, and south). In this study, there were 14 climatic factors: mean relative humidity, maximum relative humidity, minimum relative humidity, rainfall, rainy days, daily maximum rainfall, pressure, maximum wind speed, mean wind speed, sunshine duration, mean temperature, maximum temperature, minimum temperature, and temperature difference (i.e. maximum minus minimum temperature). A multiple stepwise regression technique was used to fit the statistical model. The results indicated that the mean wind speed and the minimum relative humidity were positively associated with the number of influenza cases on the Andaman Sea side, while the maximum wind speed was positively associated with the number of influenza cases on the Gulf of Thailand side.

Keywords: Influenza, Climatic Factor, Relative Humidity, Rainfall, Pressure, Wind Speed, sunshine duration, Temperature, Andaman Sea, Gulf of Thailand, Southern Thailand.

1055 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform

Authors: Omaima N. Ahmad AL-Allaf

Abstract:

Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary, and digital watermarking techniques play an important role as a valid solution to authority problems. Digital image watermarking techniques hide watermarks in images to achieve copyright protection and prevent illegal copying; watermarks need to be robust to attacks and maintain data quality. We therefore discuss in this paper two approaches to image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with both approaches for the embedding process, transforming the cover image. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients of the original image in which to hide the watermark bits. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In the experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations, and 3×3 blocks. According to the results, a small block size can affect the quality of PSO/GA-based image watermarking, because a small block size increases the search area in the watermarked image. Better PSO results were obtained with a swarm size of 100.

Keywords: Image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform.

1054 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be challenging for reservoir engineers due to the various diagenetic processes that cause rock properties to vary throughout the reservoir. A good estimate of reservoir heterogeneity, defined as the variation in rock properties with location in a reservoir or formation, helps in modeling the reservoir and thus offers a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods describe the variations in rock properties based on the similarity of the environments in which different beds were deposited. To illustrate the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both of which are reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum from that point of view. In this paper, we correlated the statistics-based Lorenz method with a petroleum concept, namely the Kozeny-Carman equation, and derived the straight-line Lorenz plot for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in South Iran and discussed each separately, with numbers and figures. As expected, these methods show great departure from homogeneity; therefore, for future investment, the reservoir needs to be treated carefully.
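
A minimal sketch of computing the Lorenz coefficient from layer data, assuming NumPy; the permeability and porosity values are illustrative:

```python
import numpy as np

# Hypothetical layers: permeability k (mD), porosity phi, thickness h (m).
k = np.array([250.0, 120.0, 60.0, 30.0, 10.0, 2.0])
phi = np.array([0.22, 0.20, 0.18, 0.15, 0.12, 0.08])
h = np.ones_like(k)

# Order layers by decreasing k/phi, then build the Lorenz curve:
# cumulative flow capacity (k*h) vs. cumulative storage capacity (phi*h).
order = np.argsort(k / phi)[::-1]
flow = np.concatenate([[0.0], np.cumsum((k * h)[order])]) / (k * h).sum()
store = np.concatenate([[0.0], np.cumsum((phi * h)[order])]) / (phi * h).sum()

# Lorenz coefficient: twice the area between the curve and the diagonal.
area_under = np.sum(0.5 * (flow[1:] + flow[:-1]) * np.diff(store))
L = 2.0 * (area_under - 0.5)
print("Lorenz coefficient: %.2f (0 = perfectly homogeneous)" % L)
```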

Keywords: Carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L).

1053 The Role of Personality Characteristics and Psychological Harassment Behaviors Which Employees Are Exposed on Work Alienation

Authors: H. Serdar Öge, Esra Çiftçi, Kazım Karaboğa

Abstract:

The main purpose of this research is to address the role of the psychological harassment behaviors (mobbing) to which employees are exposed, and of personality characteristics, in work alienation. The research population was composed of the employees of a Provincial Special Administration. A survey with four sections was created to measure the variables and meet the basic goals of the research. Correlation and stepwise regression analyses were performed to investigate the separate and overall effects of the sub-dimensions of psychological harassment behaviors and of personality characteristics on employees' work alienation. Correlation analysis revealed significant but weak relationships between work alienation and psychological harassment and personality characteristics. Stepwise regression analysis also revealed significant relationships between work alienation and assault on personality and direct negative behaviors (sub-dimensions of mobbing), and openness (a sub-dimension of personality characteristics). Each variable was introduced into the model step by step to investigate the effect of the significant variables in explaining the variation in work alienation. While the explanation ratio of the first model was 13%, the last model, including three variables, had an explanation ratio of 24%.

Keywords: Alienation, Five-Factor Personality Characteristics, Mobbing, Psychological Harassment, Work Alienation.

1052 New Classes of Salagean type Meromorphic Harmonic Functions

Authors: Hakan Bostancı, Metin Öztürk

Abstract:

In this paper, a necessary and sufficient coefficient condition is given for functions in a class of complex-valued meromorphic harmonic univalent functions of the form f = h + ḡ, using the Salagean operator. Furthermore, distortion theorems, extreme points, convolution conditions, and convex combinations for this family of meromorphic harmonic functions are obtained.

Keywords: Harmonic mappings, Meromorphic functions, Salagean operator.

1051 Development of Integrated GIS Interface for Characteristics of Regional Daily Flow

Authors: Ju Young Lee, Jung-Seok Yang, Jaeyoung Choi

Abstract:

This paper primarily intends to develop a GIS interface for estimating sequences of stream-flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface is composed of three major steps. The first step uses statistical analysis of precipitation characteristics to build a multiple linear regression equation for the long-term mean daily flow at ungauged stations; the independent variables in the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method; however, the method for obtaining mean flow data can be selected by the user, such as Kriging, IDW (Inverse Distance Weighting), and Spline methods, as well as the traditional ones. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs at gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain the watershed/topographic characteristics, which are the most important factors governing stream-flows. In summary, the simulated daily flow time series are compared with the observed time series; the results using the integrated GIS interface are closely similar and fit each other well. Also, the stream-flow time series is highly correlated with the topographic/watershed characteristics.

Keywords: Integrated GIS interface, spatial interpolation algorithm, FDC.

1050 Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan

Abstract:

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. Qualitative parameters such as apparent mineralization and the total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level-4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with normal and abnormal status are estimated for both the approximation and horizontal coefficients. In almost all cases, the abnormal group shows a higher degree of correlation than the normal group, and the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors for delineating the normal and abnormal groups.
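
A minimal sketch of the level-4 Haar decomposition and sub-band statistics (mode omitted for brevity), assuming PyWavelets and a random stand-in image:

```python
import numpy as np
import pywt

rng = np.random.default_rng(8)
img = rng.random((256, 256))  # stand-in for a digitized femur radiograph

# Level-4 Haar decomposition: coeffs[0] is the approximation sub-band and
# coeffs[1] holds the (horizontal, vertical, diagonal) details at level 4.
coeffs = pywt.wavedec2(img, "haar", level=4)
approx, (horiz, _, _) = coeffs[0], coeffs[1]

def stats(c):
    c = c.ravel()
    return {"mean": c.mean(), "median": np.median(c), "std": c.std(),
            "mean_abs_dev": np.abs(c - c.mean()).mean(),
            "median_abs_dev": np.median(np.abs(c - np.median(c)))}

print("approximation:", {k: round(v, 3) for k, v in stats(approx).items()})
print("horizontal:", {k: round(v, 3) for k, v in stats(horiz).items()})
```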

Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.

1049 Study of Effect of Removal of ShiftRows and MixColumns Stages of AES and AES-KDS on their Encryption Quality and Hence Security

Authors: Krishnamurthy G N, V Ramaswamy

Abstract:

This paper demonstrates the results when either the ShiftRows stage or the MixColumns stage, or both stages, are omitted from the well-known block cipher Advanced Encryption Standard (AES) and its modified version, AES with Key-Dependent S-box (AES-KDS), using the avalanche criterion and other tests, namely encryption quality, correlation coefficient, histogram analysis, and key sensitivity tests.
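
A minimal sketch of the avalanche-criterion measurement, assuming pycryptodome; it tests standard AES, since omitting ShiftRows/MixColumns requires a custom AES implementation as used in the paper:

```python
import os
from Crypto.Cipher import AES  # pycryptodome

def avalanche(key: bytes, pt: bytes, bit: int) -> int:
    """Count ciphertext bits that flip when one plaintext bit is flipped."""
    flipped = bytearray(pt)
    flipped[bit // 8] ^= 1 << (bit % 8)
    c1 = AES.new(key, AES.MODE_ECB).encrypt(pt)
    c2 = AES.new(key, AES.MODE_ECB).encrypt(bytes(flipped))
    diff = int.from_bytes(c1, "big") ^ int.from_bytes(c2, "big")
    return bin(diff).count("1")

key, pt = os.urandom(16), os.urandom(16)
flips = [avalanche(key, pt, b) for b in range(128)]
# Full AES should flip about 64 of 128 bits on average; re-running the same
# measurement on a custom AES with ShiftRows/MixColumns removed quantifies
# the loss in avalanche effect.
print("mean flipped bits: %.1f / 128" % (sum(flips) / len(flips)))
```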

Keywords: Encryption, Decryption, Avalanche, Key sensitivity.

1048 Predictive Factors of Exercise Behaviors of Junior High School Students in Chonburi Province

Authors: Tanida Julvanichpong

Abstract:

Exercise is regarded as necessary and important for enhancing physical performance and psychological health, yet body weight statistics of junior high school students in Chonburi Province exceed the standard, indicating a risk of obesity. To promote exercise among junior high school students in Chonburi Province, essential knowledge concerning the factors influencing exercise is needed. Therefore, this study aims to (1) determine the levels of exercise behavior, past exercise behavior, perceived barriers to exercise, perceived benefits of exercise, perceived self-efficacy to exercise, feelings associated with exercise behavior, influence of the family on exercise, influence of friends on exercise, and perceived influence of the environment on exercise; and (2) examine the ability of each of the above factors, together with personal factors (sex, educational level), to predict exercise behavior. Pender's Health Promotion Model was used as a guide for the study. The sample included 652 junior high school students in Chonburi Province, selected by multi-stage random sampling. Data were collected using self-administered questionnaires and analyzed using descriptive statistics, Pearson's product-moment correlation coefficient, eta, and stepwise multiple regression analysis. The results showed that: 1. Perceived benefits of exercise, influence of teachers, influence of the environment, and feelings associated with exercise behavior were at a high level; influence of the family, exercise behavior, past exercise behavior, perceived self-efficacy to exercise, and influence of friends were at a moderate level; perceived barriers to exercise were at a low level. 2. Exercise behavior was positively and significantly related to perceived benefits of exercise, influence of the family, past exercise behavior, perceived self-efficacy to exercise, influence of friends, influence of teachers, influence of the environment, and feelings associated with exercise behavior (p < .01, respectively), and negatively and significantly related to educational level and perceived barriers to exercise (p < .01, respectively). Exercise behavior was also significantly related to sex (Eta = 0.243, p = .000). 3. Past exercise behavior and influence of the family significantly explained 60.10 percent of the variance in exercise behavior for male students (p < .01). Past exercise behavior, perceived self-efficacy to exercise, perceived barriers to exercise, and educational level significantly explained 52.60 percent of the variance in exercise behavior for female students (p < .01).

Keywords: Predictive factors, exercise behaviors, junior high school.

1047 Experimental Investigation of Heat Transfer and Flow of Nano Fluids in Horizontal Circular Tube

Authors: Abdulhassan Abd. K, Sattar Al-Jabair, Khalid Sultan

Abstract:

We measured the pressure drop and convective heat transfer coefficient of water-based Al (25 nm), Al2O3 (30 nm) and CuO (50 nm) nanofluids flowing through a uniformly heated circular tube in the fully developed laminar flow regime. The experimental results show that the nanofluid friction factor data agree well with the analytical prediction of Darcy's equation for single-phase flow. After reducing the experimental results to the form of Reynolds, Rayleigh and Nusselt numbers, the results show how the local Nusselt number and temperature are distributed along the non-dimensional axial distance from the tube entry. The study established that the nanofluids behave as Newtonian fluids, through the linear relationship between shear stress and strain rate, for three series of nanofluids with Al, Al2O3 and CuO concentrations in water ranging from 0.25 to 2.5 vol%. In addition, the thermophysical properties of the nanofluids (viscosity, specific heat, and density) were measured in order to check the validity of the property equations developed by researchers in this area; the difference between the experimental equations and the measurements does not exceed 3.5%. The study also showed that the increases in the heat transfer coefficient for the Al, Al2O3, and CuO-water nanofluids are 45%, 32%, and 25% respectively with insulation, and 36%, 23%, and 19% without insulation, proving in every case that using insulation is better than not using it. Three types of nanoparticles were used, one metallic and two oxides, to determine which gives the best increase in heat transfer.
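
A minimal sketch of the single-phase laminar benchmark referred to above (Darcy friction factor f = 64/Re); the flow conditions below are illustrative, not the experiment's:

```python
# Laminar single-phase benchmark: Darcy friction factor f = 64/Re.
rho, mu = 1050.0, 1.1e-3       # assumed density (kg/m^3), viscosity (Pa.s)
D, L, v = 0.004, 1.0, 0.25     # tube diameter (m), length (m), velocity (m/s)

Re = rho * v * D / mu          # Reynolds number
f = 64.0 / Re                  # Darcy friction factor, laminar regime
dP = f * (L / D) * 0.5 * rho * v**2   # pressure drop (Pa)
print("Re = %.0f, f = %.4f, dP = %.0f Pa" % (Re, f, dP))
```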

Keywords: Newtonian, NUR factor, Brownian motion
