Search results for: DCT coefficients
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 885

705 Emotional Intelligence and Gender Role Attitudes of Married Individuals: Moderating Role of Gender and Work Status

Authors: Saima Kalsoom, Sobia Masood, Muhammad Faran

Abstract:

This study examined the association between emotional intelligence and the gender role attitudes of married individuals, and tested whether gender and work status moderate the prediction of gender role attitudes from emotional intelligence. A sample of N = 500 married individuals (working men, working women, and housewives) was recruited through purposive convenience sampling, and data were collected using a cross-sectional research design. Indigenous versions of the Gender Role Attitudes Scale and the Perceived Emotional Intelligence Scale were used. Alpha coefficients for both scales and their subscales indicated satisfactory internal consistency and reliability. Correlation analysis showed significant positive correlations of gender role attitudes with emotional intelligence and with its subfactors, i.e., emotional self-regulation, emotional self-awareness, and interpersonal skills. Model testing revealed that gender (the effect was significant for women) and work status (the effect was stronger for married working women than for married working men and housewives) significantly moderated the relationship between emotional intelligence and gender role attitudes in a positive direction. Gender and work status also moderated the relationship between emotional self-regulation (a subfactor of emotional intelligence) and gender role attitudes in a positive direction. In conclusion, this empirical evidence is a vital contribution derived from the traditional and collectivistic socio-cultural context of Pakistan.
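Moderation of the kind tested here is usually expressed as an interaction: the slope of attitudes on emotional intelligence differs across groups. A minimal sketch comparing within-group OLS slopes; all scores below are synthetic and purely illustrative, not the study's data:

```python
# Minimal illustration of a moderation effect: if gender moderates the
# EI -> gender-role-attitudes link, the EI slope differs between groups.
# All data below are invented for illustration only.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Synthetic emotional-intelligence and attitude scores, with a
# deliberately steeper slope for the "women" group.
ei = [1.0, 2.0, 3.0, 4.0, 5.0]
att_women = [2.0, 4.1, 5.9, 8.0, 10.1]   # slope close to 2
att_men   = [2.0, 2.6, 3.1, 3.5, 4.0]    # slope close to 0.5

b_women = slope(ei, att_women)
b_men = slope(ei, att_men)
interaction = b_women - b_men  # a nonzero gap indicates moderation
print(round(b_women, 2), round(b_men, 2), round(interaction, 2))
```

In a full analysis the same gap is estimated as the coefficient of an EI-by-group interaction term in a single regression.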

Keywords: gender role attitudes, emotional intelligence, emotional self-regulation, gender, work status, married working women

Procedia PDF Downloads 70
704 Kinetic Study of Physical Quality Changes in Jumbo Squid (Dosidicus gigas) Slices during High-Pressure Impregnation

Authors: Mario Perez-Won, Roberto Lemus-Mondaca, Fernanda Marin, Constanza Olivares

Abstract:

This study presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration to jumbo squid (Dosidicus gigas) slices. Diffusion coefficients for both water and solids were improved by the process, with the improvement depending on the pressure level. The working conditions were pressures of 100, 250 and 400 MPa, plus atmospheric pressure (0.1 MPa), for time intervals from 30 to 300 seconds at a 15% NaCl concentration. The mathematical expressions used to simulate the mass transfer of water and salt were the Newton, Henderson-Pabis, Page and Weibull models, of which the Weibull and Henderson-Pabis models best fitted the water and salt experimental data, respectively. The water diffusivity coefficients varied from 1.62 to 8.10x10⁻⁹ m²/s, whereas those for salt varied from 14.18 to 36.07x10⁻⁹ m²/s over the selected conditions. Finally, as to the quality parameters studied under the range of experimental conditions, the treatment at 250 MPa yielded minimum hardness in the samples, whereas springiness, cohesiveness and chewiness at the 100, 250 and 400 MPa treatments differed statistically from those of unpressurized samples. The colour parameter L* (lightness) increased, but the b* (yellowish) and a* (reddish) parameters decreased with increasing pressure level. Thus, samples presented a brighter aspect and a mildly cooked appearance. These results support the considerable potential of hydrostatic pressure application as a technique for compound impregnation under high pressure.
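Of the four thin-layer models named, the Weibull form MR = exp(-(t/β)^α) can be fitted by simple linearization, ln(-ln MR) = α·ln t - α·ln β. A self-contained sketch on synthetic moisture-ratio data; the parameter values are invented, not the study's:

```python
import math

# Fit the Weibull model MR = exp(-(t/beta)**alpha) by linearization:
# ln(-ln MR) = alpha*ln(t) - alpha*ln(beta). Data below are synthetic.
alpha_true, beta_true = 0.9, 120.0
times = [30, 60, 90, 150, 210, 300]          # seconds, mirroring the study's range
mr = [math.exp(-((t / beta_true) ** alpha_true)) for t in times]

xs = [math.log(t) for t in times]
ys = [math.log(-math.log(m)) for m in mr]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

alpha_fit = slope                        # slope recovers alpha
beta_fit = math.exp(-intercept / alpha_fit)  # intercept recovers beta
print(round(alpha_fit, 3), round(beta_fit, 1))
```

On noiseless data the linearization recovers the generating parameters exactly; with real measurements a nonlinear least-squares fit is preferable.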

Keywords: colour, diffusivity, high pressure, jumbo squid, modelling, texture

Procedia PDF Downloads 310
703 The Three-Zone Composite Productivity Model of Multi-Fractured Horizontal Wells under Different Diffusion Coefficients in a Shale Gas Reservoir

Authors: Weiyao Zhu, Qian Qi, Ming Yue, Dongxu Ma

Abstract:

Due to the nano-micro pore structures and the massive multi-stage, multi-cluster hydraulic fracturing in shale gas reservoirs, the multi-scale seepage flows are much more complicated than in most conventional reservoirs, and are crucial for the economic development of shale gas. In this study, a new multi-scale non-linear flow model was established and simplified, based on different diffusion and slip correction coefficients. Because different flow laws apply in the fracture network and the matrix zone, a three-zone composite model was proposed. Then, using a conformal transformation combined with the law of equivalent percolation resistance, the productivity equation of a horizontal fractured well was built, with consideration given to diffusion, slip, desorption, and absorption. An analytic solution was derived, and the interference of the multi-cluster fractures was analyzed. The results indicated that the diffusion of the shale gas occurred mainly in the transition and Fick diffusion regions. The matrix permeability was found to be influenced by slippage and diffusion, which was determined by the pore pressure and pore diameter according to the Knudsen number. With increased half-lengths of the fracture clusters, flow conductivity of the fractures, and permeability of the fracture network, the productivity of the fractured well also increased. Meanwhile, as the number of fractures increased, the distance between the fractures decreased, and the productivity increased only slowly due to the mutual interference of the fractures. For the fractured horizontal wells, free gas was found to contribute most of the productivity, while the contribution of desorption increased with increasing pressure difference.
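The regime classification via the Knudsen number mentioned above (Kn = λ/d, with mean free path λ = k_B·T/(√2·π·d_m²·p) from kinetic theory) can be sketched as follows. The regime thresholds are standard textbook values; the methane kinetic diameter, 350 K temperature and pore pressures are illustrative, not the paper's inputs:

```python
import math

KB = 1.380649e-23          # Boltzmann constant, J/K
D_METHANE = 0.38e-9        # kinetic diameter of CH4, m (textbook value)

def knudsen_number(pore_diameter, pressure, temperature=350.0):
    """Kn = mean free path / pore diameter."""
    mfp = KB * temperature / (math.sqrt(2) * math.pi * D_METHANE ** 2 * pressure)
    return mfp / pore_diameter

def flow_regime(kn):
    # Conventional thresholds for gas flow regimes
    if kn < 1e-3:
        return "continuum"
    if kn < 0.1:
        return "slip"
    if kn < 10.0:
        return "transition"
    return "free molecular"

# A 5 nm shale nanopore at two illustrative pore pressures
for p in (5e6, 30e6):      # 5 MPa and 30 MPa
    kn = knudsen_number(5e-9, p)
    print(round(kn, 3), flow_regime(kn))
```

This reproduces the abstract's qualitative point: lowering pore pressure pushes the same pore from the slip regime into the transition regime.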

Keywords: multi-scale, fracture network, composite model, productivity

Procedia PDF Downloads 241
702 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra, and to develop and assess a deep learning model for classifying NIR spectra. Methodology: Six machine learning regression models are employed to predict glucose concentration: support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and a principal component analysis-neural network (PCA-NN). The NIR spectra data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of otherwise indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding reference glucose concentrations are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed using correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
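The evaluation loop described above (ten random train/test splits, then the correlation coefficient R on held-out data) can be sketched without any ML libraries, using a single synthetic absorbance feature as a stand-in for real NIR spectra; all numbers are invented:

```python
import random

# Synthetic stand-in for NIR data: one absorbance feature linearly related
# to glucose concentration (in 20 mg/dl increments, as in the abstract).
random.seed(0)
conc = [20.0 * i for i in range(1, 26)]                 # 20..500 mg/dl
absorb = [0.004 * c + 0.1 + random.gauss(0, 0.002) for c in conc]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

rs = []
for trial in range(10):                                  # ten random splits
    idx = list(range(len(conc)))
    random.shuffle(idx)
    test, train = idx[:5], idx[5:]
    # Simple least-squares fit conc ~ absorb on the training split
    xs = [absorb[i] for i in train]
    ys = [conc[i] for i in train]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    pred = [a + b * absorb[i] for i in test]
    true = [conc[i] for i in test]
    rs.append(pearson_r(pred, true))

r_mean = sum(rs) / len(rs)
print(round(r_mean, 4))
```

The study's actual models (SVMR, ETR, PCA-NN, etc.) replace the simple linear fit, but the split-fit-score loop is the same.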

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 50
701 Mitigation of Wind Loads on a Building Using Small Wind Turbines

Authors: Arindam Chowdhury, Andres Tremante, Mohammadtaghi Moravej, Bodhisatta Hajra, Ioannis Zisis, Peter Irwin

Abstract:

Extreme wind events, such as hurricanes, have caused significant damage to buildings, resulting in losses worth millions of dollars. The roof of a building is most vulnerable to wind-induced damage due to the high suctions it experiences in extreme wind conditions. Wind turbines fitted to buildings can help generate energy, but, to our knowledge, their application to wind load mitigation is not well known. This paper presents results from an experimental study assessing the effect of small wind turbines (developed and patented by the first and second authors) on the wind loads on a low-rise building roof. The tests were carried out for an open terrain at the Wall of Wind (WOW) experimental facility at Florida International University (FIU), Miami, Florida, USA, for three cases: bare roof, roof fitted with wind turbines placed close to the roof edges, and roof with wind turbines placed away from the roof edges. Results clearly indicate that the presence of the wind turbines reduced the mean and peak pressure coefficients (less suction) on the roof compared to the bare deck case. Furthermore, the peak pressure coefficients were lower (less suction) when the wind turbines were placed close to the roof edges than away from them. Flow visualization studies using smoke and gravel clearly showed that the turbines disrupted the formation of vortices formed by cornering winds, thereby reducing roof suctions and preventing lift-off of roof coverings. This study shows that wind turbines, besides generating wind energy, can be used to mitigate wind-induced damage to the building roof. Future research should address the effect of these wind turbines on other roof geometries (e.g. hip/gable) in different terrain conditions.

Keywords: wall of wind, wind loads, wind turbine, building

Procedia PDF Downloads 213
700 Production Factor Coefficients Transition through the Lens of State Space Model

Authors: Kanokwan Chancharoenchai

Abstract:

Economic growth can be considered an important element of a country's development process. For a developing country like Thailand to ensure continuous economic growth, the government usually implements various policies to stimulate the economy, which may take the form of fiscal, monetary, trade, and other policies. Understanding the factors related to economic growth could therefore allow the government to introduce proper plans for future economic stimulus, an issue that has caught the interest of policymakers and academics alike. This study investigates explanatory variables for economic growth in Thailand from 2005 to 2017, a total of 52 quarters; the findings contribute to the field of economic growth and provide helpful information to policymakers. The investigation is estimated through a production function with a non-linear Cobb-Douglas specification. The rate of growth is indicated by the change in GDP in natural logarithmic form. The relevant factors included in the estimation cover the three traditional means of production and implicit effects, such as human capital, international activity, and technological transfer from developed countries. Besides, the investigation takes internal and external instabilities into account, proxied by unobserved inflation estimates and the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, uncertainty in the REER of the Thai baht, one-period-lagged GDP, and a dummy for the 2009 world financial crisis, is the most suitable model.
The autoregressive model assumes constant coefficients, which could introduce bias. This is not the case for the recursive-coefficient formulation of the state space model, which allows the coefficients to transition over time and thereby reveals the productivity, or effect, of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|) specification, with the exception of the one-period-lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the fact that those factors appear to have been stable over time since the world financial crisis and the concurrent political situation in Thailand, two events that could lower confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the relatively low technology content of capital goods imported from abroad. The Thai government should apply proactive policies, for instance via taxation and targeted credit policy, to improve technological advancement. Another interesting piece of evidence is the issue of trade openness, which shows a negative transition effect over the sample period. This could be explained by a loss of price competitiveness against imported goods, especially under the widespread implementation of free trade agreements. The Thai government should handle regulations and the investment incentive policy carefully, focusing on strengthening small and medium enterprises.

Keywords: autoregressive model, economic growth, state space model, Thailand

Procedia PDF Downloads 118
699 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator, based on the ID-POS database, for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a single supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, multicollinearity between the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitor influence analysis. In numerical experiments, all types of models were useful for loyal customer classification. The model combining all three methods was the best for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
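Huff's gravity model referred to above gives customer i a probability of patronizing store j proportional to store attractiveness divided by a power of distance. A minimal sketch with the attractiveness exponent fixed at 1 and hypothetical store data:

```python
# Huff's gravity model: P_ij = (S_j / d_ij**beta) / sum_k (S_k / d_ik**beta),
# where S_j is store attractiveness (e.g. floor area) and d_ij the distance
# from customer i's home to store j. All values below are hypothetical.
def huff_probabilities(sizes, distances, beta=2.0):
    weights = [s / (d ** beta) for s, d in zip(sizes, distances)]
    total = sum(weights)
    return [w / total for w in weights]

sizes = [1500.0, 3000.0, 800.0]      # m^2, three competing supermarkets
distances = [0.5, 1.2, 0.4]          # km from one customer's home
probs = huff_probabilities(sizes, distances)
print([round(p, 3) for p in probs])
```

Note that the large but distant store loses to the small nearby one; these probabilities are what allow a chain to estimate competitor influence without the competitors' ID-POS data.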

Keywords: customer value, Huff's gravity model, POS, retailer

Procedia PDF Downloads 89
698 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations

Authors: Kuniyoshi Abe

Abstract:

Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving such linear equations, but its convergence behavior can exhibit a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction; the resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
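For reference, a dependency-free sketch of the standard (sequential) Bi-CGSTAB iteration on a small dense system; the inner products computed each iteration are exactly the global reductions that the parallel variants discussed above restructure:

```python
# Unpreconditioned Bi-CGSTAB on a small dense system, written with plain
# lists so it is self-contained; production codes use sparse matrices.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def bicgstab(A, b, tol=1e-12, maxiter=100):
    n = len(b)
    x = [0.0] * n
    r = [bi - ri for bi, ri in zip(b, matvec(A, x))]
    r_hat = r[:]                      # fixed shadow residual
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(maxiter):
        rho_new = dot(r_hat, r)       # inner product -> global reduction
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho_new / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if dot(s, s) ** 0.5 < tol:    # early exit: s already negligible
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            break
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        rho = rho_new
        if dot(r, r) ** 0.5 < tol:
            break
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = bicgstab(A, b)
res = [bi - ri for bi, ri in zip(b, matvec(A, x))]
print(dot(res, res) ** 0.5)  # residual norm, effectively zero
```

In practice one would call a library routine (e.g. a sparse BiCGSTAB solver) with a preconditioner; the sketch only exposes where the synchronization-heavy dot products sit.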

Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant

Procedia PDF Downloads 128
697 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland

Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli

Abstract:

This work assesses the performance of an analytical model framework to generate daily flow duration curves, FDCs, based on the climatic characteristics of the catchments and on their streamflow recession coefficients. In the analytical model framework, precipitation is considered to be a stochastic process, modeled as a marked Poisson process, and recession is considered to be deterministic, with parameters that can be computed with different models. The analytical model framework was tested on three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated, and glacier. For that purpose, five time intervals were analyzed (the four meteorological seasons and the civil year) and two developments of the model were tested: one considering a linear recession model and the other adopting a nonlinear recession model. These developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework with forward parameter estimation is poor in comparison with inverse estimation, for both the linear and nonlinear models. For the pluvial catchment, the inverse estimation shows exceptionally good results, especially for the nonlinear model, clearly suggesting that the model has the ability to describe FDCs. For the snow-dominated and glacier catchments, the seasonal results are better than the annual ones, suggesting that the model can describe streamflows in those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
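For context, the empirical daily FDC that such an analytical framework aims to reproduce is just the sorted daily record paired with exceedance probabilities. A minimal sketch using the Weibull plotting position and synthetic discharges:

```python
# Empirical flow duration curve: sort daily discharges in decreasing order
# and pair each with its exceedance probability p = rank / (n + 1)
# (Weibull plotting position). Discharges below are synthetic.
def flow_duration_curve(flows):
    ordered = sorted(flows, reverse=True)
    n = len(ordered)
    return [((rank + 1) / (n + 1), q) for rank, q in enumerate(ordered)]

daily_q = [3.2, 1.1, 0.8, 5.6, 2.4, 0.5, 1.9, 4.3, 0.9, 2.7]  # m^3/s
fdc = flow_duration_curve(daily_q)
print(fdc[0], fdc[-1])  # (lowest exceedance, highest flow) and vice versa
```

The analytical framework of the paper replaces this purely empirical curve with one predicted from climate and recession parameters, which is what makes ungauged application possible.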

Keywords: analytical streamflow distribution, stochastic process, linear and non-linear recession, hydrological modelling, daily discharges

Procedia PDF Downloads 126
696 Information Communication Technology (ICT) Using Management in Nursing College under the Praboromarajchanok Institute

Authors: Suphaphon Udomluck, Pannathorn Chachvarat

Abstract:

Information Communication Technology (ICT) management is essential for effective decision making in an organization. The Concerns-Based Adoption Model (CBAM) was employed as the conceptual framework. The purpose of the study was to assess the situation of ICT management in the Colleges of Nursing under the Praboromarajchanok Institute. A multi-stage sample was drawn from 10 participating colleges of nursing; the participants, 280 in total, included directors, vice directors, heads of learning groups, teachers, system administrators, and staff responsible for ICT. The instruments were questionnaires comprising four parts: general information, ICT management, the Stages of Concern (SoC) questionnaire, and the Levels of Use (LoU) of ICT questionnaire. Reliability was tested; the alpha coefficients were 0.967 for ICT management, 0.884 for SoC, and 0.945 for LoU. The data were analyzed by frequency, percentage, mean, standard deviation, Pearson product-moment correlation, and multiple regression. The findings were as follows. The overall score for ICT management was at a high level, and its components were administration, hardware, software, and peopleware. The overall SoC score for ICT was at a high level, and the overall LoU score for ICT was moderate. ICT management had a positive relationship with both the SoC and the LoU of ICT (p < .01). The multiple regression results revealed that administration, hardware, software, and peopleware could predict the SoC of ICT (18.5%) and the LoU of ICT (20.8%). The factor that significantly influenced the SoC was peopleware; the factors that significantly influenced the LoU of ICT were administration, hardware, and peopleware.
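The reliability figures quoted (0.967, 0.884, 0.945) are Cronbach's alpha coefficients, which can be computed directly from item scores. A sketch with invented Likert-style responses, not the study's data:

```python
# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance of totals),
# where k is the number of items. Responses below are invented.
def variance(xs):
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)   # sample variance

def cronbach_alpha(items):
    """items: list of per-item score lists, all of equal length."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# 4 items x 6 respondents, Likert-style scores
items = [
    [3, 4, 4, 5, 2, 3],
    [3, 5, 4, 5, 2, 2],
    [2, 4, 5, 5, 3, 3],
    [3, 4, 4, 4, 2, 3],
]
print(round(cronbach_alpha(items), 3))
```

Values around 0.9 or higher, as reported in the study, indicate strong internal consistency among the questionnaire items.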

Keywords: information communication technology (ICT), management, the concerns-based adoption model (CBAM), stages of concern (SoC), levels of use (LoU)

Procedia PDF Downloads 280
695 Macroeconomic Policy Coordination and Economic Growth Uncertainty in Nigeria

Authors: Ephraim Ugwu, Christopher Ehinomen

Abstract:

Despite efforts by the Nigerian government to harmonize macroeconomic policy implementation by establishing various committees to resolve disputes between the fiscal and monetary authorities, it is still evident that the federal government has continued its expansionary policy by increasing spending, thus creating a huge budget deficit. This study evaluates the effect of macroeconomic policy coordination on economic growth uncertainty in Nigeria from 1980 to 2020. Employing the autoregressive distributed lag (ARDL) bounds testing procedure, the empirical results show that the error correction term, ECM(-1), has a negative sign and is statistically significant, with a t-statistic of -5.612882. Therefore, the gap between the long-run equilibrium value and the actual value of the dependent variable is corrected with a speed of adjustment equal to 77% yearly. The long-run results for the intercept term show that, other things being equal (ceteris paribus), economic growth uncertainty will continue to fall by 7.32%. The coefficient of the fiscal policy variable, PUBEXP, has a positive sign and is statistically significant, implying that as government expenditure increases by 1%, economic growth uncertainty increases by 1.67%. The coefficient of the monetary policy variable, MS, also has a positive sign but is statistically insignificant. The coefficients of the merchandise trade variable, TRADE, and the exchange rate, EXR, show negative signs and are statistically significant, indicating that as the country's merchandise trade and the rate of exchange increase by 1%, economic growth uncertainty falls by 0.38% and 0.06%, respectively. This study therefore advocates proper coordination of monetary, fiscal, and exchange rate policies in order to actualize the goal of achieving stable economic growth.
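The reported speed of adjustment can be read as follows: each year, 77% of the previous year's gap from long-run equilibrium is closed. A toy simulation under that single assumption (the initial gap is arbitrary):

```python
# Error-correction dynamics: each period, a fraction |lambda| of the previous
# gap between the actual value and its long-run equilibrium is closed.
# lambda = -0.77 mirrors the ECM(-1) coefficient's implied adjustment speed.
lam = -0.77
gap = 10.0                     # initial disequilibrium (arbitrary units)
path = [gap]
for year in range(5):
    gap = gap + lam * gap      # gap_t = (1 + lambda) * gap_{t-1}
    path.append(gap)
print([round(g, 3) for g in path])
```

The gap shrinks geometrically by the factor (1 + lambda) = 0.23 per year, so most of any shock is absorbed within two to three years.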

Keywords: macroeconomic, policy coordination, growth uncertainty, ARDL, Nigeria

Procedia PDF Downloads 50
694 Removal of Polycyclic Aromatic Hydrocarbons Present in Tyre Pyrolytic Oil Using Low Cost Natural Adsorbents

Authors: Neha Budhwani

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are formed during the pyrolysis of scrap tyres to produce tyre pyrolytic oil (TPO). Due to their carcinogenic, mutagenic, and toxic properties, PAHs are priority pollutants, so it is essential to remove them from TPO before utilising it as a petroleum fuel alternative (to run an engine). Agricultural wastes have a promising future as biosorbents due to their cost effectiveness, abundant availability, high biosorption capacity, and renewability. Various low-cost adsorbents were prepared from natural sources. The uptake of PAHs present in tyre pyrolytic oil was investigated using various low-cost adsorbents of natural origin, including sawdust (shisham), coconut fiber, neem bark, chitin, and activated charcoal. Adsorption experiments on different PAHs, viz. naphthalene, acenaphthene, biphenyl, and anthracene, were carried out at ambient temperature (25°C) and at pH 7. It was observed that, for any given PAH, the adsorption capacity increases with the lignin content. The Freundlich constants Kf and 1/n were evaluated, and the adsorption isotherms of the PAHs were found to be in agreement with the Freundlich model, while the uptake capacity for PAHs followed the order: activated charcoal > sawdust (shisham) > coconut fiber > chitin. The partition coefficients in acetone-water and the adsorption constants at equilibrium could be linearly correlated with octanol-water partition coefficients. Natural adsorbents are thus a good alternative for PAH removal. Sawdust of Dalbergia sissoo, a by-product of sawmills, was found to be a promising adsorbent for the removal of the PAHs present in TPO, and the adsorbents studied were comparable to some conventional adsorbents.
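The Freundlich constants mentioned above come from the isotherm q = Kf·C^(1/n), conventionally fitted on log-log axes. A sketch with invented equilibrium data, not the study's measurements:

```python
import math

# Freundlich isotherm q = Kf * C**(1/n); linearized,
# ln q = ln Kf + (1/n) * ln C. Data below are invented.
Kf_true, n_true = 2.5, 1.8
C = [0.5, 1.0, 2.0, 5.0, 10.0]                  # equilibrium concentration, mg/L
q = [Kf_true * c ** (1 / n_true) for c in C]    # equilibrium uptake, mg/g

xs = [math.log(c) for c in C]
ys = [math.log(v) for v in q]
n_pts = len(xs)
mx, my = sum(xs) / n_pts, sum(ys) / n_pts
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

one_over_n = slope              # slope of the log-log line is 1/n
Kf = math.exp(intercept)        # intercept recovers ln(Kf)
print(round(Kf, 3), round(1 / one_over_n, 3))
```

A higher Kf corresponds to a higher uptake capacity, which is how adsorbents such as activated charcoal and shisham sawdust are ranked in the study.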

Keywords: natural adsorbent, PAHs, TPO, coconut fiber, wood powder (shisham), naphthalene, acenaphthene, biphenyl, anthracene

Procedia PDF Downloads 199
693 Surface Thermodynamics Approach to Mycobacterium tuberculosis (M-TB) – Human Sputum Interactions

Authors: J. L. Chukwuneke, C. H. Achebe, S. N. Omenyi

Abstract:

This research presents a surface thermodynamics approach to M-TB/HIV-human sputum interactions. It involved the use of the Hamaker coefficient concept as a surface energetics tool in determining the interaction processes, with the surface interfacial energies explained using the van der Waals concept of particle interactions. The Lifshitz derivation for van der Waals forces was applied as an alternative to the contact angle approach, which has been widely used in other biological systems. The methodology involved taking sputum samples from twenty infected persons and twenty uninfected persons for absorbance measurement using a digital ultraviolet-visible spectrophotometer. The variables required for the computations with the Lifshitz formula were derived from the absorbance data, and Matlab software tools were used in the mathematical analysis of the absorbance values produced by the experiments. The Hamaker constants and the combined Hamaker coefficients were obtained using the values of the dielectric constant together with the Lifshitz equation. The absolute combined Hamaker coefficients A132abs and A131abs on both infected and uninfected sputum samples gave A132abs = 0.21631x10⁻²¹ J for M-TB infected sputum and A132abs = 0.18825x10⁻²¹ J for M-TB/HIV infected sputum. The significance of this result is the positive value of the absolute combined Hamaker coefficient, which suggests the existence of net positive van der Waals forces, demonstrating an attraction between the bacteria and the macrophage; this implies that infection can occur. It was also shown that in the presence of HIV, the interaction energy is reduced by 13%, confirming the adverse effects observed in HIV patients suffering from tuberculosis.
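A common textbook approximation for a combined Hamaker coefficient (a shortcut, not the paper's full Lifshitz computation) is the combining rule A132 ≈ (√A11 − √A33)(√A22 − √A33) for materials 1 and 2 interacting across medium 3; its sign signals net attraction or repulsion. A sketch with purely illustrative coefficient values:

```python
import math

# Approximate combining rule:
#   A132 = (sqrt(A11) - sqrt(A33)) * (sqrt(A22) - sqrt(A33)).
# A positive result means net attractive van der Waals interaction between
# particles 1 and 2 across medium 3. All values below are hypothetical.
def combined_hamaker(A11, A22, A33):
    return (math.sqrt(A11) - math.sqrt(A33)) * (math.sqrt(A22) - math.sqrt(A33))

A_bacterium = 0.9e-21    # J, hypothetical (material 1)
A_macrophage = 0.7e-21   # J, hypothetical (material 2)
A_medium = 0.4e-21       # J, hypothetical (sputum as intervening medium 3)

A132 = combined_hamaker(A_bacterium, A_macrophage, A_medium)
print(A132 > 0)  # True -> net attraction, i.e. infection is favoured
```

The rule also shows how a medium can flip the sign: if the medium's coefficient lies between those of the two particles, A132 becomes negative and the interaction is repulsive, which is the design target the paper alludes to for blocking infection.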

Keywords: absorbance, dielectric constant, Hamaker coefficient, Lifshitz formula, macrophage, Mycobacterium tuberculosis, van der Waals forces

Procedia PDF Downloads 239
692 Effect of Temperature on the Binary Mixture of Imidazolium Ionic Liquid with Pyrrolidin-2-One: Volumetric and Ultrasonic Study

Authors: T. Srinivasa Krishna, K. Narendra, K. Thomas, S. S. Raju, B. Munibhadrayya

Abstract:

The densities, speeds of sound, and refractive indices of the binary mixture of the ionic liquid (IL) 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([BMIM][Imide]) and pyrrolidin-2-one (PY) were measured at atmospheric pressure over the temperature range T = (298.15 to 323.15) K. The excess molar volume, excess isentropic compressibility, excess speed of sound, partial molar volumes, and isentropic partial molar compressibility were calculated from the experimental density and speed of sound values. From the experimental data, the excess thermal expansion coefficients and the isothermal pressure coefficient of excess molar enthalpy at 298.15 K were calculated. The results were analyzed and discussed from the point of view of structural changes. Excess properties were calculated and correlated by the Redlich-Kister and the Legendre polynomial equations, and binary coefficients were obtained. Values of excess partial volumes at infinite dilution for the binary system at different temperatures were calculated from the adjustable parameters obtained from the Legendre polynomial and Redlich-Kister smoothing equations. The deviation in refractive indices, ΔnD, and the deviation in molar refraction, ΔRm, were calculated from the measured refractive index values. Equations of state and several mixing rules were used to predict the refractive indices of the binary mixtures; compared with the experimental values by means of the standard deviation, they were found to be in excellent agreement. Using the Prigogine-Flory-Patterson (PFP) theory, the above thermodynamic mixing functions were calculated, and the results obtained from this theory were compared with the experimental results.
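The Redlich-Kister correlation used above expands an excess property as VE = x1·x2·Σk Ak·(x1 − x2)^k. A minimal two-coefficient least-squares fit on synthetic data (the coefficient values are illustrative, not the paper's):

```python
# Two-term Redlich-Kister fit: VE = x1*x2*(A0 + A1*(x1 - x2)), with
# x2 = 1 - x1, solved via the 2x2 normal equations. Data are synthetic.
A0_true, A1_true = -1.8, 0.4
x1s = [0.1, 0.25, 0.4, 0.5, 0.6, 0.75, 0.9]
ve = [x1 * (1 - x1) * (A0_true + A1_true * (2 * x1 - 1)) for x1 in x1s]

# Basis functions evaluated at each composition (note x1 - x2 = 2*x1 - 1)
f0 = [x1 * (1 - x1) for x1 in x1s]
f1 = [x1 * (1 - x1) * (2 * x1 - 1) for x1 in x1s]

s00 = sum(a * a for a in f0)
s01 = sum(a * b for a, b in zip(f0, f1))
s11 = sum(b * b for b in f1)
t0 = sum(a * y for a, y in zip(f0, ve))
t1 = sum(b * y for b, y in zip(f1, ve))

det = s00 * s11 - s01 * s01
A0 = (t0 * s11 - t1 * s01) / det
A1 = (s00 * t1 - s01 * t0) / det
print(round(A0, 3), round(A1, 3))
```

The x1·x2 prefactor forces the excess property to vanish at both pure components, which is why this form is the standard smoothing equation for binary mixture data; real fits typically use three or more Ak terms chosen by an F-test.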

Keywords: density, refractive index, speeds of sound, Prigogine-Flory-Patterson theory

Procedia PDF Downloads 376
691 The Study of Heat and Mass Transfer for Ferrous Materials' Filtration Drying

Authors: Dmytro Symak

Abstract:

Drying is a complex technological, thermal, and energy-intensive process. In many cases, drying is the most costly stage of production, accounting for over 50% of total costs. In Ukraine, over 85% of Portland cement is produced by the wet process, in which energy accounts for almost 60% of the cost of the finished product. Wet cement production consumes over 5500 kJ/kg of clinker, whereas the dry process consumes only 3100 kJ/kg; switching to dry Portland cement production would therefore nearly halve energy costs. The study of raw material drying in Portland cement manufacture is thus a highly relevant task. For fine ferrous materials (small pyrites, red mud, Kyoko clay), filtration drying, one of the most intensive methods, is recommended. The essence of filtration drying is that the heat agent is filtered through a stationary layer of wet material resting on a perforated partition, i.e., through the system "layer of dispersed material - perforated partition". For optimal drying, it is necessary to establish the dependence of the pressure loss in the layer of dispersed material, and of the heat and mass transfer coefficients, on the velocity of the filtering gas flow. In our research, the experimentally determined pressure loss in the layer of dispersed material was generalized in the form of dimensionless complexes, as were the heat transfer coefficients. We also determined the relation between the mass and heat transfer coefficients. As a result of theoretical and experimental investigations, a methodology was developed for calculating the optimal parameters of the thermal agent and the main parameters of the filtration drying installation. A comparison, by standard operating-cost methods, of small pyrites drying in a rotating drum with filtration drying shows savings of up to 618 kWh per 1,000 kg of dry material, and of 700 kWh for filtration drying of clay.

Keywords: drying, cement, heat and mass transfer, filtration method

Procedia PDF Downloads 226
690 Extension and Closure of a Field for Engineering Purpose

Authors: Shouji Yujiro, Memei Dukovic, Mist Yakubu

Abstract:

Fields are important objects of study in algebra, since they provide a useful generalization of many number systems, such as the rational numbers, the real numbers, and the complex numbers. In particular, the usual rules of associativity, commutativity, and distributivity hold. Fields also appear in many other areas of mathematics. When abstract algebra was first being developed, the definition of a field usually did not include commutativity of multiplication, and what we today call a field would have been called either a commutative field or a rational domain. In contemporary usage, a field is always commutative. A structure that satisfies all the properties of a field except possibly commutativity is today called a division ring, a division algebra, or sometimes a skew field; the term non-commutative field is also still widely used. In French, fields are called corps (literally, body), generally regardless of their commutativity; when necessary, a (commutative) field is called a corps commutatif and a skew field a corps gauche. The German word for body is Körper, and this word is used to denote fields; hence the use of the blackboard bold 𝕂 to denote a field. The concept of a field was first (implicitly) used to prove that there is no general formula expressing, in terms of radicals, the roots of a polynomial with rational coefficients of degree 5 or higher. An extension of a field k is a field K containing k as a subfield. One distinguishes between extensions having various qualities. For example, an extension K of a field k is called algebraic if every element of K is a root of some polynomial with coefficients in k; otherwise, the extension is called transcendental. The aim of Galois theory is the study of algebraic extensions of a field. Given a field k, various kinds of closures of k may be introduced: for example, the algebraic closure, the separable closure, the cyclic closure, et cetera.
The idea is always the same: if P is a property of fields, then a P-closure of k is a field K containing k, having property P, and minimal in the sense that no proper subfield of K containing k has property P. For example, if we take P(K) to be the property "every non-constant polynomial f in K[t] has a root in K", then a P-closure of k is just an algebraic closure of k. In general, if P-closures exist for some property P and field k, they are all isomorphic; however, there is in general no preferred isomorphism between two closures.
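A standard worked example may help fix these definitions; this is textbook material, not specific to the paper:

```latex
% Q(sqrt(2)) is an algebraic extension of Q of degree 2:
\sqrt{2}\ \text{is a root of}\ f(t)=t^{2}-2\in\mathbb{Q}[t],
\qquad [\mathbb{Q}(\sqrt{2}):\mathbb{Q}] = \deg f = 2 .
% By contrast, Q(pi)/Q is a transcendental extension, since pi
% satisfies no polynomial with rational coefficients. And C is an
% algebraic closure of R: every non-constant f in C[t] has a root
% in C, while no proper subfield of C containing R has this property.
```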

Keywords: field theory, mechanic maths, supertech, rolltech

Procedia PDF Downloads 333
689 Simulation of Multistage Extraction Process of Co-Ni Separation Using Ionic Liquids

Authors: Hongyan Chen, Megan Jobson, Andrew J. Masters, Maria Gonzalez-Miquel, Simon Halstead, Mayri Diaz de Rienzo

Abstract:

Ionic liquids offer excellent advantages over conventional solvents for the industrial extraction of metals from aqueous solutions, where such extraction processes bring opportunities for recovery, reuse, and recycling of valuable resources and more sustainable production pathways. Recent research on the use of ionic liquids for extraction confirms their high selectivity and low volatility, but there is relatively little focus on how their properties can be best exploited in practice. This work addresses gaps in research on process modelling and simulation, to support the development, design, and optimisation of these processes, focusing on the separation of the highly similar transition metals cobalt and nickel. The study exploits published experimental results, as well as new experimental results, relating to the separation of Co and Ni using trihexyl(tetradecyl)phosphonium chloride. This extraction agent is attractive because it is cheaper, more stable, and less toxic than fluorinated hydrophobic ionic liquids. The process modelling work concerns the selection and/or development of suitable models for the physical properties, the distribution coefficients, the mass transfer phenomena, the extractor unit, and the multi-stage extraction flowsheet. The distribution coefficient model for cobalt and HCl represents an anion exchange mechanism, supported by the literature and by COSMO-RS calculations. Parameters of the distribution coefficient models are estimated by fitting the model to published experimental extraction equilibrium results. The mass transfer model applies Newman's hard sphere model. Diffusion coefficients in the aqueous phase are obtained from the literature, while diffusion coefficients in the ionic liquid phase are fitted to dynamic experimental results. The mass transfer area is calculated from the surface mean diameter of the liquid droplets of the dispersed phase, estimated from the Weber number inside the extractor.
New experiments measure the interfacial tension between the aqueous and ionic phases. The empirical models for predicting the density and viscosity of solutions under different metal loadings are also fitted to new experimental data. The extractor is modelled as a continuous stirred tank reactor with mass transfer between the two phases and perfect phase separation of the outlet flows. A multistage separation flowsheet simulation is set up to replicate a published experiment and compare model predictions with the experimental results. This simulation model is implemented in gPROMS software for dynamic process simulation. The results of single stage and multi-stage flowsheet simulations are shown to be in good agreement with the published experimental results. The estimated diffusion coefficient of cobalt in the ionic liquid phase is in reasonable agreement with published data for the diffusion coefficients of various metals in this ionic liquid. A sensitivity study with this simulation model demonstrates the usefulness of the models for process design. The simulation approach has potential to be extended to account for other metals, acids, and solvents for process development, design, and optimisation of extraction processes applying ionic liquids for metals separations, although a lack of experimental data is currently limiting the accuracy of models within the whole framework. Future work will focus on process development more generally and on extractive separation of rare earths using ionic liquids.
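The effect of adding stages can be sketched with the classical Kremser relation, which assumes a constant linear distribution coefficient and ideal countercurrent stages; this is a simplification for illustration, not the dynamic gPROMS model described above, and the numbers are invented:

```python
def kremser_fraction_remaining(D, solvent_to_feed, n_stages):
    """Fraction of solute left in the raffinate after countercurrent
    extraction with linear equilibrium y = D*x (Kremser equation)."""
    E = D * solvent_to_feed  # extraction factor
    if abs(E - 1.0) < 1e-12:
        return 1.0 / (n_stages + 1)
    return (E - 1.0) / (E ** (n_stages + 1) - 1.0)

# Illustrative numbers only: distribution coefficient 5, solvent/feed = 1
for n in (1, 2, 3):
    print(n, kremser_fraction_remaining(5.0, 1.0, n))
```

Even with a modest distribution coefficient, the fraction of solute left in the raffinate falls roughly geometrically with each added stage, which is why multistage flowsheets are attractive.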

Keywords: distribution coefficient, mass transfer, COSMO-RS, flowsheet simulation, phosphonium

Procedia PDF Downloads 149
688 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings

Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier

Abstract:

Anvil staking is a cold-forming process used in the assembly of plain spherical bearings into a rod-end housing. The process ensures that the bearing outer lip conforms to the chamfer in the matching rod end, producing a lightweight mechanical joint with sufficient strength to meet the push-out load requirement of the assembly. Finite element (FE) analysis is used extensively to predict the behaviour of metal flow in cold-forming processes in support of industrial manufacturing and product development. Ongoing research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients, and strain-rate sensitivity. The improved confidence in these models is intended to eliminate the costly and time-consuming experimental trials otherwise required when introducing new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold-forming operations; however, the understanding of this phenomenon varies significantly, and it is rarely implemented in FE models. In this paper, a new approach to evaluating the relationship between normal contact pressure and friction coefficient is outlined, using friction calibration charts generated via iterative FE models and ring compression tests. Compared to previous research, this new approach greatly improves the prediction of the formed geometry and the forming load during the staking operation. The paper also aims to standardise the FE approach to modelling the ring compression test and determining the friction calibration charts.

Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests

Procedia PDF Downloads 176
687 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine

Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta

Abstract:

Stricter legislation and greater public demand regarding gas emissions and their effects on the environment and on human health have led the automotive industry to reinforce research focused on reducing contamination levels. This reduction can be achieved through improvements to internal combustion engines that lower both specific fuel consumption and air pollutant emissions. Such improvements can be obtained through numerical simulation, a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine in order to analyze the impact of varying the swirl and tumble coefficients on engine performance and air pollutant emissions. Initially, the discharge coefficient is calculated with the Converge CFD 3D software, given that it is an input parameter in GT-Power. Mesh sensitivity tests are made on the 3D geometry built for this purpose, using the mass flow rate in the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and data such as the mass trapped in the cylinder, the heat release rate, and the accumulated released energy are then calculated, so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced into their corresponding objects so that their influence can be observed in comparison with the previously obtained results.

Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient

Procedia PDF Downloads 70
686 Examination of Porcine Gastric Biomechanics in the Antrum Region

Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard

Abstract:

Gastric biomechanics governs a large range of scientific and engineering fields, from gastric health issues to interaction mechanisms between external devices and the tissue. Determination of mechanical properties of the stomach is, thus, crucial, both for understanding gastric pathologies as well as for the development of medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances within medical devices interacting with the gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. Thus, in this study, uniaxial tensile tests of gastric tissue were executed in order to study biomechanical properties within the same individual as well as across individuals. With biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested using eight samples from each stomach (n = 48). The samples were cut so that they followed dominant fiber orientations. Accordingly, from each stomach, four samples were longitudinally oriented, and four samples were circumferentially oriented. A step-wise stress relaxation test with five incremental steps up to 25 % strain with 200 s rest periods for each step was performed, followed by a 25 % strain ramp test with three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as in between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals, when extracting samples from the same area, was in the same order of variation as the samples within the same stomach. 
For samples from the same stomach, the mean deviation percentage over all 20 parameters was 21% and 18% for the longitudinal and circumferential orientations, compared to 25% and 19%, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA, which showed that the 20 material parameters from each of the six stomachs came from the same distribution at a level of statistical significance of P > 0.05. Direction dependency was also examined, and it was found that the maximum stress for longitudinal samples was significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, with the exception of the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show any significant differences between the three strain rates used in the ramp test. Heterogeneity within the same region has not been examined before, yet the importance of the sampling area has been demonstrated in this study. All material parameters found are essential for understanding the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol used may be relevant for compiling a comparative study between the human stomach and the pig stomach.

Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing

Procedia PDF Downloads 392
685 Kinetics of Sugar Losses in Hot Water Blanching of Water Yam (Dioscorea alata)

Authors: Ayobami Solomon Popoola

Abstract:

Yam is mainly a carbohydrate food grown in most parts of the world. It can be boiled, fried, or roasted for consumption in a variety of ways. Blanching is an established heat pre-treatment given to fruits and vegetables prior to further processing such as dehydration, canning, or freezing. The loss of soluble solids during blanching is a serious problem, because a considerable quantity of the water-soluble nutrients is inevitably leached into the blanching water. Without blanching, however, the high residual levels of reducing sugars after extended storage produce a dark, bitter-tasting product, because of Maillard reactions of the reducing sugars at frying temperature. Measurement and prediction of such losses are necessary for economic efficiency in production and to establish the required level of effluent treatment for the blanching water. This paper addresses the problem by investigating the effects of cube size and temperature on the rate of diffusional losses of reducing sugars and total sugars during hot water blanching of water-yam. The study was carried out using four temperature levels (65, 70, 80 and 90 °C) and two cube sizes (0.02 m³ and 0.03 m³) at four time intervals (5, 10, 15 and 20 min). The data obtained were fitted to Fick's non-steady-state equation, from which diffusion coefficients (Da) were obtained. The Da values were then fitted to an Arrhenius plot to obtain activation energies (Ea values) for the diffusional losses. The diffusion coefficients were independent of cube size and time but highly temperature dependent; they were ≥ 1.0 × 10⁻⁹ m²s⁻¹ for reducing sugars and ≥ 5.0 × 10⁻⁹ m²s⁻¹ for total sugars. The Ea values ranged between 68.2 and 73.9 kJ·mol⁻¹ for reducing sugar losses and between 7.2 and 14.3 kJ·mol⁻¹ for total sugar losses. Predictive equations for estimating the amounts of reducing sugars and total sugars as a function of blanching time at various temperatures are also presented.
These equations could be valuable in process design and optimization. However, the amounts of other soluble solids that might have leached into the water along with the reducing and total sugars during blanching were not investigated in this study.
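The Arrhenius step of the analysis can be reproduced in a few lines. The diffusion coefficients and temperatures below are illustrative stand-ins, not the measured water-yam data:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def activation_energy(d1, t1, d2, t2):
    """Ea from the Arrhenius form D = D0 * exp(-Ea / (R*T)),
    given diffusion coefficients d1, d2 at temperatures t1, t2 (K)."""
    return R * math.log(d2 / d1) / (1.0 / t1 - 1.0 / t2)

# Hypothetical values: D doubles between 65 C (338.15 K) and 90 C (363.15 K)
Ea = activation_energy(1.0e-9, 338.15, 2.0e-9, 363.15)
print(round(Ea / 1000.0, 1), "kJ/mol")
```

Fitting ln(Da) against 1/T over all four temperatures, as done in the study, generalizes this two-point formula to a least-squares slope.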

Keywords: blanching, kinetics, sugar losses, water yam

Procedia PDF Downloads 124
684 When Conducting an Analysis of Workplace Incidents, It Is Imperative to Meticulously Calculate Both the Frequency and Severity of Injuries Sustained

Authors: Arash Yousefi

Abstract:

Experts suggest that relying exclusively on a single parameter to convey a situation or establish a condition may not be adequate; assessing and appraising incidents in a system on the basis of accident parameters such as accident frequency, lost workdays, or fatalities may not always be precise and is occasionally erroneous. The accident frequency rate is a metric that relates the number of accidents causing lost work time due to injuries to the total working hours of personnel over a year. Traditionally, it was calculated per one million working hours, but the US Occupational Safety and Health Administration (OSHA) has updated its standards, and a base of 200,000 working hours is now used to compute the frequency rate. It is crucial that the total working hours of employees be represented consistently when calculating individual event and incident numbers. The accident severity rate is a metric used to determine the amount of work time lost during a given period, often a year, in relation to the total number of working hours; it measures the proportion of work hours lost compared to the total number of useful working hours, which provides valuable insight into the number of days lost due to work-related incidents per working hour. Calculating the severity of an incident can be difficult if a worker suffers permanent disability or death; to determine the lost days in such cases, the coefficients specified in the tables of equivalent days for disabling injuries in the OSHA or ANSI standards are used. The accident frequency coefficient denotes the rate at which accidents occur, while the accident severity coefficient quantifies the extent of the damage and injury caused by those accidents. These coefficients are crucial for accurately assessing the magnitude and impact of accidents.
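The two coefficients described above reduce to simple formulas; a minimal sketch using the 200,000-hour base (all figures invented for illustration):

```python
def frequency_rate(recordable_injuries, hours_worked, base=200_000):
    """Injuries per 200,000 employee-hours (OSHA incidence-rate base)."""
    return recordable_injuries * base / hours_worked

def severity_rate(lost_workdays, hours_worked, base=200_000):
    """Lost workdays per 200,000 employee-hours."""
    return lost_workdays * base / hours_worked

# Hypothetical year: 6 injuries, 120 lost days, 100 workers x 2,000 h each
hours = 100 * 2_000
print(frequency_rate(6, hours))   # 6.0
print(severity_rate(120, hours))  # 120.0
```

Passing base=1_000_000 reproduces the traditional per-million-hours convention mentioned above.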

Keywords: incidents, safety, analysis, frequency, severity, injuries, determine

Procedia PDF Downloads 50
683 Adhesion Enhancement of Boron Carbide Coatings on Aluminum Substrates Utilizing an Intermediate Adhesive Layer

Authors: Sharon Waichman, Shahaf Froim, Ido Zukerman, Shmuel Barzilai, Shmual Hayun, Avi Raveh

Abstract:

Boron carbide is a ceramic material with superior properties, such as high chemical and thermal stability, high hardness, and high wear resistance. Moreover, it has a large cross-section for neutron absorption and can therefore be employed in nuclear applications. However, efficient attachment of boron carbide to a metal such as aluminum can be very challenging, mainly because of the formation of aluminum-carbon bonds, which are unstable in a humid environment, the affinity of oxygen to the metal, and the different thermal expansion coefficients of the two materials, which may cause internal stresses and subsequent failure of the bond. Here, we aimed to achieve a strong and durable attachment between the boron carbide coating and the aluminum substrate. For this purpose, we applied Ti as a thin intermediate layer that provides a gradual transition between the thermal expansion coefficients of the configured layers. This layer is continuous and therefore prevents the formation of aluminum-carbon bonds. Boron carbide coatings with a thickness of 1-5 µm were deposited on the aluminum substrate by pulsed-DC magnetron sputtering. Prior to the deposition of the boron carbide layer, the surface was pretreated by energetic ion plasma, followed by deposition of the Ti intermediate adhesive layer in a continuous process. The properties of the Ti intermediate layer were adjusted via the bias applied to the substrate. The boron carbide/aluminum bond was evaluated by various complementary techniques, such as SEM/EDS, XRD, XPS, FTIR spectroscopy, and glow discharge spectroscopy (GDS), in order to explore the structure, composition, and properties of the layers and to study the adherence mechanism of the boron carbide/aluminum contact. Based on the interfacial bond characteristics, we propose a desirable solution for improved adhesion of boron carbide to aluminum using a highly efficient intermediate adhesive layer.

Keywords: adhesion, boron carbide coatings, ceramic/metal bond, intermediate layer, pulsed-DC magnetron sputtering

Procedia PDF Downloads 133
682 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models, which relate a strong-motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions, constitute a critical component of seismic hazard analyses. When a sufficient number of strong-motion records is available, ground motion relations are developed through statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical, or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. The development of a ground motion relation is a challenging process that inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and applicability of the model, there is continuous interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) for selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Like ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection.
Given a set of candidate input variables and the output variable of interest, the LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where few recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5%-damped maximum-direction spectral acceleration is investigated. The candidate predictors considered are magnitude, rupture distance (Rrup), and Vs30. Using the LASSO, the relative importance of the candidate predictors has been ranked. Regression models of increasing complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
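The zeroing behaviour that distinguishes the LASSO from ridge regression is easiest to see in the orthonormal-design special case, where the LASSO estimate is simply a soft-thresholded OLS coefficient. The coefficient values below are invented for illustration and are not the NGA regression results:

```python
def soft_threshold(b, lam):
    """LASSO solution for an orthonormal design: shrink the OLS
    coefficient b toward zero by lam, setting it exactly to zero
    once |b| <= lam."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

# Hypothetical standardized OLS coefficients for three predictors
ols = {"magnitude": 0.80, "log_Rrup": -0.55, "Vs30": 0.10}
lam = 0.2
lasso = {name: soft_threshold(b, lam) for name, b in ols.items()}
print(lasso)  # the weakest predictor is zeroed out: variable selection
```

Ridge regression with the same design would instead scale every coefficient by 1/(1 + λ), never producing exact zeros, which is why it cannot perform variable selection.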

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 295
681 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application

Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior

Abstract:

Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, or convincing. These conditions allow human-robot interaction to approach the naturalness and sensitivity of interaction between humans. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, the use of neural networks and the comparison of models, such as recurrent neural networks and deep neural networks, are proposed in order to classify emotions from speech signals and verify the quality of the recognition. The aim is to enable the deployment of robots in a domestic environment, such as the HERA robot of the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-frequency cepstral coefficients, as well as tests with several features: Delta-MFCC, spectral contrast, and the Mel spectrogram. For the training, validation, and testing of the neural networks, the eNTERFACE'05 database was used, which has 42 speakers of 14 different nationalities speaking English. The data in the chosen database are videos, which were converted into audio for use with the neural networks. As a result, a classification accuracy of 51.969% was obtained with the deep neural network, while the recurrent neural network achieved an accuracy of 44.09%.
The results are more accurate when only the Mel-frequency cepstral coefficients are used for classification with the deep neural network; in only one case is higher accuracy observed with the recurrent neural network, namely when the various features are used with a batch size of 73 and 100 training epochs.

Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks

Procedia PDF Downloads 123
680 Sound Analysis of Young Broilers Reared under Different Stocking Densities in Intensive Poultry Farming

Authors: Xiaoyang Zhao, Kaiying Wang

Abstract:

The choice of stocking density in poultry farming is a potential indicator of poultry welfare level. However, stocking densities are difficult to compare across broiler houses because of many variables, such as species, age and weight, feeding regime, house structure, and geographical location. A method is proposed in this paper to measure the differences between young broilers reared under different stocking densities by sound analysis. Vocalisations of broilers were recorded and analysed under different stocking densities to identify the relationship between sounds and stocking density. Recordings were made continuously for three-week-old chickens in order to evaluate the variation of the sounds emitted by the animals from the beginning. The experimental trial was carried out on an indoor broiler farm; the audio recording procedure lasted 5 days. Broilers were divided into 5 groups with stocking density treatments of 8/m², 10/m², 12/m² (96 birds/pen), 14/m², and 16/m²; all other conditions, including ventilation and feed, were kept the same across groups. The recording and analysis of the chicken sounds were made noninvasively. Sound recordings were manually analysed and labelled using sound analysis software (GoldWave Digital Audio Editor). After the sound acquisition process, Mel Frequency Cepstrum Coefficients (MFCC) were extracted from the sound data, and a Support Vector Machine (SVM) was used as an early detector and classifier. This preliminary study, conducted on an indoor broiler farm, shows that this method can classify the sounds of chickens under different densities economically (only a cheap microphone and recorder are needed), with a classification accuracy of 85.7%. The method can predict the optimum stocking density of broilers when complemented by animal welfare indicators, productivity indicators, and so on.

Keywords: broiler, stocking density, poultry farming, sound monitoring, Mel Frequency Cepstrum Coefficients (MFCC), Support Vector Machine (SVM)

Procedia PDF Downloads 114
679 The Influences of Nurses’ Satisfaction on the Patient Satisfaction with and Loyalty to Korean University Hospitals

Authors: Sung Hee Ahn, Ju Rang Han

Abstract:

Background: With the increasing importance placed by healthcare organizations on patient satisfaction and nurses' job satisfaction, many studies have been conducted. But no research has examined how nurses' satisfaction with the healthcare organization influences patient satisfaction and loyalty. Purpose: This study aims to conceptualize nurses' satisfaction, patient satisfaction with the hospital, and patient loyalty to the hospital using a hypothetical linear structural equation model, and to identify the significance of the path coefficients and the goodness-of-fit indices of the structural equation model. Method: A total of 2,079 nurses and 6,776 patients recruited from 5 university hospitals in South Korea participated in this study. The data on nurses, including ward nurses and outpatient nurses, were collected from June 24th to July 12th at the 204 departments of the 5 hospitals through an on-line survey. The data on the patients, including both inpatients and outpatients, were collected from September 30th to October 24th, 2013 at the 5 hospitals using a structured questionnaire. Nurses' satisfaction was measured using a scale evaluating internal client satisfaction, which is used in the SSM Health Care System in the US. Patient satisfaction with the hospital and nurses and patient loyalty were measured by assessing the patient's intention to revisit and to recommend the hospital to others, using a visual analogue scale. The data were analyzed using SPSS version 21.0 and AMOS version 21.0. Result: The hypothetical model was fairly good in terms of goodness of fit (χ²=64.897 (df=24, p<.001), GFI=.906, AGFI=.823, CFI=.921, NFI=.951, NNFI=.952, RMSEA=.114). The significant path coefficients include the following: 1) Nurses' satisfaction has a significant influence on patient satisfaction with nurses. 2) Patient satisfaction with nurses has a significant influence on patient satisfaction with the hospital.
3)The patient satisfaction with the hospital has significant influence on the patients’ revisit intention. 4)The patient satisfaction with the hospital has significant influence on the patients’ intention to the recommendations of the hospital. Conclusion: These results provide several practical implications to hospital administrators, who should incorporate ways of improving nurses' and patients' satisfaction with the hospital into their health care marketing strategies.
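The abstract reports RMSEA alongside the model chi-square and degrees of freedom. As a minimal sketch of how that index relates to those quantities, the standard point-estimate formula can be computed directly; the effective sample size used in the authors' SEM is not stated in the abstract, so the `n` passed below is purely illustrative.

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """Point estimate of the Root Mean Square Error of Approximation.

    RMSEA = sqrt(max(chi_sq - df, 0) / (df * (n - 1)))

    Values at or below ~0.06-0.08 are conventionally read as good fit.
    """
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Fit statistics reported in the abstract: chi-square = 64.897, df = 24.
# The sample size below (the number of nurse respondents) is an assumption,
# not the N actually used in the authors' model.
print(rmsea(64.897, 24, 2079))
```

Note that when the chi-square falls below the degrees of freedom, the estimate is truncated at zero, which the `max` in the formula handles.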

Keywords: linear structural equation model, loyalty, nurse, patient satisfaction

Procedia PDF Downloads 415
678 Solution of Nonlinear Fractional Programming Problem with Bounded Parameters

Authors: Mrinal Jana, Geetanjali Panda

Abstract:

In this paper, a methodology is developed to solve a nonlinear fractional programming problem in which the coefficients of the objective function and constraints are interval parameters. This model is transformed into a general optimization problem, and the relation between the original problem and the transformed problem is established. Finally, the proposed methodology is illustrated through a numerical example.
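The abstract does not give the authors' transformation, but the setting it describes, a fractional objective whose coefficients are intervals, can be sketched with elementary interval arithmetic. The `Interval` class and the evaluation routine below are illustrative assumptions, not the paper's methodology.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi] standing in for an uncertain coefficient."""
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other: "Interval") -> "Interval":
        # Division is only defined when the denominator excludes zero.
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("denominator interval contains 0")
        r = [1.0 / other.lo, 1.0 / other.hi]
        return self * Interval(min(r), max(r))

def fractional_objective(num_coeffs, den_coeffs, x):
    """Interval value of (sum a_i x_i) / (sum b_i x_i) at a fixed point x."""
    num = Interval(0.0, 0.0)
    den = Interval(0.0, 0.0)
    for a, b, xi in zip(num_coeffs, den_coeffs, x):
        point = Interval(xi, xi)
        num = num + a * point
        den = den + b * point
    return num / den
```

For example, with a single numerator coefficient in [1, 2] and denominator coefficient in [2, 4], the objective at x = 1 evaluates to the interval [0.25, 1.0], which bounds every realization of the uncertain coefficients.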

Keywords: fractional programming, interval valued function, interval inequalities, partial order relation

Procedia PDF Downloads 479
677 Experimental Investigation on the Effect of Cross Flow on Discharge Coefficient of an Orifice

Authors: Mathew Saxon A, Aneeh Rajan, Sajeev P

Abstract:

Many fluid flow applications employ different types of orifices to control the flow rate or to reduce the pressure. Discharge coefficients generally vary from 0.6 to 0.95 depending on the type of orifice, and the tabulated values available for various orifice types can be used in most common applications. The upstream and downstream flow conditions of an orifice are hardly considered while choosing its discharge coefficient, but the literature shows that the discharge coefficient can be affected by the presence of cross flow. Cross flow is defined as the condition wherein a fluid is injected nearly perpendicular to a flowing fluid. Most researchers have worked on water injected into a cross flow of water; the present work deals with water-to-gas systems, in which water is injected in the normal direction into a flowing stream of gas. The test article used in the current work is called a thermal regulator, which is used in a liquid rocket engine to reduce the temperature of hot gas tapped from the gas generator by injecting water into the hot gas, so that a cooler gas can be supplied to the turbine. In a thermal regulator, water is injected through an orifice in the normal direction into the hot gas stream, but the injection orifice had been calibrated under backpressure, with a stagnant gas medium maintained downstream. The motivation for the present study arose from the observation of a lower orifice Cd in flight compared to the calibrated Cd. A systematic experimental investigation is carried out in this paper to study the effect of cross flow on the discharge coefficient of an orifice in a water-to-gas system. The study reveals that there is an appreciable reduction in the discharge coefficient with cross flow compared to that without cross flow. It is found that the discharge coefficient greatly depends on the ratio of the momentum of the injected water to the momentum of the gas cross flow. The effective discharge coefficients of the different orifices were normalized using the discharge coefficient without cross flow, and the normalized curves of effective discharge coefficient against momentum ratio collapse into a single curve. Further, an equation is formulated using the test data to predict the effective discharge coefficient with cross flow from the calibrated Cd value without cross flow.
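The two quantities the abstract correlates, the discharge coefficient and the jet-to-crossflow momentum ratio, can be sketched with their textbook definitions. The incompressible-orifice form of Cd is standard; the momentum flux ratio below is a common definition assumed for illustration, since the abstract does not state the exact ratio the authors used.

```python
import math

def discharge_coefficient(m_dot: float, area: float,
                          rho: float, delta_p: float) -> float:
    """Cd = actual mass flow / ideal mass flow through the orifice.

    For an incompressible liquid, the ideal (loss-free) mass flow is
        m_ideal = A * sqrt(2 * rho * delta_p)
    with A the orifice area, rho the liquid density, delta_p the
    pressure drop across the orifice.
    """
    return m_dot / (area * math.sqrt(2.0 * rho * delta_p))

def momentum_ratio(rho_jet: float, v_jet: float,
                   rho_cross: float, v_cross: float) -> float:
    """Jet-to-crossflow momentum flux ratio, J = (rho_j v_j^2)/(rho_c v_c^2).

    This common definition is an assumption; the abstract only says the
    effective Cd depends on the water-to-gas momentum ratio.
    """
    return (rho_jet * v_jet ** 2) / (rho_cross * v_cross ** 2)
```

As a sanity check with round numbers: a water jet (1000 kg/m³) at 10 m/s entering a gas stream (1 kg/m³) at 100 m/s gives a momentum flux ratio of 10.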

Keywords: cross flow, discharge coefficient, orifice, momentum ratio

Procedia PDF Downloads 103
676 Bi-Dimensional Spectral Basis

Authors: Abdelhamid Zerroug, Mlle Ismahene Sehili

Abstract:

Spectral methods are usually applied to solve uni-dimensional boundary value problems. Taking advantage of the construction of multidimensional bases, we propose a new spectral method for bi-dimensional problems. In this article, we start by constructing bi-spectral bases in different ways, and we also develop new relations to determine the expressions of the spectral coefficients in the expansions of the different partial derivatives. Finally, we propose the principle of a new bi-spectral method for bi-dimensional problems.
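The keywords mention a bi-dimensional Legendre basis. A minimal sketch of the simplest such construction, a tensor product of 1-D Legendre polynomials P_i(x)P_j(y), can be written with NumPy's `legval2d`; the authors' own basis construction may differ, so this only illustrates the tensor-product case.

```python
import numpy as np
from numpy.polynomial import legendre as L

def basis_2d(i: int, j: int, x: float, y: float) -> float:
    """Evaluate the tensor-product basis function P_i(x) * P_j(y).

    A coefficient array with a single 1 at position (i, j) selects
    exactly one term of the double Legendre series.
    """
    c = np.zeros((i + 1, j + 1))
    c[i, j] = 1.0
    return L.legval2d(x, y, c)

# Cross-check against the product of the two 1-D evaluations.
x, y = 0.3, -0.5
p2 = L.legval(x, [0, 0, 1])     # P_2(x)
p3 = L.legval(y, [0, 0, 0, 1])  # P_3(y)
assert np.isclose(basis_2d(2, 3, x, y), p2 * p3)
```

Because the basis factorizes, a 2-D expansion coefficient array can be differentiated one axis at a time with the 1-D derivative operator, which is the kind of relation between coefficients of partial-derivative expansions the abstract refers to.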

Keywords: boundary value problems, bi-spectral methods, bi-dimensional Legendre basis, spectral method

Procedia PDF Downloads 343