Search results for: prediction error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3810

660 EFL Teachers’ Sequential Self-Led Reflection and Possible Modifications in Their Classroom Management Practices

Authors: Sima Modirkhameneh, Mohammad Mohammadpanah

Abstract:

In the process of EFL teachers’ development, self-led reflection (SLR) is thought to have an imperative role because it may help teachers analyze, evaluate, and contemplate what is happening in their classes. Such contemplations can not only enhance the quality of their instruction and provide better learning environments for learners but also improve the quality of their classroom management (CM). Accordingly, understanding the effect of teachers’ SLR practices may help us gain valuable insights into what possible modifications SLR may bring about in all aspects of EFL teachers’ practice, especially their CM. The main purpose of this case study was, thus, to investigate the impact of the SLR practices of 12 Iranian EFL teachers on their CM based on the universal classroom management checklist (UCMC). Another objective of the current study was to form a clear image of EFL teachers’ perceptions of their own SLR practices and their possible outcomes. By conducting repeated reflective interviews, observations, and participant feedback over five teaching sessions, the researcher analyzed the outcomes qualitatively through meaning categorization and data interpretation based on the principles of Grounded Theory. The results demonstrated that EFL teachers utilized SLR practices to improve different aspects of their language teaching skills and CM in different contexts. Almost all participants had positive comments and reactions about the effect of SLR on their CM procedures in different aspects (expectations and routines, behavior-specific praise, error corrections, prompts and precorrections, opportunity to respond, strengths and weaknesses of CM, teachers’ perception, CM ability, and learning process). In other words, the results implied that familiarity with the UCMC criteria and reflective practices contributes to modifying teacher participants’ perceptions about their CM procedures and to incorporating reflective practices into their teaching styles. The results are thought to be valuable for teachers, teacher educators, and policymakers, who are recommended to pay special attention to the contributions as well as the complexity of reflective teaching. The study concludes with more detailed results, implications, and useful directions for future research.

Keywords: classroom management, EFL teachers, reflective practices, self-led reflection

Procedia PDF Downloads 35
659 Assessment of Ocular Morbidity, Knowledge and Barriers to Accessing Eye Care Services among Children Living on an Offshore Island, Bangladesh

Authors: Abir Dey, Shams Noman

Abstract:

Introduction: An offshore island is a remote area isolated from the terrestrial mainland, and its inhabitants are often deprived of basic needs. Children on an offshore island are usually underserved in terms of health care because health care systems in such remote areas are quite poor compared to the mainland. Proper information is therefore required for appropriate planning to reduce the underlying causes of visual deprivation among the children of the offshore island. Purpose: The purpose of this study was to determine ocular morbidities, knowledge, and barriers to eye care services among children on an offshore island. Methods: The study team visited different rural communities at Sandwip Upazila, Chittagong district, and collected data by screening children aged 5-16 years through spot examination. The study was conducted using both qualitative and quantitative methods. To determine the ocular status of the children, examinations were performed under skilled ophthalmologists and optometrists, and a focus group discussion was held. The sample size was 490. It was a community-based descriptive study, and the sampling method was purposive sampling. Results: Of the 490 children, 56.90% were female and 43.10% were male. Among them, 456 were school-going children (93.1%) and 34 were non-school-going children (6.9%). In this study, the most common ocular morbidity was allergic conjunctivitis (35.2%). Other notable ocular morbidities were refractive error (27.7%), blepharitis (13.8%), meibomian gland dysfunction (7.5%), strabismus (6.3%) and amblyopia (6.3%). Most of the non-school-going children were involved in different types of domestic work such as farming and fishing. About 90.04% of children who had ocular abnormalities could not visit a doctor for various reasons. Conclusions: The rate of ocular morbidity was high on the offshore island, and eye health care facilities were not well established there. Awareness should be raised among the island people about the necessity of maintaining hygiene and eye health care. Timely intervention through available eye care facilities and management can reduce the ocular morbidity rate in the area.

Keywords: morbidities, screening, barriers, offshore island, knowledge

Procedia PDF Downloads 136
658 Exclusive Breastfeeding Abandonment among Adolescent Mothers: A Cohort Study

Authors: Maria I. Nuñez-Hernández, Maria L. Riesco

Abstract:

Background: Exclusive breastfeeding (EBF) up to 6 months of age is considered one of the most important factors in the overall development of children. Nevertheless, as resources are scarce, it is essential to identify the most vulnerable groups at major risk of EBF abandonment in order to deliver the best strategies. Children of adolescent mothers are within these groups. Aims: To determine the EBF abandonment rate among adolescent mothers and to analyze the associated factors. Methods: Prospective cohort study of adolescent mothers in the southern area of Santiago, Chile, conducted in primary care services of the public health system. The cohort was established from 2014 to 2015, with a sample of 105 adolescent mothers and their children at 2 months of life. The inclusion criteria were: adolescent mother aged 14 to 19 years; singleton birth; mother and baby leaving the hospital together after childbirth; correct attachment of the baby to the breast; no difficulty understanding the Spanish language or communicating. Follow-up was performed at 4 and 6 months of infant age. Data were collected by interviews, with EBF defined as breastfeeding only, without adding other milk, tea, juice, water or any product other than breast milk, except drugs. Data were analyzed by descriptive and inferential statistics, using the Kaplan-Meier estimator and log-rank test, with a type I error probability of 5% (p-value = 0.05). Results: The cumulative EBF abandonment rate at 2, 4 and 6 months was 33.3%, 52.2% and 63.8%, respectively. Factors associated with EBF abandonment were maternal perception of the quality of milk as poor (p < 0.001), maternal perception that the child was not satisfied after breastfeeding (p < 0.001), use of pacifier (p < 0.001), maternal consumption of illicit drugs after delivery (p < 0.001), mother's return to school (p = 0.040) and presence of nipple trauma (p = 0.045). Conclusion: The EBF abandonment rate was highest in the first 4 months of life and is higher than that of the general population of breastfeeding women. Among the EBF abandonment factors, one is related to the adolescent condition, and two are related to the maternal subjective perception.

Keywords: adolescent, breastfeeding, midwifery, nursing

Procedia PDF Downloads 304
657 Analysis of Dynamics Underlying the Observation Time Series by Using a Singular Spectrum Approach

Authors: O. Delage, H. Bencherif, T. Portafaix, A. Bourdier

Abstract:

The main purpose of time series analysis is to learn about the dynamics behind some time-ordered measurement data. Two approaches are used in the literature to gain better knowledge of the dynamics contained in observation data sequences. The first approach concerns time series decomposition, an important analysis step allowing patterns and behaviors to be extracted as components that provide insight into the mechanisms producing the time series. In many cases, time series are short, noisy, and non-stationary. To provide components that are physically meaningful, methods such as Empirical Mode Decomposition (EMD), Empirical Wavelet Transform (EWT) or, more recently, Empirical Adaptive Wavelet Decomposition (EAWD) have been proposed. The second approach is to reconstruct the dynamics underlying the time series as a trajectory in state space by mapping the time series into a set of Rᵐ lag vectors using the method of delays (MOD). Takens proved that the trajectory obtained with the MOD technique is equivalent to the trajectory representing the dynamics behind the original time series. This work introduces singular spectrum decomposition (SSD), a new adaptive method for decomposing non-linear and non-stationary time series into narrow-band components. This method takes its origin from singular spectrum analysis (SSA), a nonparametric spectral estimation method used for the analysis and prediction of time series. As the first step of SSD is to constitute a trajectory matrix by embedding a one-dimensional time series into a set of lagged vectors, SSD can also be seen as a reconstruction method like MOD. We first give a brief overview of the existing decomposition methods (EMD, EWT, EAWD). The SSD method is then described in detail and applied to experimental time series of observations resulting from total column ozone measurements. The results obtained are compared with those provided by the previously mentioned decomposition methods. We also compare the reconstruction qualities of the observed dynamics obtained from the SSD and MOD methods.
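
As a rough illustration of the embedding step shared by SSA-type methods and the method of delays, the following Python sketch builds a trajectory matrix from a one-dimensional series, decomposes it by SVD, and reconstructs elementary components by diagonal averaging. The window length L and the plain SVD grouping are illustrative assumptions, not the authors' exact SSD algorithm:

    import numpy as np

    def ssa_components(x, L):
        """Embed series x with window L, then split it into elementary components."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        K = N - L + 1
        # trajectory matrix: columns are the K lagged vectors of length L
        X = np.column_stack([x[i:i + L] for i in range(K)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        comps = []
        for i in range(len(s)):
            Xi = s[i] * np.outer(U[:, i], Vt[i])
            # diagonal (Hankel) averaging maps each rank-1 matrix back to a series
            flipped = Xi[::-1]
            comps.append(np.array([flipped.diagonal(k).mean()
                                   for k in range(-(L - 1), K)]))
        return comps  # summing all components recovers x

    t = np.linspace(0, 10, 500)
    series = np.sin(2 * np.pi * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
    parts = ssa_components(series, L=50)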

Keywords: time series analysis, adaptive time series decomposition, wavelet, phase space reconstruction, singular spectrum analysis

Procedia PDF Downloads 88
656 Dutch Disease and Industrial Development: An Investigation of the Determinants of Manufacturing Sector Performance in Nigeria

Authors: Kayode Ilesanmi Ebenezer Bowale, Dominic Azuh, Busayo Aderounmu, Alfred Ilesanmi

Abstract:

There has been a debate among scholars and policymakers about the effects of oil exploration and production on industrial development. In Nigeria, many reforms have resulted in an increase in crude oil production in the recent past, and there is controversy over the importance of oil production for the development of the manufacturing sector. Some scholars claim that oil has been a blessing to the development of the manufacturing sector, while others regard it as a curse. The objective of the study is to determine whether empirical analysis supports the presence of Dutch Disease and de-industrialisation in the Nigerian manufacturing sector between 2019 and 2022. The study employed data sourced from the World Development Indicators, the Nigeria Bureau of Statistics, and the Central Bank of Nigeria Statistical Bulletin on manufactured exports, manufacturing employment, agricultural employment, and service employment, in line with the theory of Dutch Disease, using the unit root test to establish the level of stationarity and the Engle-Granger cointegration test to check the long-run relationship. An Autoregressive Distributed Lag (ARDL) bounds test was also used, and a Vector Error Correction Model was estimated to determine the speed of adjustment of manufacturing exports and the resource movement effect. The results showed that the Nigerian manufacturing industry suffered from both direct and indirect de-industrialisation over the period. The findings also revealed that there was resource movement as labour moved away from the manufacturing sector to both the oil sector and the services sector. The study concluded that Dutch Disease was present in the manufacturing industry and that the problem of de-industrialisation led to the crowding out of manufacturing output. The study recommends that efforts should be made to diversify the Nigerian economy. Furthermore, a conducive business environment should be provided to encourage more involvement of the private sector in the agriculture and manufacturing sectors of the economy.
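
A minimal sketch of the testing sequence described here (a unit root test on each series, then a residual-based Engle-Granger cointegration test), using statsmodels; the series are synthetic placeholders standing in for the study's employment and export variables:

    import numpy as np
    from statsmodels.tsa.stattools import adfuller, coint

    rng = np.random.default_rng(1)
    oil_output = np.cumsum(rng.normal(size=200))            # placeholder I(1) series
    manuf_exports = 0.5 * oil_output + rng.normal(size=200)

    # Step 1: ADF unit root test on each series (H0: unit root / non-stationary)
    adf_stat, adf_p, *_ = adfuller(manuf_exports)
    print(f"ADF p-value: {adf_p:.3f}")

    # Step 2: Engle-Granger two-step cointegration test (H0: no cointegration)
    eg_stat, eg_p, _ = coint(manuf_exports, oil_output)
    print(f"Engle-Granger p-value: {eg_p:.3f}")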

Keywords: Dutch disease, resource movement, manufacturing sector performance, Nigeria

Procedia PDF Downloads 58
655 Curvature Based-Methods for Automatic Coarse and Fine Registration in Dimensional Metrology

Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani

Abstract:

Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces for accuracy, reliability and completeness. The obtained data are aligned and fused into a common coordinate system within a registration technique involving coarse and fine registration. Standardized iterative methods have been established for fine registration, such as Iterative Closest Point (ICP) and its variants. For coarse registration, no conventional method has yet been adopted, despite a significant number of techniques developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance considering curvature similarity has been combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function has been improved by combining point-to-point (P-P) minimization and point-to-plane (P-Pl) minimization with automatic weights. These weights are determined from the curvature features calculated beforehand at each point of the workpiece surface. The algorithms are applied to simulated and real data acquired by a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
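
A schematic Python sketch of the correspondence criterion described above, blending the Euclidean distance with a curvature-similarity term; the weighting w, the k-nearest candidate search, and all data are illustrative assumptions rather than the authors' exact formulation:

    import numpy as np
    from scipy.spatial import cKDTree

    def best_correspondence(p, k_p, targets, k_targets, tree, w=0.3, n_cand=8):
        """Pick the target point minimizing a mixed Euclidean/curvature distance."""
        dists, idx = tree.query(p, k=n_cand)            # geometric candidates
        mixed = (1 - w) * dists + w * np.abs(k_targets[idx] - k_p)
        return idx[np.argmin(mixed)]

    targets = np.random.default_rng(2).random((1000, 3))   # placeholder target cloud
    k_targets = np.random.default_rng(3).random(1000)      # placeholder curvatures
    tree = cKDTree(targets)
    j = best_correspondence(np.array([0.5, 0.5, 0.5]), 0.2, targets, k_targets, tree)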

Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography

Procedia PDF Downloads 410
654 Influence of Stress Relaxation and Hysteresis Effect for Pressure Garment Design

Authors: Chia-Wen Yeh, Ting-Sheng Lin, Chih-Han Chang

Abstract:

Pressure garments have been used to prevent and treat hypertrophic scars following serious burns since the 1970s. The use of pressure garments is believed to hasten the maturation process and decrease the height of scars. Pressure garments are custom made by reducing the circumferential measurements of the patient by 10%-20%, a value called the Reduction Factor. However, the exact reduction value used depends on the subjective judgment of the therapist and the feeling of the patient through a trial-and-error process. The Laplace law can be applied to calculate the pressure delivered by the garment from the circumferential measurements of the patient and the tension profile of the fabrics. The tension profile currently obtained neglects the stress relaxation and hysteresis effects present in most elastic fabrics. The purpose of this study was to investigate the tension attenuation arising from the stress relaxation and hysteresis effects of the fabrics. Samples of pressure garment were obtained from the Sunshine Foundation, a nonprofit organization for burn patients in Taiwan. The wall tension profiles of the pressure garments were measured on a material testing system. Specimens were extended by 10% of the original length and held for 1 hour for the influence of the stress relaxation effect to take place. Then, specimens were extended to 15% of the original length for 10 seconds and reduced to 10% to simulate the donning movement, allowing the influence of the hysteresis effect to take place. The load history was recorded. The stress relaxation effect is evident from the load curves: the wall tension decreased by 8.5%-10% after 60 min of holding. The hysteresis effect is likewise evident: the wall tension increased slightly, then decreased by 1.5%-2.5%, falling below the stress relaxation results after 60 min of holding. Wall tension attenuation of the fabric thus exists due to both stress relaxation and hysteresis, with the influence of hysteresis being greater than that of stress relaxation. These effects should be considered in order to design and evaluate the pressure of pressure garments more accurately.
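
For reference, a minimal sketch of the Laplace-law calculation the abstract refers to, with an attenuation factor standing in for the measured tension loss; the 10% default is taken from the reported 8.5%-10% range, and the example numbers are illustrative only:

    import math

    def interface_pressure(wall_tension, limb_circumference, attenuation=0.10):
        """Laplace law P = T / r for a cylindrical limb segment.

        wall_tension: fabric tension in N/m (per unit garment width)
        limb_circumference: limb circumference in m
        attenuation: fractional tension loss from relaxation/hysteresis
        """
        T = wall_tension * (1.0 - attenuation)
        r = limb_circumference / (2.0 * math.pi)
        return T / r  # pressure in Pa

    # e.g. 200 N/m of fabric tension on a limb of 0.25 m circumference
    print(f"{interface_pressure(200.0, 0.25):.0f} Pa")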

Keywords: hypertrophic scars, hysteresis, pressure garment, stress relaxation

Procedia PDF Downloads 495
653 Adequacy of Advanced Earthquake Intensity Measures for Estimation of Damage under Seismic Excitation with Arbitrary Orientation

Authors: Konstantinos G. Kostinakis, Manthos K. Papadopoulos, Asimina M. Athanatopoulou

Abstract:

An important area of research in seismic risk analysis is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. Several conventional intensity measures of ground motion have been used to estimate its damage potential to structures. Yet, none of them has proved able to predict adequately the seismic damage of any structural system. Therefore, alternative advanced intensity measures which take into account not only ground motion characteristics but also structural information have been proposed. The adequacy of a number of advanced earthquake intensity measures in predicting the structural damage of 3D R/C buildings under seismic excitation attacking the building at an arbitrary incident angle is investigated in the present paper. To achieve this purpose, a symmetric-in-plan and an asymmetric 5-story R/C building are studied. The two buildings are subjected to 20 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along horizontal orthogonal axes forming 72 different angles with the structural axes. The response is computed by non-linear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of the aforementioned seismic damage measures determined for an incident angle of 0°, as well as their maximum values over all seismic incident angles, are correlated with 9 structure-specific ground motion intensity measures. The research identified certain intensity measures which exhibited strong correlation with the seismic damage of the two buildings. However, their adequacy for estimation of the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.

Keywords: damage indices, non-linear response, seismic excitation angle, structure-specific intensity measures

Procedia PDF Downloads 484
652 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the crucial roles in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign of the borrower being unable to pay off the debt and may thus cause a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economic entity, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity, and among these risks, the credit system is the most significant one. Because of the long term and large balance of a mortgage, it is critical to monitor the risk during the performance period. In this project, data on about 300,000 mortgage accounts are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis, the data is cleaned up, and through bivariate analysis, the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into 2 parts, 60% for model development and 40% for in-time model validation. The KS of model development is 31, and the KS for in-time validation is 31, indicating the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data, with a KS of 33. This indicates the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, consumer price index, unemployment rate, inflation rate, etc. The data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), KS is increased from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
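
The KS statistic quoted here measures the maximum separation between the score distributions of delinquent and non-delinquent accounts; a minimal sketch of the computation, where the score samples are placeholders:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    scores_good = rng.normal(650, 60, size=5000)   # placeholder scores, non-delinquent
    scores_bad = rng.normal(600, 60, size=500)     # placeholder scores, delinquent

    # KS on a 0-100 scale, as commonly reported for scorecards
    ks = 100 * ks_2samp(scores_good, scores_bad).statistic
    print(f"KS = {ks:.0f}")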

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 208
651 Theoretical Prediction of the Lifetime of a Sessile Evaporating Droplet in Blade Cooling

Authors: Yang Shen, Yongpan Cheng, Jinliang Xu

Abstract:

Effective blade cooling is of great significance for improving the performance of turbines, and mist cooling is emerging as a promising alternative to traditional single-phase cooling. In mist cooling, the injected droplets evaporate rapidly and cool down the blade surface through the absorbed latent heat, so the lifetime of an evaporating droplet becomes critical for the design of the cooling passages of the blade. So far there have been extensive studies on droplet evaporation, but the isothermal model is applied in most of them. In reality, the surface cooling effect can strongly affect droplet evaporation and can prolong the droplet evaporation lifetime significantly. In our study, a new theoretical model for sessile droplet evaporation with the surface cooling effect is built in toroidal coordinates. Three evaporation modes are analyzed over the evaporation lifetime: the “Constant Contact Radius” (CCR) mode, the “Constant Contact Angle” (CCA) mode, and the “stick-slip” (SS) mode. The dimensionless number E0 is introduced to indicate the strength of evaporative cooling; it is defined from the thermal properties of the liquid and the atmosphere. The model predicts the lifetime of evaporation accurately when validated against available experimental data. The temporal variations of droplet volume, contact angle and contact radius are then presented for the CCR, CCA and SS modes, and the following conclusions are obtained. 1) The larger the dimensionless number E0, the longer the lifetime in all three evaporation cases; 2) The droplet volume over time still follows the “2/3 power law” in the CCA mode, as in the isothermal model without the cooling effect; 3) In the SS mode, a large transition contact angle reduces the evaporation time in the CCR mode and increases the time in the CCA mode, so the overall lifetime is increased; 4) A correction factor for the instantaneous volume of the droplet is derived to predict the droplet lifetime accurately. These findings may be of great significance for exploring the dynamics and heat transfer of sessile droplet evaporation.

Keywords: blade cooling, droplet evaporation, lifetime, theoretical analysis

Procedia PDF Downloads 130
650 Influence of Improved Roughage Quality and Period of Meal Termination on Digesta Load in the Digestive Organs of Goats

Authors: Rasheed A. Adebayo, Mehluli M. Moyo, Ignatius V. Nsahlai

Abstract:

Ruminants are known to relish roughage for productivity, but the effect of its quality on digesta load in the rumen, omasum, abomasum and other distal organs of the digestive tract is as yet unknown. Reticulorumen fill is a strong indicator for long-term control of intake in ruminants. As such, the measurement and prediction of digesta load in these compartments may be crucial to productivity in the ruminant industry. The current study aimed at determining the effect of (a) diet quality on digesta load in the digestive organs of goats, and (b) the period of meal termination on the reticulorumen fill and digesta load in other distal compartments of the digestive tract of goats. Goats were fed urea-treated hay (UTH), urea-sprayed hay (USH) and non-treated hay (NTH). At the end of an eight-week feeding trial period, upon termination of a meal in the morning, afternoon or evening, all goats were slaughtered in random groups of three per day to measure reticulorumen fill and digesta loads in other distal compartments of the digestive tract. Both diet quality and period affected (P < 0.05) the measure of reticulorumen fill. However, reticulorumen fill in the evening was larger (P < 0.05) than in the afternoon, while the afternoon was similar (P > 0.05) to the morning. Also, diet quality affected (P < 0.05) the wet omasal digesta load and the wet abomasum, dry abomasum and dry caecum digesta loads, but did not affect (P > 0.05) the wet or dry digesta loads in other compartments of the digestive tract. The period of measurement did not affect (P > 0.05) the wet omasal digesta load or the wet and dry digesta loads in other compartments of the digestive tract, except the wet abomasum digesta load (P < 0.05) and the dry caecum digesta load (P < 0.05). Both wet and dry reticulorumen fill were correlated (P < 0.05) with the omasum (r = 0.623 and r = 0.723, respectively). In conclusion, the reticulorumen fill of goats decreased with improving roughage quality, and the period of meal termination and measurement of the fill is a key factor in the quantity of digesta load.

Keywords: digesta, goats, meal termination, reticulo-rumen fill

Procedia PDF Downloads 354
649 Application of Artificial Neural Network for Single Horizontal Bare Tube and Bare Tube Bundles (Staggered) of Large Particles: Heat Transfer Prediction

Authors: G. Ravindranath, S. Savitha

Abstract:

This paper presents a heat transfer analysis of a single horizontal bare tube and of bare tube bundles in a staggered arrangement in a gas-solid (air-solid) fluidized bed, with predictions made using an Artificial Neural Network (ANN) based on experimental data. Fluidized beds provide a nearly isothermal environment with a high heat transfer rate to submerged objects: owing to thorough mixing and the large contact area between the gas and the particles, a fully fluidized bed has little temperature variation, and the gas leaves at a temperature close to that of the bed. Measurement of the average heat transfer coefficient was made by the local thermal simulation technique in a cold bubbling air-fluidized bed of size 0.305 m x 0.305 m. Studies were conducted for a single horizontal bare tube of length 305 mm and 28.6 mm outer diameter and for bare tube bundles in a staggered arrangement, using beds of large particles (average particle diameter greater than 1 mm), namely ragi and mustard. Within the range of experimental conditions, the influences of bed particle diameter (Dp) and fluidizing velocity (U), which are significant parameters affecting heat transfer, were studied. Artificial Neural Networks (ANNs) have been receiving increasing attention for simulating engineering systems due to some interesting characteristics such as learning capability, fault tolerance, and non-linearity. Here, a feed-forward architecture trained by the back-propagation technique is adopted to predict the heat transfer behavior found in the experimental results. The ANN is designed to suit the present system, which has 3 inputs and 2 outputs. The network predictions are found to be in very good agreement with the experimentally observed values of the bare tube heat transfer coefficient (hb) and the bare tube Nusselt number (Nub).
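
A minimal sketch of a feed-forward network with 3 inputs and 2 outputs in the spirit of the architecture described; the layer size, scaling and synthetic data are assumptions for illustration, and scikit-learn's MLP stands in for a hand-built back-propagation network:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.random((200, 3))                       # e.g. [Dp, U, bed temperature]
    y = np.column_stack([                          # e.g. [h_b, Nu_b], synthetic here
        100 + 50 * X[:, 0] + 30 * X[:, 1],
        5 + 2 * X[:, 0] * X[:, 1],
    ])

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
    )
    model.fit(X, y)
    print(model.predict(X[:3]))                    # predicted [h_b, Nu_b] pairs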

Keywords: fluidized bed, large particles, particle diameter, ANN

Procedia PDF Downloads 349
648 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Israel: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions and gross domestic product (GDP) for Israel, using time series analysis for the period 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips-Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. The long-run equilibrium in the VECM suggests significant positive impacts of coal and natural gas consumption on GDP in Israel. In the short run, GDP positively affects coal consumption. While there exists a positive unidirectional causality running from coal consumption to the consumption of petroleum products and the direct combustion of crude oil, there exists a negative unidirectional causality running from natural gas consumption to the consumption of petroleum products and the direct combustion of crude oil in the short run. Overall, the results support arguments that there are relationships among environmental quality, energy use and economic output, but the associations can differ by energy source in the case of Israel over the period 1980-2010.
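
A minimal sketch of the Johansen cointegration step and VECM estimation in statsmodels; the three toy series and lag choices are placeholders, and since statsmodels has no built-in Phillips-Perron test, an ADF test would stand in for it here:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

    rng = np.random.default_rng(0)
    trend = np.cumsum(rng.normal(size=120))
    data = pd.DataFrame({
        "gdp": trend + rng.normal(size=120),            # placeholder series
        "coal": 0.8 * trend + rng.normal(size=120),
        "gas": 0.5 * trend + rng.normal(size=120),
    })

    # Johansen test: compare trace statistics with critical values to pick the rank
    jres = coint_johansen(data, det_order=0, k_ar_diff=1)
    print(jres.lr1)        # trace statistics
    print(jres.cvt[:, 1])  # 5% critical values

    # VECM with the chosen rank; alpha holds the short-run adjustment speeds
    vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
    print(vecm.alpha)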

Keywords: CO2 emissions, energy consumption, GDP, Israel, time series analysis

Procedia PDF Downloads 632
647 Temperature and Admixtures Effects on the Maturity of Normal and Super Fine Ground Granulated Blast Furnace Slag Mortars for the Precast Concrete Industry

Authors: Matthew Cruickshank, Chaaruchandra Korde, Roger P. West, John Reddy

Abstract:

Precast concrete element exports are growing in importance in Ireland’s concrete industry, and with the increased global focus on reducing carbon emissions, the industry is exploring more sustainable alternatives, such as using ground granulated blast-furnace slag (GGBS) as a partial replacement for Portland cement. It is well established that GGBS, with its low early age strength development, has limited use in precast manufacturing due to the need for early de-moulding, cutting of pre-stressed strands and lifting. Within this dichotomy, the effects of temperature and admixtures are explored in an attempt to achieve the required very early age strength. Testing of the strength of mortars is mandated in the European cement standard, so here, with 50% GGBS and Super Fine GGBS, three admixture conditions (none, conventional accelerator, novel accelerator) and two early age curing temperature conditions (20°C and 35°C), standard mortar strengths are measured at six ages (16 hours and 1, 2, 3, 7 and 28 days). The present paper describes the effort towards developing maturity curves to aid in understanding the effect of these accelerating admixtures and of GGBS fineness on slag cement mortars, allowing prediction of their strength with time and temperature. This study is of particular importance to the precast industry, where concrete temperature can be controlled. For the climatic conditions in Ireland, heating precast beds for long hours amounts to an additional cost and also contributes to the carbon footprint of the products. When transitioned from mortar to concrete, these maturity curves are expected to play a vital role in predicting the strength of GGBS concrete at a very early age prior to demoulding.
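
Maturity methods relate the temperature history of curing concrete to its strength development; a minimal sketch of the common Nurse-Saul maturity index follows. The Nurse-Saul form and the -10°C datum are standard assumptions, not necessarily the maturity function used by the authors:

    def nurse_saul_maturity(temps_C, dt_hours=1.0, datum_C=-10.0):
        """Degree-hours maturity: M = sum of (T - T0) * dt over the curing record."""
        return sum(max(T - datum_C, 0.0) * dt_hours for T in temps_C)

    # hourly temperature logs for two curing regimes (placeholder values)
    cured_20C = [20.0] * 16          # 16 h at 20 degrees C
    cured_35C = [35.0] * 16          # 16 h at 35 degrees C
    print(nurse_saul_maturity(cured_20C))  # 480 degree-hours
    print(nurse_saul_maturity(cured_35C))  # 720 degree-hours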

Keywords: accelerating admixture, early age strength, ground granulated blast-furnace slag, GGBS, maturity, precast concrete

Procedia PDF Downloads 144
646 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings

Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier

Abstract:

Anvil staking is a cold-forming process used in the assembly of plain spherical bearings into a rod-end housing. The process ensures that the bearing outer lip conforms to the chamfer in the matching rod end to produce a lightweight mechanical joint with sufficient strength to meet the pushout load requirement of the assembly. Finite Element (FE) analysis is used extensively to predict the behaviour of metal flow in cold-forming processes to support industrial manufacturing and product development. Ongoing research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients and strain rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming process of experimental trials in the introduction of new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold-forming operations; however, the understanding of this phenomenon varies significantly and is rarely implemented in FE models. In this paper, a new approach to evaluating the normal contact pressure versus friction coefficient relationship is outlined, using friction calibration charts generated via iterative FE models and ring compression tests. Compared to previous research, this new approach greatly improves the prediction of the forming geometry and the forming load during the staking operation. The paper also aims to standardise the FE approach to modelling ring compression tests and determining friction calibration charts.

Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests

Procedia PDF Downloads 191
645 TNFRSF11B Gene Polymorphisms A163G and G1181C in Prediction of Osteoporosis Risk

Authors: I. Boroňová, J. Bernasovská, J. Kľoc, Z. Tomková, E. Petrejčíková, D. Gabriková, S. Mačeková

Abstract:

Osteoporosis is a complex health disease characterized by low bone mineral density, which is determined by an interaction of genetics with metabolic and environmental factors. Current research in the genetics of osteoporosis is focused on the identification of responsible genes and polymorphisms. The TNFRSF11B gene plays a key role in bone remodeling. The aim of this study was to investigate the genotype and allele distribution of the A163G (rs3102735) osteoprotegerin gene promoter polymorphism and the G1181C (rs2073618) osteoprotegerin first exon polymorphism in a group of 180 unrelated postmenopausal women with diagnosed osteoporosis and 180 normal controls. Genomic DNA was isolated from peripheral blood leukocytes using standard methodology. Genotyping for the presence of the different polymorphisms was performed using Custom TaqMan® SNP Genotyping Assays. Hardy-Weinberg equilibrium was tested for each SNP in the groups of participants using the chi-square (χ2) test. The distribution of the investigated genotypes in the group of patients with osteoporosis was as follows: AA (66.7%), AG (32.2%), GG (1.1%) for the A163G polymorphism; GG (19.4%), CG (44.4%), CC (36.1%) for the G1181C polymorphism. The distribution of genotypes in the normal controls was as follows: AA (71.1%), AG (26.1%), GG (2.8%) for the A163G polymorphism; GG (22.2%), CG (48.9%), CC (28.9%) for the G1181C polymorphism. In the A163G polymorphism, the variant G allele was more common among patients with osteoporosis: 17.2% versus 15.8% in the normal controls. Likewise, in the G1181C polymorphism, a more frequent occurrence of the C allele was observed in the group of patients with osteoporosis (58.3% versus 53.3%). Genotype and allele distributions showed no significant differences (A163G: χ2 = 0.270, p = 0.605; χ2 = 0.250, p = 0.616; G1181C: χ2 = 1.730, p = 0.188; χ2 = 1.820, p = 0.177). Our results represent an initial study; further studies on larger samples, as well as association studies, will be carried out. Knowing the distribution of genotypes is important for assessing the impact of these polymorphisms on various parameters associated with osteoporosis. Screening to identify “at-risk” women likely to develop osteoporosis, with subsequent early intervention, appears to be the most effective strategy to substantially reduce the risks of osteoporosis.
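
A minimal sketch of the Hardy-Weinberg chi-square test applied to genotype counts like those reported above; the counts are back-calculated from the stated percentages of the 180 patients, so treat them as approximate:

    import numpy as np
    from scipy.stats import chi2

    def hwe_chi_square(n_AA, n_Aa, n_aa):
        """Chi-square goodness-of-fit test against Hardy-Weinberg proportions."""
        n = n_AA + n_Aa + n_aa
        p = (2 * n_AA + n_Aa) / (2 * n)           # frequency of the A allele
        q = 1.0 - p
        expected = np.array([p * p, 2 * p * q, q * q]) * n
        observed = np.array([n_AA, n_Aa, n_aa])
        stat = np.sum((observed - expected) ** 2 / expected)
        return stat, chi2.sf(stat, df=1)          # 1 df for a biallelic locus

    # A163G counts in the osteoporosis group: ~66.7% AA, 32.2% AG, 1.1% GG of 180
    stat, p_value = hwe_chi_square(120, 58, 2)
    print(f"chi2 = {stat:.3f}, p = {p_value:.3f}")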

Keywords: osteoporosis, real-time PCR method, SNP polymorphisms

Procedia PDF Downloads 313
644 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder

Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen

Abstract:

Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction, a task also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative Encoder-Decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called the Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guidance of the explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark, explanation-Stanford Natural Language Inference (e-SNLI), demonstrate that the proposed VariationalEG achieves a significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
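
As a generic illustration of the variational machinery involved (a Gaussian latent variable with the reparameterization trick and a KL regularizer), here is a minimal PyTorch sketch; it is a textbook VAE component, not the authors' VariationalEG architecture or its Logic Supervision objective:

    import torch
    import torch.nn as nn

    class GaussianLatentHead(nn.Module):
        """Map an encoder state h to a latent sample z plus its KL penalty."""
        def __init__(self, d_in, d_z):
            super().__init__()
            self.mu = nn.Linear(d_in, d_z)
            self.logvar = nn.Linear(d_in, d_z)

        def forward(self, h):
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            # KL(q(z|h) || N(0, I)), averaged over the batch
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
            return z, kl

    head = GaussianLatentHead(d_in=256, d_z=32)
    z, kl = head(torch.randn(4, 256))   # z feeds the decoder; kl joins the loss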

Keywords: natural language inference, explanation generation, variational auto-encoder, generative model

Procedia PDF Downloads 128
643 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a time period; once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting the chemical footprints due to large non-linear relationships between predictor and predictand in a large sample set, likely reflecting honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing the hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
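
A minimal Keras sketch of a 1D-CNN regressor over per-pixel spectra, in the spirit of the model described; the band count, layer sizes and single quality target are illustrative assumptions:

    import tensorflow as tf

    N_BANDS = 288  # assumed number of hyperspectral bands

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(N_BANDS, 1)),               # one reflectance spectrum
        tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),                          # honey quality target
    ])
    model.compile(optimizer="adam", loss="mse")
    model.summary()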

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 117
642 Prediction of Positive Cloud-to-Ground Lightning Striking Zones for Charged Thundercloud Based on Line Charge Model

Authors: Surajit Das Barman, Rakibuzzaman Shah, Apurv Kumar

Abstract:

Bushfires are known as one of the dominant factors in creating pyrocumulus thunderclouds, which can ignite new fires through pyrocumulonimbus (pyroCb) lightning strikes and cause major losses of life and property worldwide. Conceptual model-based risk planning would be beneficial for predicting the lightning striking zones on the surface of the earth underneath a pyroCb thundercloud. A pyroCb thundercloud can generate both positive cloud-to-ground (+CG) and negative cloud-to-ground (-CG) lightning, of which +CG tends to ignite more bushfires and cause massive damage to nature and infrastructure. In this paper, a simple line-charge-structured thundercloud model is constructed in 2-D coordinates using the method of image charges to predict the probable +CG lightning striking zones on the earth’s surface for two conceptual thundercloud charge configurations: a tilted dipole structure and a conventional tripole structure with excessive lower positive charge regions that lead to +CG lightning. The electric potential and surface charge density along the earth’s surface are investigated for both structures by continuously adjusting the position and the charge density of their charge regions. Simulation results for the tilted dipole structure confirm the down-shear extension of the upper positive charge region in the direction of the cloud’s forward flank by 4 to 8 km, resulting in negative surface charge density, and +CG lightning would be expected to strike within 7.8 km to 20 km of the cloud in the direction of its forward flank. On the other hand, the conceptual tripole charge structure with an enhanced lower positive charge region develops negative surface charge density on the earth’s surface in the range |x| < 6.5 km beneath the thundercloud and highly favors +CG lightning strikes.
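
To illustrate the method of image charges in this 2-D line-charge setting, here is a minimal sketch of the classical result for the surface charge density induced on a grounded plane by an infinite horizontal line charge, superposed over several charge regions; the charge values, heights and offsets are placeholders, not the paper's configuration:

    import numpy as np

    def induced_sigma(x, lam, h):
        """Surface charge density under an infinite line charge lam (C/m) at height
        h (m): sigma(x) = -lam * h / (pi * (x**2 + h**2)), the image-charge solution."""
        return -lam * h / (np.pi * (x ** 2 + h ** 2))

    def cloud_sigma(x, regions):
        """Superpose line-charge regions given as (x0, height, line charge) tuples."""
        return sum(induced_sigma(x - x0, lam, h) for x0, h, lam in regions)

    x = np.linspace(-20e3, 20e3, 2001)                 # ground coordinate, m
    tripole = [(0.0, 9e3, 2.0), (0.0, 5e3, -3.0), (4e3, 2e3, 1.5)]  # placeholder
    sigma = cloud_sigma(x, tripole)
    print(x[np.argmax(np.abs(sigma))], sigma.min())    # where induction is strongest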

Keywords: pyrocumulonimbus, cloud-to-ground lightning, charge structure, surface charge density, forward flank

Procedia PDF Downloads 96
641 Odor-Color Association Stroop-Task and the Importance of an Odorant in an Odor-Imagery Task

Authors: Jonathan Ham, Christopher Koch

Abstract:

There are consistently observed associations between certain odors and colors, and there is an association between the ability to imagine vivid visual objects and the ability to imagine vivid odors. However, little has been done to investigate how the associations between odors and visual information affect visual processes. This study seeks to understand the relationship between odor imaging, color associations, and visual attention by utilizing a Stroop task based on common odor-color associations. The Stroop task was designed using three fruits with distinct odors that are associated with the color of the fruit: lime with green, strawberry with red, and lemon with yellow. Each possible word-color combination was presented in the experimental trials. When the word matched the associated color (e.g., lime written in green) it was considered congruent; if it did not, it was considered incongruent (lime written in red or yellow). In Experiment I (n = 34), participants were asked to both imagine the odor of the fruit on the screen and identify the fruit, and each word-color combination was presented 20 times (a total of 180 trials, with 60 congruent and 120 incongruent instances). Response times and error rates were recorded. There was no significant difference in either measure between the congruent and incongruent trials. In Experiment II, participants (n = 18) followed the identical procedure with the addition of an odorant in the room. The odorant (orange) was not among the fruits or colors used in the experimental trials. With a fruit-based odorant in the room, response times (measured in milliseconds) differed significantly between congruent and incongruent trials, with incongruent trials (M = 755.919, SD = 239.854) having significantly longer response times than congruent trials (M = 690.626, SD = 198.822), t(17) = 4.154, p < 0.01. This suggests that odor imagery does affect visual attention to colors and the ability to inhibit odor-color associations; however, odor imagery is difficult and appears to be facilitated by the presence of a related odorant.
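
The within-subject comparison reported here is a paired t-test over each participant's mean response times; a minimal sketch with placeholder data:

    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    rt_congruent = rng.normal(690, 50, size=18)          # placeholder per-subject means, ms
    rt_incongruent = rt_congruent + rng.normal(65, 40, size=18)

    t_stat, p_value = ttest_rel(rt_incongruent, rt_congruent)  # paired test, df = 17
    print(f"t(17) = {t_stat:.3f}, p = {p_value:.4f}")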

Keywords: odor-color associations, odor imagery, visual attention, inhibition

Procedia PDF Downloads 157
640 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion

Authors: Omran M. Kenshel, Alan J. O'Connor

Abstract:

Estimating the service life of Reinforced Concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers have relied mostly on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions of the service life of RC structures, in order to direct funds to the bridges found to be the most critical. The criticality of the structure can be considered either from the Structural Capacity (i.e. Ultimate Limit State) or from the Serviceability viewpoint, whichever is adopted. This paper considers the service life of the structure only from the Structural Capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, the probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. The authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF) by which the spatial fluctuation of a certain deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexural failure rather than by shear failure.
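
A minimal sketch of Monte Carlo reliability estimation for a resistance-load limit state g = R - S; the lognormal/normal distributions and their parameters are placeholder assumptions, not the paper's corrosion model:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 1_000_000

    # placeholder distributions: degraded flexural resistance R and load effect S (kNm)
    R = rng.lognormal(mean=np.log(500.0), sigma=0.15, size=n)
    S = rng.normal(loc=350.0, scale=40.0, size=n)

    pf = np.mean(R - S < 0.0)            # probability of failure, P[g < 0]
    beta = -norm.ppf(pf)                 # corresponding reliability index
    print(f"Pf = {pf:.2e}, beta = {beta:.2f}")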

Keywords: chloride-induced corrosion, Monte Carlo simulation, reinforced concrete, spatial variability

Procedia PDF Downloads 461
639 An Experimental Investigation of the Surface Pressure on Flat Plates in Turbulent Boundary Layers

Authors: Azadeh Jafari, Farzin Ghanadi, Matthew J. Emes, Maziar Arjomandi, Benjamin S. Cazzolato

Abstract:

The turbulence within the atmospheric boundary layer induces highly unsteady aerodynamic loads on structures. These loads, if not accounted for in the design process, can lead to structural failure and are therefore important for the design of structures. For accurate prediction of wind loads, an understanding of the correlation between atmospheric turbulence and the aerodynamic loads is necessary. The aim of this study is to investigate the effect of turbulence within the atmospheric boundary layer on the surface pressure on a flat plate over a wide range of turbulence intensities and integral length scales. The flat plate is chosen as a fundamental geometry representing structures such as solar panels and billboards. Experiments were conducted at the University of Adelaide large-scale wind tunnel. Two wind tunnel boundary layers with different intensities and length scales of turbulence were generated using two sets of spires with different dimensions and a fetch of roughness elements. Average longitudinal turbulence intensities of 13% and 26% were achieved in the two boundary layers, and the longitudinal integral length scale within the boundary layers was between 0.4 m and 1.22 m. The pressure distributions on a square flat plate at elevation angles between 30° and 90° were measured within the two boundary layers with different turbulence intensities and integral length scales. It was found that the peak pressure coefficient on the flat plate increased with increasing turbulence intensity and integral length scale. For example, the peak pressure coefficient on a flat plate elevated at 90° increased from 1.2 to 3 as the turbulence intensity increased from 13% to 26%. Furthermore, both the mean and the peak pressure distributions on the flat plates varied with turbulence intensity and length scale. The results of this study can be used to provide a more accurate estimation of the unsteady wind loads on structures such as buildings and solar panels.
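
For reference, pressure measurements like these are normalised into a pressure coefficient by the freestream dynamic pressure; a minimal sketch follows, where the placeholder tap data and the 99.5th-percentile definition of the "peak" are assumptions:

    import numpy as np

    def pressure_coefficient(p, p_inf, rho, U):
        """Cp = (p - p_inf) / (0.5 * rho * U**2)."""
        return (p - p_inf) / (0.5 * rho * U ** 2)

    rng = np.random.default_rng(0)
    p_series = 101325.0 + rng.normal(400.0, 150.0, size=10_000)  # placeholder taps, Pa
    cp = pressure_coefficient(p_series, p_inf=101325.0, rho=1.225, U=20.0)
    print(f"mean Cp = {cp.mean():.2f}, peak Cp = {np.percentile(cp, 99.5):.2f}")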

Keywords: atmospheric boundary layer, flat plate, pressure coefficient, turbulence

Procedia PDF Downloads 123
638 Examining the Development of Complexity, Accuracy and Fluency in L2 Learners' Writing after L2 Instruction

Authors: Khaled Barkaoui

Abstract:

Research on second-language (L2) learning tends to focus on comparing students with different levels of proficiency at one point in time. However, to understand L2 development, we need more longitudinal research. In this study, we adopt a longitudinal approach to examine changes in three indicators of L2 ability, complexity, accuracy, and fluency (CAF), as reflected in the writing of L2 learners on different tasks before and after a period of L2 instruction. Each of 85 Chinese learners of English at three levels of English language proficiency responded to two writing tasks (independent and integrated) before and after nine months of English-language study in China. Each essay (N = 276) was analyzed in terms of numerous CAF indices using both computer coding and human rating: number of words written, number of errors per 100 words, ratings of error severity, global syntactic complexity (MLS), complexity by coordination (T/S), complexity by subordination (C/T), clausal complexity (MLC), phrasal complexity (NP density), syntactic variety, lexical density, lexical variation, lexical sophistication, and lexical bundles. Results were then compared statistically across tasks, L2 proficiency levels, and time. Overall, task type had significant effects on fluency, on some syntactic complexity indices (complexity by coordination, structural variety, clausal complexity, phrase complexity), and on lexical density, sophistication, and bundles, but not on accuracy. L2 proficiency had significant effects on fluency, accuracy, and lexical variation, but not on syntactic complexity. Finally, fluency and frequency of errors, but not accuracy ratings, together with syntactic complexity indices (clausal complexity, global complexity, complexity by subordination, phrase complexity, structural variety) and lexical complexity (lexical density, variation, and sophistication), exhibited significant changes after instruction, particularly for the independent task. We discuss the findings and their implications for assessment, instruction, and research on CAF in the context of L2 writing.

Keywords: second language writing, fluency, accuracy, complexity, longitudinal

Procedia PDF Downloads 133
637 Deorbiting Performance of Electrodynamic Tethers to Mitigate Space Debris

Authors: Giulia Sarego, Lorenzo Olivieri, Andrea Valmorbida, Carlo Bettanini, Giacomo Colombatti, Marco Pertile, Enrico C. Lorenzini

Abstract:

International guidelines recommend removing any artificial body in Low Earth Orbit (LEO) within 25 years of mission completion. Among disposal strategies, electrodynamic tethers appear to be a promising option for LEO, thanks to their limited storage mass and minimal interface requirements to the host spacecraft. In particular, recent technological advances make it feasible to deorbit large objects with tether lengths of a few kilometers or less. To further investigate such an innovative passive system, the European Union is currently funding the project E.T.PACK – Electrodynamic Tether Technology for Passive Consumable-less Deorbit Kit in the framework of the H2020 Future Emerging Technologies (FET) Open program. The project focuses on the design of an end-of-life disposal kit for LEO satellites. This kit aims to deploy a tape tether that can be activated at the spacecraft’s end of life to perform autonomous deorbit within the international guidelines. In this paper, the orbital performance of the E.T.PACK deorbiting kit is compared to other disposal methods. Besides, the orbital decay prediction is parametrized as a function of spacecraft mass and tether system performance. Different values of tether length, width, and thickness will be evaluated for various scenarios (i.e., different initial orbital parameters). The results will be compared to other end-of-life disposal methods with similar allocated resources. The performance of a more innovative configuration, in which the tape is coated with a low-work-function thermionic material so that no active cathode component is required, will also be briefly discussed. The results show that the electrodynamic tether option can be a competitive and performant solution for satellite disposal compared to other deorbit technologies.

Keywords: deorbiting performance, H2020, spacecraft disposal, space electrodynamic tethers

Procedia PDF Downloads 154
636 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks

Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton

Abstract:

Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and the availability of qualified personnel. Thus, new approaches to blade dynamics identification that provide faster and more accurate results are sought. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in condition monitoring of blades. The analysis provides useful information on the different modes of vibration and natural frequencies by exploring the different shapes that can be taken up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios and thus to a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation and low computational cost, for which the traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that an artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes, without the need for extensive signal analysis. The approach offers the advantages that the network can classify mode shapes in real time, is simple to implement, and provides accurate predictions. The work paves the way for further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.

Keywords: modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition

Procedia PDF Downloads 138
635 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network

Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy

Abstract:

The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations, which encode an instance as a pattern of activity across layers of nodes, afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space and trade away the ability to validate that input information lies within the bounds of past experience. In contrast, a localist representation, which encodes meaningful information in individual nodes of a network layer, affords less memory compression while retaining the integrity of the representational space; this allows the validity of an input to be determined. The validity (or familiarity) of the input, together with the capacity of a localist representation for multiple instance selections, affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be kept high by referring only to the most similar instances in memory; when the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance.
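
A minimal sketch of the familiarity-driven sampling idea, under the assumption that instances are stored explicitly and familiarity is derived from the distance to the nearest stored instance. Class and parameter names are illustrative, not the authors' implementation.

import numpy as np

class InstanceMemory:
    """Localist instance memory with familiarity-driven sampling (sketch)."""
    def __init__(self, X, y, k_min=1, k_max=25, radius=1.0):
        self.X = np.asarray(X, float)
        self.y = np.asarray(y, int)
        self.k_min, self.k_max, self.radius = k_min, k_max, radius

    def query(self, x):
        d = np.linalg.norm(self.X - np.asarray(x, float), axis=1)
        familiarity = float(np.exp(-d.min() / self.radius))  # 1 = very familiar
        # familiar input -> small sample (high bias); unfamiliar -> wider sample
        k = int(round(self.k_min + (1.0 - familiarity) * (self.k_max - self.k_min)))
        k = max(1, min(k, len(self.y)))
        votes = self.y[np.argsort(d)[:k]]
        label = int(np.bincount(votes).argmax())
        conflict = 1.0 - float(np.mean(votes == label))      # disagreement in sample
        confidence = familiarity * (1.0 - conflict)
        return label, confidence

# mem = InstanceMemory(train_features, train_labels)
# label, confidence = mem.query(test_feature)  # low confidence -> request supervision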

Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence

Procedia PDF Downloads 111
634 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations

Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward

Abstract:

A reliable, real-time, and non-invasive system that can identify patients at risk of hemodynamic instability is needed to aid clinicians in anticipating patient deterioration and initiating early interventions. The purpose of this pilot study was to explore the clinical capability of a real-time analytic, derived from a single lead of an electrocardiograph, to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, and to predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic's output, clinicians categorized each patient as an H-RRT or NH-RRT case. The analytic output and the categorization were then compared, and the prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG) based analytic has the potential to provide clinical decision and monitoring support, helping caregivers identify at-risk patients within a clinically relevant timeframe and allowing increased vigilance and early interventional support to reduce the chance of continued patient deterioration.
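
The analytic itself is not described in the abstract, so the sketch below only illustrates how single-lead ECG-derived heart rate variability features (one of the listed keywords) might feed a simple risk flag. The indices are standard, but the data and the threshold are purely illustrative; real thresholds must be clinically derived.

import numpy as np

def hrv_indices(rr_ms):
    """Standard short-term heart rate variability indices from a series
    of RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, float)
    sdnn = rr.std(ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability
    return sdnn, rmssd

rr = [812, 790, 845, 798, 760, 805, 770, 830]    # synthetic RR intervals, ms
sdnn, rmssd = hrv_indices(rr)
# purely illustrative cut-off, not a validated clinical criterion
flag = "review" if rmssd < 20.0 else "stable"
print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms -> {flag}")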

Keywords: critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team

Procedia PDF Downloads 134
633 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow

Authors: Musa Akdere, Gunnar Seide, Thomas Gries

Abstract:

Carbon fibres are fibrous materials with a carbon content of more than 90%. They combine excellent mechanical properties with a very low density; thus, carbon fibre reinforced plastics (CFRP) are very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During carbon fibre production, the precursor has to be stabilized thermally to withstand the temperatures of up to 1500 °C that occur during carbonization. Even though carbon fibre has been used in aerospace applications since the late 1970s, there is still no general method for finding the optimal production parameters, and trial and error is most often the only recourse. To gain better insight into the process, the chemical reactions during stabilization have to be analyzed in detail. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to simulate the fibre passing through all zones of stabilization. The fibre bundle is modeled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modeled as a heat source, and differential scanning calorimetry measurements were performed to estimate the heat of the reactions. To shorten the simulation time, the number of fibres is reduced using similitude theory. Experiments were conducted on a pilot-scale stabilization oven to validate the simulated fibre temperatures during stabilization, and a new measuring method was developed to measure the fibre bundle temperature. The comparison of the results shows that the developed simulation model gives good approximations of the temperature profile of the fibre bundle during the stabilization process.
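
A lumped single-element sketch of the two coupled mechanisms, with an Arrhenius-type exothermic source term and Newtonian convective cooling to the surrounding air. All parameter values are placeholders, not the calibrated values obtained from the calorimetry measurements.

import numpy as np
from scipy.integrate import solve_ivp

R = 8.314                    # gas constant, J/(mol K)
A_pre, Ea = 1.0e8, 1.2e5     # pre-exponential factor (1/s), activation energy (J/mol)
dH = 2.0e6                   # reaction enthalpy per kg of fibre, J/kg
cp = 1200.0                  # fibre heat capacity, J/(kg K)
h_conv = 50.0                # convective film coefficient, W/(m^2 K)
area_per_mass = 0.5          # fibre surface area per unit mass, m^2/kg
T_air = 513.0                # oven air temperature, K (about 240 degC)

def rhs(t, s):
    T, alpha = s
    # Arrhenius conversion rate of the stabilization reactions
    dalpha = A_pre * np.exp(-Ea / (R * T)) * (1.0 - alpha)
    # exothermic heating balanced against convective cooling to the air
    dT = (dH * dalpha - h_conv * area_per_mass * (T - T_air)) / cp
    return [dT, dalpha]

sol = solve_ivp(rhs, (0.0, 600.0), [T_air, 0.0], max_step=1.0)
print(f"peak fibre temperature: {sol.y[0].max():.1f} K, "
      f"final conversion: {sol.y[1][-1]:.2f}")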

Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface

Procedia PDF Downloads 253
632 Corruption, Institutional Quality and Economic Growth in Nigeria

Authors: Ogunlana Olarewaju Fatai, Kelani Fatai Adeshina

Abstract:

The interplay of corruption and institutional quality determines how effectively and efficiently an economy progresses, and efficient institutions are a key requirement for economic stability. Institutional quality has in most cases been used interchangeably with governance, which has given room for proxies that legitimize governance indicators as measures of institutional quality. Poorly tailored institutions have a penalizing effect on growth, and defective institutional quality breeds corruption. Corruption is a hydra-headed phenomenon that manifests in different forms. Its most celebrated definition is "the use or abuse of public office for private benefit or gain"; it can also denote an arrangement between two consenting parties to determine and allocate state resources for pecuniary benefit, circumventing state efficiency. This study employed a Barro (1990)-type augmented model to analyze the nexus among corruption, institutional quality, and economic growth in Nigeria, using annual time series data spanning the period 1996-2019. Within the analytical framework of the Johansen cointegration technique, the Error Correction Mechanism (ECM), and Granger causality tests, the findings revealed a long-run relationship between economic growth, corruption, and the selected measures of institutional quality. The long-run results suggest that all measures of institutional quality except voice & accountability and regulatory quality are positively disposed to economic growth. The short-run estimates, moreover, reconcile the divergent "sand the wheels" and "grease the wheels" views of corruption's effect on growth: regulatory quality and the rule of law indicated a negative influence on economic growth in Nigeria, whereas government effectiveness and voice & accountability indicated a positive influence. The Granger causality tests suggested one-way causality between GDP and corruption, and between corruption and institutional quality. The policy implications point to checking corruption and streamlining the institutional quality framework for better and sustained economic development.
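
For readers who wish to reproduce this kind of analysis, the following sketch shows the corresponding statsmodels workflow. The file name and column names are hypothetical, and the lag orders, deterministic terms, and cointegrating rank would have to be selected as in the study.

import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical annual dataset for 1996-2019: log real GDP, a corruption
# index, and selected institutional quality measures.
df = pd.read_csv("nigeria_macro.csv",
                 usecols=["lgdp", "corruption", "rule_of_law",
                          "gov_effectiveness", "voice_accountability"])

# Johansen test for the number of cointegrating relationships
jres = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)   # compare against critical values in jres.cvt

# Vector error-correction model once cointegration is established
vecm = VECM(df, k_ar_diff=1, coint_rank=1).fit()
print(vecm.summary())

# Pairwise Granger causality: does corruption Granger-cause log GDP?
# (the test asks whether the second column Granger-causes the first)
grangercausalitytests(df[["lgdp", "corruption"]], maxlag=2)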

Keywords: institutional quality, corruption, economic growth, public policy

Procedia PDF Downloads 141
631 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel

Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi

Abstract:

The yield point represents the upper limit of the forces that can be applied to a specimen without causing permanent deformation. After yielding, the behavior of the specimen changes abruptly, with the possibility of cracking or buckling, so the accumulation of damage and the type of fracture depend on this condition. Because it is difficult to accurately detect yielding at the several stress concentration points in structural steel specimens, this research develops a convenient thermography-based (temperature-based) technique for precisely detecting yield point initiation during tensile tests. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, with deformation measured by several strain gauges and the surface temperature monitored by a thermography camera. The yield point of the specimens was estimated from the temperature dip that occurs, due to the thermoelastic effect, at the onset of plastic deformation. The scatter of the data was checked by a repeatability analysis. The effects of ambient temperature variation and light sources were checked by carrying out tests both in the daytime and at midnight and by calculating the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera; the results show that the measurement is independent of the testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed in Abaqus/Standard to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the thermographic technique.
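
A minimal sketch of the dip-detection step, assuming the yield point is taken as the minimum of a smoothed temperature trace and using one common variance-based SNR convention; the paper's exact definitions are not given in the abstract.

import numpy as np

def _smooth(x, w=15):
    """Centered moving average with edge padding."""
    pad = np.pad(np.asarray(x, float), w // 2, mode="edge")
    return np.convolve(pad, np.ones(w) / w, mode="valid")

def snr_db(trace):
    """SNR (dB) of a thermography trace: smoothed component treated as
    signal, residual treated as noise -- an assumed convention."""
    trace = np.asarray(trace, float)
    smooth = _smooth(trace)
    return 10.0 * np.log10(np.var(smooth) / np.var(trace - smooth))

def yield_point_index(temperature):
    # The surface cools thermoelastically during elastic loading and begins
    # to heat at the onset of plastic deformation, so the minimum of the
    # smoothed trace marks the estimated yield point.
    return int(np.argmin(_smooth(temperature)))

# synthetic trace: elastic cooling up to 60% of the test, then plastic heating
t = np.linspace(0.0, 1.0, 400)
temp = (23.0 - 0.8 * np.minimum(t, 0.6) + 2.5 * np.maximum(t - 0.6, 0.0)
        + 0.02 * np.random.default_rng(1).standard_normal(400))
print(f"estimated yield index: {yield_point_index(temp)}, SNR: {snr_db(temp):.1f} dB")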

Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point

Procedia PDF Downloads 87