Search results for: X-Ray intensity measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1598

128 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm

Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee

Abstract:

Model updating methods have received increasing attention in the damage detection of structures based on measured modal parameters. A probability-based damage detection (PBDD) procedure built on model updating is therefore presented in this paper, in which a one-stage, model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that accounts for the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multi-agent algorithm inspired by the movement of ideal gas molecules: the molecules disperse rapidly in different directions and cover the entire available space, owing to their high speed and to collisions between them and with the surrounding barriers. In the IGMM algorithm, the initial population of gas molecules is randomly generated, and the governing equations for the molecular velocities and the collisions between molecules are used to search for the optimal solution. In this paper, an enhanced version of IGMM, which removes unchanged variables after a specified number of iterations, is developed. The proposed method is applied to two numerical examples in the field of structural damage detection. The results show that the proposed method performs well and is competitive in the PBDD of structures.
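
The abstract does not give the IGMM governing equations, so the following is only a minimal sketch of a population-based search loop in their spirit: molecules are moved by illustrative velocity/"collision" updates toward the current best solution, and, as in the enhanced version, variables that stop changing for a specified number of iterations are frozen. The quadratic objective, the constants and the stagnation test are all hypothetical placeholders for the real damage-identification misfit.

```python
import numpy as np

def objective(x):
    # Placeholder misfit; in model updating this would compare measured and
    # FE-predicted modal parameters (hypothetical quadratic used here).
    return np.sum((x - 0.3) ** 2)

def eigmm_sketch(n_mol=30, n_dim=5, iters=200, freeze_after=20, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_mol, n_dim))   # initial gas molecules
    vel = rng.normal(0.0, 0.1, (n_mol, n_dim))    # initial molecular velocities
    unchanged = np.zeros(n_dim, dtype=int)        # stagnation counter per variable
    active = np.ones(n_dim, dtype=bool)           # enhanced step: variables still updated
    best = pos[np.argmin([objective(p) for p in pos])].copy()

    for _ in range(iters):
        prev_best = best.copy()
        for i in range(n_mol):
            # "Collision" with the current best molecule drives the velocity update
            vel[i, active] = 0.7 * vel[i, active] + rng.random() * (best[active] - pos[i, active])
            pos[i, active] += vel[i, active]
            pos[i] = np.clip(pos[i], 0.0, 1.0)    # surrounding barriers
            if objective(pos[i]) < objective(best):
                best = pos[i].copy()
        # Enhanced IGMM idea: freeze variables that have not changed for a while
        unchanged = np.where(np.abs(best - prev_best) < tol, unchanged + 1, 0)
        active = unchanged < freeze_after
    return best, objective(best)

print(eigmm_sketch())
```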

Keywords: Enhanced ideal gas molecular movement, ideal gas molecular movement, model updating method, probability-based damage detection, uncertainty quantification.

PDF Downloads: 1077
127 Preparation and Characterization of CuFe2O4/TiO2 Photocatalyst for the Conversion of CO2 into Methanol under Visible Light

Authors: Md. Maksudur Rahman Khan, M. Rahim Uddin, Hamidah Abdullah, Kaykobad Md. Rezaul Karim, Abu Yousuf, Chin Kui Cheng, Huei Ruey Ong

Abstract:

A systematic study was conducted to explore the photocatalytic reduction of carbon dioxide (CO2) into methanol on a TiO2-loaded copper ferrite (CuFe2O4) photocatalyst under visible light irradiation. The phases and crystallite size of the photocatalysts were characterized by X-ray diffraction (XRD), which confirmed the formation of spinel-type tetragonal CuFe2O4 phases along with a predominantly anatase phase of TiO2 in the CuFe2O4/TiO2 hetero-structure. The UV-Vis absorption spectrum suggested the formation of a hetero-junction with a band gap lower than that of TiO2. Photoluminescence (PL) spectroscopy was used to study the electron–hole (e-/h+) recombination process, and the PL spectra confirmed that the recombination of electron–hole pairs is slowed down in the CuFe2O4/TiO2 hetero-structure. The photocatalytic performance of CuFe2O4/TiO2 was evaluated based on the methanol yield for varying amounts of TiO2 over CuFe2O4 (0.5:1, 1:1, and 2:1) and varying light intensity. A mechanism for the photocatalysis was proposed based on the fact that the predominant species of CO2 in the aqueous phase were dissolved CO2 and HCO3- at pH ~5.9. It was evident that CuFe2O4 could harvest electrons under visible light irradiation, which could then be injected into the conduction band of TiO2, increasing the electron lifetime and facilitating the reduction of CO2 to methanol. The developed catalyst showed good recyclability up to four cycles, with an activity loss of ~25%. Methanol was observed as the main product over CuFe2O4, but loading with TiO2 remarkably increased the methanol yield: the yield over CuFe2O4/TiO2 was about three times higher (651 μmol/gcat L) than that of the CuFe2O4 photocatalyst. This occurs because the energy of the band-excited electrons lies above the redox potential of the CO2/CH3OH couple.

Keywords: Photocatalysis, CuFe2O4/TiO2, band-gap energy, methanol.

PDF Downloads: 2140
126 The Clinical Use of Ahmed Valve Implant as an Aqueous Shunt for Control of Uveitic Glaucoma in Dogs

Authors: Khaled M. Ali, M. A. Abdel-Hamid, Ayman A. Mostafa

Abstract:

Objective: The safety and efficacy of Ahmed glaucoma valve implantation for the management of uveitis-induced glaucoma were evaluated in five dogs with uncontrollable glaucoma. Materials and Methods: The Ahmed Glaucoma Valve (AGV®; New World Medical, Rancho Cucamonga, CA, USA) is a flow-restrictive, non-obstructive, self-regulating valve system. Preoperative ocular evaluation included direct ophthalmoscopy and measurement of the intraocular pressure (IOP). The implant was examined and primed prior to implantation. The selected site of valve implantation was the superior quadrant between the superior and lateral rectus muscles. A fornix-based incision was made through the conjunctiva and Tenon’s capsule, and a pocket was formed by blunt dissection of Tenon’s capsule from the episclera. The body of the implant was inserted into the pocket with the leading edge of the device around 8-10 mm from the limbus. Results: No post-operative complications were detected in the operated eyes except a persistent corneal edema that occupied the upper half of the cornea in one case. Hyphaema was very mild, seen in only two cases, and resolved within two days after surgery. Endoscopic evaluation of the operated eyes revealed a normal ocular fundus with clearly visible optic papilla, tapetum and retinal blood vessels; no evidence of hemorrhage, infection, adhesions or retinal abnormalities was detected. Conclusion: The Ahmed glaucoma valve is a safe and effective implant for the treatment of uveitic glaucoma in dogs.

Keywords: Ahmed valve, endoscopy, glaucoma, ocular fundus.

PDF Downloads: 2137
125 Co-Administration Effects of Conjugated Linoleic Acid and L-Carnitine on Weight Gain and Biochemical Profile in Diet Induced Obese Rats

Authors: Maryam Nazari, Majid Karandish, Alihossein Saberi

Abstract:

Obesity, as a global health challenge, motivates pharmaceutical industries to produce anti-obesity drugs; however, the effectiveness of these agents remains unclear. Given the popularity of dietary supplements, the aim of this study was to investigate the effects of conjugated linoleic acid (CLA) and L-carnitine (LC) on serum glucose, triglyceride, cholesterol and weight changes in diet-induced obese rats. Forty-eight male Wistar rats were randomly divided into two groups: normal-fat diet (n=8) and high-fat diet (HFD) (n=32). After eight weeks, the second group, which was maintained on the HFD until the end of the study, was subdivided into four categories: a) 500 mg corn oil (control group), b) 500 mg CLA, c) 200 mg LC, d) 500 mg CLA + 200 mg LC. All doses were given per kg body weight and administered by oral gavage for four weeks. Body weights were measured and recorded weekly by means of a digital scale. At the end of the study, blood samples were collected for the measurement of biochemical markers. SPSS version 16 was used for statistical analysis. At the end of the 8th week, a significant difference in weight was observed between the HFD and NFD groups. After 12 weeks, LC significantly reduced weight gain by 4.2%. The trend of weight gain in the CLA and CLA+LC groups was insignificantly decelerated. CLA+LC reduced the triglyceride level significantly, but only CLA had a significant influence on total cholesterol and an insignificant decreasing effect on FBS. Our results showed that an obesogenic diet led to obesity and dyslipidemia in a relatively short time, which can be modified to some extent by LC and CLA.

Keywords: Conjugated linoleic acid, high fat diet, L-carnitine, obesity.

PDF Downloads: 943
124 Digital Automatic Gain Control Integrated on WLAN Platform

Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel

Abstract:

In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compliant with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control of those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC automatically controls the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is carried out in a few steps: measuring the power levels of baseband samples of the RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting a gain factor from the accumulation, and applying the adjusted gain factor to the baseband values. Based on measurements of the RSSI dependence on input power, we concluded that this digital AGC can be implemented using a simple linearization of the RSSI. This solution is simple yet effective and reduces the complexity and power consumption of the DAGC. The DAGC was implemented and tested both in an FPGA and in an ASIC as part of our WLAN baseband processor. Finally, we integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips designed by us.
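
A minimal sketch of the four DAGC steps listed above (measure block power, accumulate the error against the set point, adjust the gain, apply it to the baseband samples). The set point, loop gain and dB mapping are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dagc_step(samples, gain_db, set_point_db=-10.0, loop_gain=0.1):
    """One DAGC iteration: measure the power of a block of baseband samples,
    accumulate the error against the desired set point, and return the
    adjusted gain. All constants are illustrative."""
    power_db = 10.0 * np.log10(np.mean(np.abs(samples) ** 2) + 1e-12)
    error_db = set_point_db - power_db          # difference to the desired level
    gain_db += loop_gain * error_db             # accumulate / integrate the error
    return gain_db

# Toy usage: a block of complex baseband samples that is far above the set point
rng = np.random.default_rng(1)
x = 3.0 * (rng.normal(size=256) + 1j * rng.normal(size=256))
g = 0.0
for _ in range(50):
    y = 10 ** (g / 20.0) * x                    # apply the current gain to the baseband values
    g = dagc_step(y, g)
print(f"converged digital gain: {g:.2f} dB")
```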

Keywords: WLAN, AGC, RSSI, baseband processor

PDF Downloads: 3949
123 Undergraduates' Learning Preferences: A Comparison of Science, Technology and Social Science Academic Disciplines in Relation to Teaching Designs and Strategies

Authors: Salina Budin, Shaira Ismail

Abstract:

Students learn effectively in a learning environment with a teaching approach that matches their learning preferences. The main objective of the study is to examine the learning preferences of students in the Science and Technology (S&T) and Social Science (SS) fields of study at the Universiti Teknologi Mara (UiTM), Pulau Pinang. The measurement instrument is based on the Dunn and Dunn Learning Styles model, which measures five elements of learning styles: environmental, sociological, emotional, physiological and psychological. Questionnaires were distributed among undergraduates in the Faculty of Mechanical Engineering and the Faculty of Business Management. The respondents comprised 131 diploma students of the Faculty of Mechanical Engineering and 111 degree students of the Faculty of Business Management. The results indicate that S&T and SS students share similar learning preferences regarding the environmental aspect, emotional preferences, motivational level, learning responsibility, persistence in learning and learning structure. Most of the S&T students were found to be analytical learners, whereas the majority of SS students were global learners. Both S&T and SS students were found to be visual learners, preferred active mobility in a relaxed and enjoyable mode with some light refreshments during the learning process, and exhibited reflective characteristics in learning. The S&T students were considered left-brain dominant, whereas the SS students were right-brain dominant. The findings highlight that both categories of students exhibited similar learning preferences except for psychological preferences.

Keywords: Learning preferences, Dunn and Dunn learning style, teaching approach, science and technology, social science.

PDF Downloads: 1388
122 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector

Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu

Abstract:

In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured into a muon atomic orbit around a nucleus. Because muonic X-rays have a higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by the material. Thus, estimating the two-dimensional (2D) elemental distribution of a sample becomes possible using an X-ray imaging detector. In this work, we report a non-destructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex. The irradiated target consisted of a polypropylene material, and a double-sided silicon strip detector, which was developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of a nondestructive elemental imaging method based on muonic X-ray measurement. To obtain a higher position resolution for imaging smaller targets, a new detector system will be developed to improve the statistical analysis in further research.

Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis

PDF Downloads: 1260
121 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery

Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko

Abstract:

In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analyzed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure and the properties of the blood and the elastic boundary are based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region is modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realized via a two-way coupling between the blood flow modelled by LES and the deforming vessel. The flow pressure and the wall motion are exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method, and the boundary condition of the current time step depends on the previous solution. The fluctuation of the velocity in the post-stenotic region is analyzed in the study; the axial velocity at the normalized position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary is also examined; in particular, the wall displacements at systole and diastole are compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase.

Keywords: Large Eddy Simulation, Fluid Structural Interaction, Constricted Artery, Computational Fluid Dynamics.

PDF Downloads: 2344
120 Influence of Thermo-fluid-dynamic Parameters on Fluidics in an Expanding Thermal Plasma Deposition Chamber

Authors: G. Zuppardi, F. Romano

Abstract:

The technology of thin film deposition is of interest in many engineering fields, from electronic manufacturing to corrosion-protective coating. A typical deposition process, like that developed at the University of Eindhoven, deposits a thin, amorphous film of C:H or of Si:H on the substrate using the Expanding Thermal arc Plasma technique. In this paper a computing procedure is proposed to simulate the flow field in a deposition chamber similar to that at the University of Eindhoven, and a sensitivity analysis is carried out in terms of precursor mass flow rate, electrical power supplied to the torch, and the fluid-dynamic characteristics of the plasma jet, using different nozzles. For this purpose a deposition chamber similar in shape, dimensions and operating parameters to the above-mentioned chamber is considered. Furthermore, a method is proposed for a very preliminary evaluation of the film thickness distribution on the substrate. The computing procedure relies on two codes working in tandem; the output from the first code is the input to the second one. The first code simulates the flow field in the torch, where Argon is ionized according to Saha's equation, and in the nozzle. The second code simulates the flow field in the chamber; due to the high rarefaction level, this is a (commercial) Direct Simulation Monte Carlo code. The gas is a mixture of 21 chemical species, and 24 chemical reactions involving Argon plasma and Acetylene are implemented in both codes. The effects of the above-mentioned operating parameters are evaluated and discussed by means of 2-D maps and profiles of some important thermo-fluid-dynamic parameters, such as Mach number, velocity and temperature. The intensity, position and extension of the shock wave are evaluated, and the influence of the above-mentioned test conditions on the film thickness and uniformity of distribution is also assessed.

Keywords: Deposition chamber, Direct Simulation Monte Carlo method (DSMC), Plasma chemistry, Rarefied gas dynamics.

PDF Downloads: 1697
119 Convection through Light Weight Timber Constructions with Mineral Wool

Authors: J. Schmidt, O. Kornadt

Abstract:

The major part of a lightweight timber construction consists of insulation. Mineral wool is the most commonly used insulation due to its cost efficiency and easy handling. The fiber orientation and porosity of this insulation material enable flow-through, and its air flow resistance is low. If leakage occurs in the insulated bay section, the convective flow may cause energy losses and infiltration of the exterior wall with moisture and particles. In particular, the infiltrated moisture may lead to thermal bridges and to the growth of health-endangering mould and mildew. In order to prevent this problem, different numerical calculation models have been developed; all models developed so far have potential for completion. The implementation of the flow-through properties of mineral wool insulation may help to improve the existing models. Assuming that the real pressure difference between the interior and exterior surfaces is larger than the pressure difference prescribed in the standard test procedure for mineral wool, ISO 9053 / EN 29053, measurements were performed using the measurement setup for research on convective moisture transfer (MSRCMT). These measurements show that structural inhomogeneities of mineral wool affect the permeability only at higher pressure differences, as applied in MSRCMT. Additional microscopic investigations show that the location of a leak within the construction has a crucial influence on the air flow-through and the infiltration rate. The results clearly indicate that the empirical values for the acoustic resistance of mineral wool should not be used for the calculation of convective transfer mechanisms.

Keywords: Convection, convective transfer, infiltration, mineral wool, permeability, resistance, leakage.

PDF Downloads: 2142
118 Low Cost IMU / GPS Integration Using Kalman Filtering for Land Vehicle Navigation Application

Authors: Othman Maklouf, Abdurazag Ghila, Ahmed Abdulla, Ameer Yousef

Abstract:

Land vehicle navigation system technology is a subject of great interest today. The Global Positioning System (GPS) is a common choice for positioning in such systems. However, GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial navigation is the use of inertial sensors to determine the position and orientation of a vehicle; as such, it has unbounded error growth, since the error accumulates at each step. Thus, in order to contain these errors, some form of external aiding is required. The availability of low-cost Micro-Electro-Mechanical-System (MEMS) inertial sensors now makes it feasible to develop an Inertial Navigation System (INS) using an inertial measurement unit (IMU), in conjunction with GPS, to fulfill the demands of such systems. Typically, IMUs are very expensive systems; however, this INS uses low-cost components. Unfortunately, with low cost also comes low performance, which is the main reason for the inclusion of GPS and Kalman filtering in the system. The aim of this paper is to develop a GPS/MEMS INS integrated system which is able to provide a navigation solution with accuracy levels appropriate for land vehicle navigation. The primary pieces of equipment used were a MEMS-based Crista IMU (from Cloud Cap Technology Inc.) and a Garmin GPS 18 PC (which is both a receiver and an antenna). The integration of GPS with the INS is implemented using a Kalman filter in loosely coupled mode; in this integration mode the INS error states, together with any navigation state (position, velocity, and attitude) and other unknown parameters of interest, are estimated using GPS measurements. All important equations regarding navigation are presented along with a discussion.
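
A minimal sketch of a loosely coupled predict/update cycle: a toy 1-D position-velocity state is propagated with IMU acceleration at a high rate and corrected with slower GPS position fixes. The real system estimates a full INS error-state vector; the matrices, rates and noise values below are illustrative assumptions only.

```python
import numpy as np

dt = 0.01                                   # IMU rate (100 Hz, assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition for [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])         # acceleration input matrix
H = np.array([[1.0, 0.0]])                  # GPS observes position only
Q = 1e-3 * np.eye(2)                        # process noise (IMU errors, assumed)
R = np.array([[4.0]])                       # GPS position noise, m^2 (assumed)

x = np.zeros((2, 1))                        # state estimate
P = np.eye(2)                               # estimate covariance

def predict(x, P, accel):
    x = F @ x + B * accel                   # propagate with the IMU measurement
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, gps_pos):
    y = np.array([[gps_pos]]) - H @ x       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Propagate at the IMU rate; correct whenever a (slower) GPS fix arrives.
for k in range(1000):
    x, P = predict(x, P, accel=0.2)
    if k % 100 == 0:                        # 1 Hz GPS, assumed
        x, P = update(x, P, gps_pos=0.5 * 0.2 * (k * dt) ** 2)
print(x.ravel())
```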

Keywords: GPS, IMU, Kalman Filter.

PDF Downloads: 7533
117 Air Dispersion Model for Prediction Fugitive Landfill Gaseous Emission Impact in Ambient Atmosphere

Authors: Moustafa Osman Mohammed

Abstract:

This paper explores the formation of HCl aerosol in the atmospheric boundary layer and encourages the uptake of environmental modeling systems (EMSs) as a practical evaluation of gaseous emissions (“framework measures”) from small and medium-sized enterprises (SMEs). The conceptual model predicts greenhouse gas emissions to ecological receptor points beyond landfill site operations. It focuses on incorporating traditional knowledge into baseline information for both the measurement data and the mathematical results, regarding the parameters that influence the model's variable inputs. The paper simplifies the parameters of the aerosol processes on the basis of more complex aerosol process computations; the simple model can be implemented in both Gaussian and Eulerian rural dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds is taken into account through a photochemical formulation, with exposure effects based on HCl concentrations as the starting point of the risk assessment. The discussion sets out distinct aspects of sustainability, reflecting inputs, outputs, and modes of impact on the environment. Thereby, the models incorporate abiotic and biotic species to broaden the scope of integration for both impact quantification and risk assessment. These environmental obligations ultimately suggest either a recommendation or a decision on what legislation should require as mitigation measures for landfill gas (LFG).
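
Since the abstract notes that the simplified aerosol parameters can be implemented in a Gaussian rural dispersion model, a minimal ground-reflected Gaussian plume sketch is given below. The Briggs rural class-D dispersion coefficients, emission rate and geometry are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def gaussian_plume(Q, u, x, y, z, H):
    """Ground-reflected Gaussian plume concentration (g/m^3).
    Q: emission rate (g/s), u: wind speed (m/s), H: effective release height (m).
    Dispersion coefficients use Briggs rural class-D fits (assumed stability)."""
    sigma_y = 0.08 * x * (1 + 0.0001 * x) ** -0.5
    sigma_z = 0.06 * x * (1 + 0.0015 * x) ** -0.5
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # image source = ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative landfill-gas scenario: 0.5 g/s of HCl-laden gas, 3 m/s wind,
# receptor 500 m downwind on the plume centreline at breathing height.
print(gaussian_plume(Q=0.5, u=3.0, x=500.0, y=0.0, z=1.5, H=10.0))
```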

Keywords: Air dispersion model, landfill management, spatial analysis, environmental impact and risk assessment.

PDF Downloads: 1558
116 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment

Authors: Tasneem Halawani, Yamen Khateeb

Abstract:

With customers’ ever-increasing expectations for fast services provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogenic digital environment and widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four-year related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating/automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogenic digital environment and widely spread customer base is challenging, yet achievable without compromising the service quality and customers’ added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Keywords: Automation, customer value, heterogenic, integration, IT services, optimization, processes.

PDF Downloads: 667
115 Developing an Instrument to Measure Teachers’ Self-Efficacy of Teaching Innovation Skills

Authors: Huda S. Al-Azmi

Abstract:

There is a growing consensus that the adoption of teachers’ self-efficacy measurement tools helps to assess teachers’ abilities in specific areas in order to improve their skills. As a result, different instruments to assess teachers’ ability have been developed by academics and practitioners. However, many of these instruments focus either on general teaching skills or, on the other hand, are very specific to one subject; as such, they do not offer a tool to measure the ability of teachers in teaching 21st century skills such as innovation skills. Teaching innovation skills helps to prepare students for lives and careers in the 21st century. The purpose of this study is to develop an instrument measuring teachers’ self-efficacy in teaching innovation skills related to the classroom context, and to evaluate teachers’ beliefs regarding their ability to teach innovation skills. To reach this goal, the 16-item instrument measures four dimensions of innovation skills: creativity, critical thinking, communication, and collaboration. 211 secondary-school teachers filled out the survey so that the quality of the instrument could be analyzed quantitatively. The instrument’s reliability and item analysis were assessed using jMetrik. The results show that the mean self-efficacy ranged from 3 to 3.6, without extremely high or low self-efficacy scores. The discrimination analysis revealed that one item had a negative correlation with the total and three items had low correlations with the total. The reliabilities of the items ranged from 0.64 to 0.69, and the instrument needs a couple of revisions before practical use. The study concludes that one item should be discarded and five items revised to increase the quality of the instrument for future work.
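
A minimal sketch of the two item-analysis quantities referred to above, reliability (Cronbach's alpha) and item discrimination (corrected item-total correlation), computed on a hypothetical response matrix. The study used jMetrik; this is not its output, only the standard formulas.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def corrected_item_total(X):
    """Discrimination: correlation of each item with the total of the other items."""
    X = np.asarray(X, dtype=float)
    return np.array([np.corrcoef(X[:, j], X.sum(axis=1) - X[:, j])[0, 1]
                     for j in range(X.shape[1])])

# Hypothetical 5-point Likert responses (the study had 211 teachers x 16 items;
# a small random matrix is used here purely for illustration).
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(50, 16))
print(round(cronbach_alpha(X), 3))
print(corrected_item_total(X).round(2))
```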

Keywords: Critical thinking, collaboration, innovation skills, self-efficacy.

PDF Downloads: 944
114 A Comparative Study of Single- and Multi-Walled Carbon Nanotube Incorporation to Indium Tin Oxide Electrodes for Solar Cells

Authors: G. Gokceli, O. Eksik, E. Ozkan Zayim, N. Karatepe

Abstract:

Alternative electrode materials for optoelectronic devices have been widely investigated in recent years. Since indium tin oxide (ITO) is the most widely used transparent conductive electrode, producing ITO films by simple and cost-effective solution-based techniques with enhanced optical and electrical properties is of great importance. In this study, single- and multi-walled carbon nanotubes (SWCNTs and MWCNTs) were incorporated into the ITO structure to increase electrical conductivity, mechanical strength, and chemical stability. Carbon nanotubes (CNTs) were first functionalized by acid treatment (HNO3:H2SO4), and the thermal resistance of the CNTs after functionalization was determined by thermogravimetric analysis (TGA). Thin films were then prepared by the spin coating technique and characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), a four-point probe measurement system and UV-Vis spectrophotometry. The effects of the process parameters were compared for ITO, MWCNT-ITO, and SWCNT-ITO films. Two factors were considered: CNT concentration and annealing temperature. The UV-Vis measurements demonstrated that the transmittance of the ITO films was 83.58% at 550 nm, which decreased depending on the concentration of the CNT dopant. On the other hand, both CNT dopants provided an enhancement in the crystalline structure and electrical conductivity. Due to the compatible diameter and better dispersibility of SWCNTs in the ITO solution, the best result in terms of electrical conductivity was obtained with SWCNT-ITO films at a 0.1 g/L SWCNT dopant concentration and heat treatment at 550 °C for 1 hour.

Keywords: CNT incorporation, ITO electrode, spin coating, thin film.

PDF Downloads: 826
113 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies a perceptual weighting to the wavelet coefficients prior to the SPIHT encoding algorithm, in order to reach a targeted bit rate with improved perceptual quality, given a fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception; the coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce high frequencies in peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique, and the experimental results show that our coder achieves very good performance in terms of quality measurement.
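
A minimal sketch of weighting wavelet detail subbands before embedded coding, using PyWavelets with the 9/7 biorthogonal filter. The paper derives its weights from foveation, luminance/contrast masking and the CSF; the simple exponential fall-off from the fixation point used here is only an illustrative stand-in.

```python
import numpy as np
import pywt

def foveation_weights(shape, fixation, falloff=0.004):
    """Illustrative foveation mask: weights decay with distance from the
    fixation point (a stand-in for the paper's CSF/masking-derived weights)."""
    rows, cols = np.indices(shape)
    d = np.hypot(rows - fixation[0], cols - fixation[1])
    return np.exp(-falloff * d)

def perceptually_weight(image, fixation, wavelet="bior4.4", level=3):
    """Weight the detail subbands of a 2-D DWT before embedded (SPIHT-style) coding."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [coeffs[0]]                        # approximation band kept unweighted
    for detail in coeffs[1:]:
        weighted = []
        for band in detail:
            # Scale the fixation point to this subband's resolution
            fix = (fixation[0] * band.shape[0] / image.shape[0],
                   fixation[1] * band.shape[1] / image.shape[1])
            weighted.append(band * foveation_weights(band.shape, fix))
        out.append(tuple(weighted))
    return out                               # weighted coefficients would feed the encoder

img = np.random.rand(256, 256)
coeffs = perceptually_weight(img, fixation=(128, 128))
print(len(coeffs), coeffs[1][0].shape)
```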

Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.

PDF Downloads: 1795
112 Developing Well-Being Indicators and Measurement Methods as Illustrated by Projects Aimed at Preventing Obesity in Children

Authors: E. Grochowska-Niedworok, K. Brukało, M. Hadasik, M. Kardas

Abstract:

The consumption of vegetables by school children and adolescents is essential for their normal growth, development and health, yet only a minority of the world's population consumes the recommended amount of these products. The aim of the study was to evaluate the preferences and frequency of vegetable consumption by school children and adolescents. It was assumed that effectively implemented nutrition education programs should increase the frequency of vegetable consumption among the recipients. The study covered 514 students, aged 9 to 22 years, from five schools in the Opole Voivodeship. The research tool was an author-designed questionnaire consisting of closed questions on the frequency of vegetable consumption and the use of 10 methods of preparing them. Preferences and frequencies are presented as percentages, while correlations were estimated on the basis of Cramér's V and gamma coefficients. In each of the examined age groups, the relationship between sex and vegetable consumption was determined (Cramér's V values were 0.06 to 0.38), as was the relationship with the culinary processing methods used (Cramér's V was 0.08 to 0.34). For both sexes, the relationship between age and the frequency of vegetable consumption was shown (gamma values ranged from ~0.00 to 0.39), as well as with the different cooking methods (gamma values were 0.01 to 0.22). The most important determinants of nutritional choices are the taste and availability of products; the fact that they have a positive effect on health comes only in third position. As shown, obesity prevention programs should not only address nutrition education but also teach about new flavors and increase the availability of healthy foods. In addition, the frequency of vegetable consumption can be a good indicator of the healthy behaviors of children and adolescents.
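
A minimal sketch of the Cramér's V statistic used above, computed from a chi-square test of a contingency table; the table values are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for a contingency table (e.g. sex x consumption frequency)."""
    table = np.asarray(table)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))

# Hypothetical 2 x 3 table: sex (rows) vs. vegetable consumption frequency
# (daily / weekly / rarely); counts are illustrative only.
table = [[40, 80, 30],
         [55, 70, 25]]
print(round(cramers_v(table), 2))
```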

Keywords: Children and adolescents, frequency, welfare rate, vegetables.

PDF Downloads: 1016
111 The Effect of Socio-Affective Variables in the Relationship between Organizational Trust and Employee Turnover Intention

Authors: Paula A. Cruise, Carvell McLeary

Abstract:

Employee turnover leads to lowered productivity, decreased morale and work quality, and the psychological effects associated with employee separation and replacement. Yet it remains unknown why talented employees willingly withdraw from organizations. This uncertainty is worsened as studies: a) prioritize organizational over individual predictors, resulting in restriction of range in turnover measurement; b) focus on actual rather than intended turnover, thereby limiting conceptual understanding of the turnover construct and its relationship with other variables; and c) produce inconsistent findings across cultures, contexts and industries despite a clear need for a unified perspective. The current study addressed these gaps by adopting the theory of planned behavior (TPB) framework to examine socio-cognitive factors in organizational trust and individual turnover intentions among bankers and energy employees in Jamaica. In a comparative study of n=369 [n_bank = 264, male = 57 (22.73%); n_energy = 105, male = 45 (42.86%)], it was hypothesized that organizational trust was a predictor of employee turnover intention and that the effect of individual, group, cognitive and socio-affective variables varied across industry. Findings from structural equation modelling confirmed the hypothesis, with a model of both cognitive and socio-affective variables being a better fit [CMIN (χ2) = 800.067, df = 364, p ≤ .000; CFI = 0.950; RMSEA = 0.057 with 90% C.I. (0.052 - 0.062); PCLOSE = 0.016; PNFI = 0.818] in predicting turnover intention. The findings are discussed in relation to the socio-cognitive components of trust models and the prediction of negative employee behaviors across cultures and industries.

Keywords: Context-specific organizational trust, cross-cultural psychology, theory of planned behavior, employee turnover intention.

PDF Downloads: 1099
110 User-Perceived Quality Factors for Certification Model of Web-Based System

Authors: Jamaiah H. Yahaya, Aziz Deraman, Abdul Razak Hamdan, Yusmadi Yah Jusoh

Abstract:

One of the most essential issues for software products is maintaining their relevancy to the dynamics of users’ requirements and expectations. Many studies have been carried out on the quality of software products to address this problem, and previous software quality assessment models and metrics have been introduced, each with strengths and limitations. In order to enhance assurance and confidence in software products, certification models have been introduced and developed. From our previous experience in certification exercises and case studies carried out in collaboration with several agencies in Malaysia, the requirements for a user-based software certification approach were identified and found to be in demand. The emergence of social network applications, new development approaches such as agile methods, and the wide variety of software on the market have led to the domination of users over the software. As software becomes more accessible to the public through internet applications, users are becoming more critical of the quality of the services provided by the software. There are several categories of users in web-based systems, with different interests and perspectives. The classifications and metrics were identified through a brainstorming approach involving researchers, users and experts in this area. The new paradigm in software quality assessment is the main focus of our research. This paper discusses the classification of users in web-based software system assessment and the associated factors and metrics for quality measurement. The quality model is derived based on the IEEE structure and the FCM model. The developments are beneficial and valuable for overcoming the constraints and improving the application of software certification models in the future.

Keywords: Software certification model, user centric approach, software quality factors, metrics and measurements, web-based system.

PDF Downloads: 2148
109 Improvement of Ventilation and Thermal Comfort Using the Atrium Design for Traditional Folk Houses-Fujian Earthen Building

Authors: Ying-Ming Su

Abstract:

The Fujian earthen building, known as a classic ecological building, was inscribed on the World Heritage List (UNESCO) in 2008. Its design strategy can be applied to modern architectural planning and design. This study chose two earthen buildings in Fujian with different atrium forms (round atrium: Er-Yi Building; double round atrium: Zhen-Chen Building) to compare their ventilation effects. We adopt field measurements and computational fluid dynamics (CFD) simulations of temperature, humidity, and the wind environment to identify the relationship between the external environment, the atrium and comfort, and to confirm the influence of the atrium H/W (height/width) ratio. Results indicate that, through the atrium convection effect, natural wind is guided to each surrounding space and indoor comfort is maintained. The smaller the H/W ratio of an atrium, the greater the wind speed generated within the street valley; moreover, the wind speed is very close to the reference wind speed. The field measurements verify that the H/W value has a great influence on solar radiation heat and sunshine shadows. The ventilation efficiency is: Er-Yi Building (H/W = 0.2778) > Zhen-Chen Building (H/W = 0.3670). Comparing cases with the same shape but different H/W, airflow revolves in the atriums through the different-sized patios and can be brought into each interior space. The atrium settings meet the need for building ventilation and can adjust the humidity and temperature within the buildings, creating a good ventilation effect.

Keywords: Traditional folk houses, Atrium, Earthen building, Ventilation, Building microclimate, PET.

PDF Downloads: 1433
108 Quality of Life Assessment across the Cancer Continuum: Understanding the Role of an Exercise Rehabilitation Programme

Authors: Bernat-Carles Serdà Ferrer, Arantza Del Valle Gómez

Abstract:

The Quality of Life (QoL) paradigm is multidimensional, dynamic and modular, and its definition differs across the cancer continuum. The challenge in the interpretation of QoL data in clinical research is that QoL is influenced by psychological phenomena such as adaptation to illness. This research aims to obtain a valid and sensitive assessment of QoL change over the disease continuum, and to evaluate a rehabilitation programme aimed at reversing the decrease in QoL observed when patients return to daily living activities. The sample comprised 66 men. Patients were first assessed to establish a baseline (P1-diagnosis). This was followed by a post-test (P2-discharge) and a then-test measurement (P3-retrospective evaluation), and after returning home patients were randomized into experimental and control groups. The experimental group attended a rehabilitation programme over 24 weeks (P4). Results show that QoL decreased significantly from baseline to post-test. The recalibration then-test confirmed a low QoL in all periods evaluated. Significant differences between the experimental and control groups prove the positive effect of the Exercise Rehabilitation Programme (ERP) on QoL. Understanding the real dynamics of QoL over time would help to adapt rehabilitation programmes by improving sensitivity and efficacy, and would provide professionals with a more accurate perception of the impact of treatment and side effects on patients’ QoL. Our results underline the importance of changing the approach adopted by health professionals towards one of watchful waiting on patients’ QoL until their complete recovery in daily life.

Keywords: Prostate cancer, quality of life, rehabilitation programme, response shift.

PDF Downloads: 1142
107 Self-Healing Phenomenon Evaluation in Cementitious Matrix with Different Water/Cement Ratios and Crack Opening Age

Authors: V. G. Cappellesso, D. M. G. da Silva, J. A. Arndt, N. dos Santos Petry, A. B. Masuero, D. C. C. Dal Molin

Abstract:

Concrete elements are subject to cracking, and cracks can be an access point for deleterious agents that trigger pathological manifestations, reducing the service life of these structures. Finding ways to minimize or eliminate the effects of the penetration of these aggressive agents, such as sealing the cracks, is a way of contributing to the durability of these structures. The cementitious self-healing phenomenon can be classified into two different processes: autogenous self-healing, a natural process in which the cracks seal without the stimulation of external agents, that is, without different materials being added to the mixture; and autonomous self-healing, which depends on the insertion of a specific engineered material into the cement matrix in order to promote its recovery. This work aims to evaluate the autogenous self-healing of concretes produced with different water/cement ratios and exposed to wet/dry cycles, considering two crack-opening ages: 3 days and 28 days. The self-healing phenomenon was evaluated using two techniques: crack healing measurement using ultrasonic waves, and image analysis performed with an optical microscope. Both methods made it possible to observe the self-healing of the cracks. For young crack-opening ages and lower water/cement ratios, the self-healing capacity is higher when compared to advanced crack-opening ages and higher water/cement ratios. Regardless of the crack-opening age, these concretes were found to stabilize the self-healing processes after 80 to 90 days.

Keywords: Self-healing, autogenous, water/cement ratio, curing cycles, test methods.

PDF Downloads: 974
106 Long-term Monitor of Seawater by using TiO2:Ru Sensing Electrode for Hard Clam Cultivation

Authors: Jung-Chuan Chou, Cheng-Wei Chen

Abstract:

The hard clam (Meretrix lusoria) cultivation industry has developed vigorously in recent years in Taiwan, and seawater quality determines the cultivation environment. Variations in pH immediately affect the survival rate of M. lusoria. In order to monitor seawater quality, a solid-state sensing electrode of ruthenium-doped titanium dioxide (TiO2:Ru) was developed to measure the hydrogen ion concentration in different cultivation solutions. Because the TiO2:Ru sensing electrode has high chemical stability and superior sensing characteristics, it is applied as a pH sensor. The response voltages of the TiO2:Ru sensing electrode are read out by an instrumentation amplifier in different sample solutions. The mean sensitivity and linearity of the TiO2:Ru sensing electrode are 55.20 mV/pH and 0.999, respectively, from pH 1 to pH 13. We expect that the TiO2:Ru sensing electrode can be applied to real environmental measurements; therefore, we collected two sample solutions from different M. lusoria cultivation ponds in Yunlin, Taiwan. Both sample solutions were measured for 200 seconds after calibration with standard pH buffer solutions (pH 7, pH 8 and pH 9). The mean response voltages of sample 1 and sample 2 were -178.758 mV (standard deviation = 0.427 mV) and -180.206 mV (standard deviation = 0.399 mV), respectively. The response voltages of the two sample solutions correspond to values between pH 8 and pH 9, which fall in the weakly alkaline range suitable for M. lusoria growth. For long-term monitoring, the drifts in the cultivation solutions (sample 1 and sample 2) were 1.16 mV/hour and 1.03 mV/hour, respectively.
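
A minimal sketch of how a linear buffer calibration converts the reported response voltages to pH. Only the mean sensitivity (55.20 mV/pH, quoted as a magnitude) and the two sample voltages come from the abstract; the buffer readings and intercept below are assumed for illustration.

```python
import numpy as np

# Hypothetical buffer readings consistent with a ~55.2 mV/pH sensitivity
buffer_ph = np.array([7.0, 8.0, 9.0])
buffer_mv = np.array([-120.0, -175.2, -230.4])

slope, intercept = np.polyfit(buffer_ph, buffer_mv, 1)   # slope is negative; |slope| = sensitivity
print(f"sensitivity: {abs(slope):.2f} mV/pH")

def voltage_to_ph(v_mv):
    """Invert the linear calibration V = slope * pH + intercept."""
    return (v_mv - intercept) / slope

# Mean response voltages reported for the two pond samples
for v in (-178.758, -180.206):
    print(f"{v:.3f} mV -> pH {voltage_to_ph(v):.2f}")
```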

Keywords: Co-sputtering system, Hard clam (meretrix lusoria), Ruthenium-doped titanium dioxide, Solid-state sensing electrode.

PDF Downloads: 1643
105 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI), and AI concepts are applicable in Human Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used for sign language interpretation. Sign language is a visual communication tool used mostly by deaf communities and people with speech disorders, and communication barriers exist when these communities interact with others. This research aims to build a hand recognition system for interpretation between Lesotho’s Sesotho and English, helping to bridge the communication problems encountered by the mentioned communities. The system has several processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object; the proposed system uses Haar cascade detection with Canny pruning, which applies Canny edge detection, an optimal image processing algorithm for detecting the edges of an object, to skip regions unlikely to contain a hand. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.
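
A minimal sketch of Haar cascade detection with Canny pruning, written with OpenCV's Python bindings rather than the EmguCV (C#) wrapper used by the authors. The cascade file name is a hypothetical pre-trained hand classifier; OpenCV does not ship one by default.

```python
import cv2

# "hand_cascade.xml" is an assumed, separately trained Haar cascade for hands.
cascade = cv2.CascadeClassifier("hand_cascade.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # CASCADE_DO_CANNY_PRUNING skips regions with too few Canny edges,
    # which is the "Canny pruning" speed-up referred to in the abstract.
    hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     flags=cv2.CASCADE_DO_CANNY_PRUNING,
                                     minSize=(60, 60))
    for (x, y, w, h) in hands:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hand detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```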

Keywords: Canny pruning, hand recognition, machine learning, skin tracking.

PDF Downloads: 1309
104 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential

Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag

Abstract:

Renewable energy sources depend on climatic variability, so adequate energy planning requires observations of meteorological variables, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models that combines data collected at surface stations, ocean buoys, satellites and radiosondes, allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among these products is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), whose data have a spatial resolution of 0.5° x 0.5°. To overcome these difficulties, this study aims to evaluate the performance of solar radiation estimation from alternative databases, such as reanalysis data and data from meteorological satellites, which can satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the reanalysis data from the CFSR model performed well in relation to the observed data, with a coefficient of determination around 0.90. Therefore, it is concluded that these data have the potential to be used as an alternative source in locations without stations or long series of solar radiation measurements, which is important for the evaluation of solar energy potential.
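
A minimal sketch of the coefficient of determination used to compare station observations with CFSR estimates; the radiation values below are illustrative only, not the study's data.

```python
import numpy as np

def r_squared(observed, estimated):
    """Coefficient of determination between station observations and
    reanalysis estimates of solar radiation."""
    observed = np.asarray(observed, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    ss_res = np.sum((observed - estimated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative daily totals (MJ/m^2); the study reports values around 0.90.
obs = [18.2, 22.5, 15.1, 25.3, 20.0, 12.4]
cfsr = [17.5, 23.1, 16.0, 24.2, 19.1, 13.6]
print(round(r_squared(obs, cfsr), 2))
```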

Keywords: Climate, reanalysis, renewable energy, solar radiation.

PDF Downloads: 907
103 Present Status, Driving Forces and Pattern Optimization of Territory in Hubei Province, China

Authors: Tingke Wu, Man Yuan

Abstract:

“National Territorial Planning (2016-2030)” was issued by the State Council of China in 2017. As an important initiative for putting it into effect, territorial planning at the provincial level makes overall arrangements for territorial development, resource and environmental protection, comprehensive renovation and security system construction. Hubei province, as the pivot of the “Rise of Central China” national strategy, is now confronted with great opportunities and challenges in territorial development, protection, and renovation. The territorial spatial pattern has undergone a long evolution, influenced by multiple internal and external driving forces, and it is not clear what the main causes of its formation are or what the effective ways of optimizing it might be. By analyzing land use data from 2016, this paper reveals the present status of territory in Hubei. Combined with economic and social data and construction information, the driving forces of the territorial spatial pattern are then analyzed. The research demonstrates that the three types of territorial space aggregate distinctively. The driving forces comprise four aspects: the natural background, which sets the stage for the main functions; population and economic factors, which generate an agglomeration effect; transportation infrastructure construction, which leads to axial expansion; and significant provincial strategies, which reinforce the established path. On this basis, targeted strategies for optimizing the territorial spatial pattern are put forward. A hierarchical protection pattern should be established based on development intensity control, out of respect for nature. By optimizing the layout of population and industry and improving the transportation network, a polycentric, network-based development pattern could be established. These findings provide a basis for the Hubei Territorial Planning and a reference for future territorial planning in other provinces.

Keywords: Driving forces, Hubei, optimizing strategies, spatial pattern, territory.

PDF Downloads: 624
102 Effect of Segregation on the Reaction Rate of Sewage Sludge Pyrolysis in a Bubbling Fluidized Bed

Authors: A. Soria-Verdugo, A. Morato-Godino, L. M. García-Gutiérrez, N. García-Hernando

Abstract:

The evolution of the pyrolysis of sewage sludge in a fixed and a fluidized bed was analyzed using a novel measuring technique, which consists of installing the whole reactor on a precision scale capable of measuring the mass of the complete reactor with enough precision to detect the mass released by the sewage sludge sample during its pyrolysis. The inert conditions required for the pyrolysis process were obtained by supplying the bed with a nitrogen flow, and the bed temperature was adjusted to either 500 ºC or 600 ºC using a group of three electric resistors. The sewage sludge sample was supplied through the top of the bed in a 10 g batch. The measurement of the mass released by the sewage sludge sample was used to determine the evolution of the reaction rate during pyrolysis, the total amount of volatile matter released, and the pyrolysis time. The pyrolysis tests of sewage sludge in the fluidized bed were conducted using two bed materials of the same size but different densities: silica sand and sepiolite particles. The higher density of the silica sand particles induces flotsam behavior in the sewage sludge particles, which move close to the bed surface; in contrast, the lower density of sepiolite produces neutrally buoyant behavior, with the sewage sludge particles circulating properly throughout the whole bed. The analysis of the evolution of the pyrolysis process in both fluidized beds shows that pyrolysis is faster when buoyancy effects are negligible, i.e. in the bed composed of sepiolite particles. Moreover, sepiolite was found to show an absorbent capability for the volatile matter released during the pyrolysis of sewage sludge.

Keywords: Bubbling fluidized bed, pyrolysis time, segregation effects, sewage sludge.

PDF Downloads: 1128
101 Food Security in the Middle East and North Africa

Authors: Sara D. Garduño-Diaz, Philippe Y. Garduño-Diaz

Abstract:

To date, one of the few comprehensive indicators for the measurement of food security is the Global Food Security Index (GFSI). This index is a dynamic quantitative and qualitative benchmarking model, constructed from 28 unique indicators, that measures drivers of food security across both developing and developed countries. Whereas the GFSI has been calculated across a set of 109 countries, in this paper we aim to present and compare, for the Middle East and North Africa (MENA), 1) the Food Security Index scores achieved and 2) the data available on the affordability, availability, and quality of food. The data for this work were taken from the latest available report published by the creators of the GFSI, which in turn used information from national and international statistical sources. MENA countries rank from place 17/109 (Israel, although with recent political turmoil this is likely to have changed) to place 91/109 (Yemen), with household expenditure spent on food ranging from 15.5% (Israel) to 60% (Egypt). Lower spending on food as a share of household consumption in most countries and better food safety net programs in the MENA have contributed to a notable increase in food affordability. The region has also, however, experienced a decline in food availability, owing to more limited food supplies and higher volatility of agricultural production. In terms of food quality and safety, the MENA has the top-ranking country (Israel). The most frequent challenges faced by the countries of the MENA include public expenditure on agricultural research and development as well as the volatility of agricultural production. Food security is a complex phenomenon that interacts with many other indicators of a country’s wellbeing; in the MENA it is slowly but markedly improving.

Keywords: Diet, food insecurity, global food security index, nutrition, sustainability.

PDF Downloads: 3996
100 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for an accurate simulation of the joint performance in many applications such as pre-operative planning simulation, human gait analysis, and hip joint disorders. In human movement analysis, the hip joint center can be estimated using a functional method based on the relative motion of the femur to pelvis measured using reflective markers attached to the skin surface. The principal source of errors in estimation of hip joint centre location using functional methods is soft tissue artefacts due to the relative motion between the markers and bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact as the accuracy of functional methods depends upon it. Various studies have described the movement of soft tissue artefact invasively, such as intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with knee extended) of the hip joint. Results show that the artefact skin marker displacements are non-linear and larger in areas closer to the hip joint. Also marker displacements are dependent on the movement type and relatively larger in abduction movement. The quantification of soft tissue artefacts can be used as a basis for a correction procedure for hip joint kinematics.

Keywords: Hip joint centre, motion capture, soft tissue artefact, ultrasound depth measurement.

PDF Downloads: 2861
99 The Fracture Resistance of Zirconia Based Dental Crowns from Cyclic Loading: A Function of Relative Wear Depth

Authors: T. Qasim, B. El Masoud, D. Ailabouni

Abstract:

This in vitro study investigated the fatigue resistance of veneered zirconia molar crowns with different veneering ceramic thicknesses, simulating relative wear depths under cyclic loading. A mandibular first molar was prepared and then scanned using computer-aided design/computer-aided manufacturing (CAD/CAM) technology to fabricate 32 zirconia copings of uniform 0.5 mm thickness. The manufactured copings were then veneered with 1.5 mm, 1.0 mm, 0.5 mm, and 0.0 mm layers, representing 0%, 33%, 66%, and 100% relative wear of a normal ceramic thickness of 1.5 mm. All samples were thermally aged over 6000 thermo-cycles of 2 minutes in distilled water between 5 ˚C and 55 ˚C. The samples were then subjected to cyclic fatigue and fracture testing using an SD Mechatronik chewing simulator and loaded up to 1.25x10⁶ cycles or until failure. During fatigue testing, extensive cracks were observed in samples with a 0.5 mm veneering layer thickness. The 1.5-mm and 1.0-mm veneering groups did not differ in terms of the loads necessary to cause an initial crack or final failure. All-ceramic zirconia-based crown restorations with varying occlusal veneering layer thicknesses appeared to be fatigue resistant. The fracture load measurements for all tested groups, before and after fatigue loading, exceeded the clinical chewing forces in the posterior region. In general, the fracture loads increased after fatigue loading and with increasing thickness of the occlusal layering ceramic.

Keywords: All ceramic, dental crowns, relative wear, chewing simulator, cyclic loading, thermal ageing.

PDF Downloads: 910