Search results for: thermal error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5373


1083 Development of an Optimised, Automated Multidimensional Model for Supply Chains

Authors: Safaa H. Sindi, Michael Roe

Abstract:

This project divides supply chain (SC) models into seven Eras, according to the evolution of the market’s needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are yet to be created. Research objectives: The aim is to generate the two latest Eras, with their respective models, that focus on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially SMEs, plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by incorporating all the supply chain factors (i.e., offshoring, sourcing, risk) into an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: The data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study will contain statements (fuzzy rules) about the matrix of Era Six; the second round contains the feedback from the first round, and so on. Preliminary findings: Both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the best strategy among Basic SC, Lean, Agile, and Leagile SC, tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: The problematic issue facing many companies is deciding which SC model or strategy to adopt, given the many models and definitions developed over the years.
This research simplifies the choice by putting most definitions in a template and most models in the matrix of Era Six. This research is original in that the division of the SC into Eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and the heuristic learning in the AMM of Era Seven provide a synergy of tools not previously combined in the area of SC. Additionally, the OMM of Era Six is unique in that it combines most characteristics of the SC, which is an original concept in itself.

Keywords: Leagile, automation, heuristic learning, supply chain models

Procedia PDF Downloads 389
1082 Advantages of Multispectral Imaging for Accurate Gas Temperature Profile Retrieval from Fire Combustion Reactions

Authors: Jean-Philippe Gagnon, Benjamin Saute, Stéphane Boubanga-Tombet

Abstract:

Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. However, it is well known that most combustion gases such as carbon dioxide (CO₂), water vapor (H₂O), and carbon monoxide (CO) selectively absorb/emit infrared radiation at discrete energies, i.e., over a very narrow spectral range. Therefore, temperature profiles of most combustion processes derived from conventional broadband imaging are inaccurate without prior knowledge or assumptions about the spectral emissivity properties of the combustion gases. Using spectral filters allows estimating these critical emissivity parameters in addition to providing selectivity regarding the chemical nature of the combustion gases. However, due to the turbulent nature of most flames, it is crucial that such information be obtained without sacrificing temporal resolution. For this reason, Telops has developed a time-resolved multispectral imaging system which combines a high-performance broadband camera synchronized with a rotating spectral filter wheel. In order to illustrate the benefits of using this system to characterize combustion experiments, measurements were carried out using a Telops MS-IR MW on a very simple combustion system: a wood fire. The temperature profiles calculated using the spectral information from the different channels were compared with corresponding temperature profiles obtained with conventional broadband imaging. The results illustrate the benefits of the Telops MS-IR cameras for the characterization of laminar and turbulent combustion systems at a high temporal resolution.

Keywords: infrared, multispectral, fire, broadband, gas temperature, IR camera

Procedia PDF Downloads 143
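The abstract above does not give the retrieval equations; as background, a standard single-channel inversion of Planck's law shows why the spectral emissivity must be estimated before a measured radiance can be converted into a gas temperature. The constants and round-trip check below are a generic sketch, not Telops' actual algorithm.

```python
import math

C1 = 1.191042e8   # first radiation constant, W·um^4·m^-2·sr^-1 (spectral radiance form)
C2 = 1.4387752e4  # second radiation constant, um·K

def planck_radiance(T, wavelength_um):
    """Blackbody spectral radiance at temperature T (K) and wavelength (um)."""
    lam = wavelength_um
    return C1 / (lam**5 * (math.exp(C2 / (lam * T)) - 1.0))

def brightness_temperature(radiance, wavelength_um, emissivity=1.0):
    """Invert Planck's law for one spectral channel to recover temperature (K).
    Dividing by the emissivity corrects the measured radiance before inversion;
    assuming emissivity = 1 for a selectively emitting gas biases T low."""
    L = radiance / emissivity
    lam = wavelength_um
    return C2 / (lam * math.log(C1 / (lam**5 * L) + 1.0))

# Round trip: a 500 K blackbody observed at 4 um recovers 500 K exactly
T_est = brightness_temperature(planck_radiance(500.0, 4.0), 4.0)
```

An underestimated emissivity (e.g. 0.5) inflates the corrected radiance and hence the retrieved temperature, which is the error a multispectral emissivity estimate avoids.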
1081 Comparison of Effect of Promoter and K Addition of Co₃O₄ for N₂O Decomposition Reaction

Authors: R. H. Hwang, J. H. Park, K. B. Yi

Abstract:

Nitrous oxide (N₂O) is now recognized as an environmental pollutant. N₂O is one of the representative greenhouse gases and is produced by both natural and anthropogenic sources, so reducing it is very important. N₂O abatement is achieved through various processes such as HC-SCR, NH₃-SCR, and decomposition. Among them, the decomposition process is advantageous because it does not use a reducing agent. N₂O decomposition is a reaction in which N₂O is decomposed into N₂ and O₂. Catalysts for N₂O decomposition include noble metals, transition-metal ion-exchanged zeolites, and pure and mixed oxides. Among these, cobalt-based catalysts derived from hydrotalcites have gathered much attention because such spinel catalysts have large surface areas and high thermal stability. In this study, the effects of promoter and K addition on the activity were compared and analyzed. Co₃O₄ catalysts for N₂O decomposition were prepared by the co-precipitation method. Ce and Zr were added during the preparation of the catalyst as promoters with the molar ratio (Ce or Zr)/Co = 0.05. In addition, 1 wt% K₂CO₃ was doped onto the prepared catalyst by the impregnation method to investigate the effect of K on the catalyst performance. Characterization of the catalysts was carried out with SEM, BET, XRD, XPS, and H₂-TPR. The catalytic activity tests were carried out at a GHSV of 45,000 h⁻¹ and over a temperature range of 250-375 °C. The Co₃O₄ catalysts showed a spinel crystal phase, and the addition of the promoter increased the specific surface area and reduced the particle and crystal sizes. The doping of K was shown to improve the catalytic activity by increasing the concentration of Co²⁺ in the catalyst, which is the active site for the catalytic reaction. As a result, the K-doped catalyst showed higher activity than the promoter-added catalysts. It was also found through the experiments that the Co²⁺ concentration and the reduction temperature greatly affect the reactivity.

Keywords: Co₃O₄, K-doped, N₂O decomposition, promoter

Procedia PDF Downloads 169
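The activity tests above are specified by a gas hourly space velocity (GHSV) of 45,000 h⁻¹. GHSV ties the feed flow to the catalyst bed volume, so the implied flow can be back-calculated; the 0.1 mL bed volume used below is a hypothetical value for illustration only, as the abstract does not state it.

```python
def feed_flow_ml_per_min(ghsv_per_h, bed_volume_ml):
    """GHSV = volumetric feed flow / catalyst bed volume (per hour),
    so flow = GHSV * bed volume; converted here to mL/min."""
    return ghsv_per_h * bed_volume_ml / 60.0

# Hypothetical 0.1 mL catalyst bed at the abstract's GHSV of 45,000 h^-1
flow = feed_flow_ml_per_min(45_000, 0.1)  # 75.0 mL/min
```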
1080 Developing Value Chain of Synthetic Methane for Net-zero Carbon City Gas Supply in Japan

Authors: Ryota Kuzuki, Mitsuhiro Kohara, Noboru Kizuki, Satoshi Yoshida, Hidetaka Hirai, Yuta Nezasa

Abstract:

About fifty years have passed since Japan's gas supply industry became the first in the world to switch from coal and oil to LNG as a city gas feedstock. Since the Japanese government's target of net-zero carbon emissions in 2050 was announced in October 2020, the industry has entered a new era of challenges in committing to the requirements of decarbonization. This paper describes how synthetic methane, produced from renewable-energy-derived hydrogen and recycled carbon, has become a promising element of national policy for the transition toward a net-zero society. In November 2020, the Japan Gas Association announced the 'Carbon Neutral Challenge 2050' as a vision to contribute to the decarbonization of society by converting the city gas supply to carbon neutral. The key technology is methanation. This paper shows that methanation is a realistic solution that contributes to the decarbonization of the whole country at a lower social cost by utilizing the supply chain that already exists, from LNG plants to burner tips. As for the challenges during the transition period (2030-2050), when CO₂ captured from the exhaust of thermal power plants and industrial factories is expected to be used, it is proposed that a system of guarantees of origin (GO) for H₂ and CO₂ be established and that international rules for calculating and allocating greenhouse gas emissions in the supply chain be harmonized; a platform is also needed to manage tracking information on certified environmental values.

Keywords: synthetic methane, recycled carbon fuels, methanation, transition period, environmental value transfer platform

Procedia PDF Downloads 108
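The methanation step at the heart of this value chain is the Sabatier reaction, CO₂ + 4H₂ → CH₄ + 2H₂O. Its stoichiometry fixes the hydrogen and captured CO₂ needed per unit of synthetic methane (treating all species as ideal gases at the same conditions); this back-of-envelope helper is a sketch for orientation, not part of the paper.

```python
def methanation_inputs(methane_nm3):
    """Sabatier stoichiometry: CO2 + 4 H2 -> CH4 + 2 H2O.
    Each volume of synthetic methane requires four volumes of hydrogen
    and one volume of captured CO2 (equal-molar-volume assumption)."""
    return {"h2_nm3": 4.0 * methane_nm3, "co2_nm3": 1.0 * methane_nm3}

# Inputs implied by 1,000 Nm^3 of synthetic methane
req = methanation_inputs(1000.0)
```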
1079 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic development could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a Generalized Delta Rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming for lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, and ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 460
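The back-propagation network with a Generalized Delta Rule mentioned above can be sketched in a few lines: the GDR is plain gradient descent plus a momentum term that reuses a fraction of the previous weight update. The network size, learning rate, momentum, and toy deterioration data below are illustrative assumptions, not the study's actual model.

```python
import numpy as np

def train_bpn_gdr(X, y, hidden=4, eta=0.3, alpha=0.8, epochs=8000, seed=0):
    """One-hidden-layer BPN trained with the Generalized Delta Rule:
    each update is -eta * gradient + alpha * (previous update)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1)                       # hidden activations
        out = sig(h @ W2)                     # network output
        d_out = (out - y) * out * (1 - out)   # output-layer delta
        g2 = h.T @ d_out                      # output-layer gradient
        g1 = X.T @ ((d_out @ W2.T) * h * (1 - h))  # hidden-layer gradient
        dW2 = -eta * g2 + alpha * dW2         # GDR update with momentum
        dW1 = -eta * g1 + alpha * dW1
        W2 += dW2
        W1 += dW1
    return W1, W2, sig

# Toy stand-in for deterioration: pavement condition drops as ESALs accumulate
X = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])   # normalized cumulative ESALs
y = np.array([[0.9], [0.7], [0.5], [0.3], [0.1]])   # normalized condition index
W1, W2, sig = train_bpn_gdr(X, y)
pred = sig(sig(X @ W1) @ W2)
mse = float(np.mean((pred - y) ** 2))
```

The momentum term alpha smooths the descent across noisy (measurement-error-laden) gradients, which is the property the study exploits.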
1078 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The development in the construction of unconventional ships and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques for ship structures using the FE method under complex boundary conditions, which are difficult to analyze using the existing rules of ship classification societies. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal loads, linear static loads, and dynamic and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in assessing the dynamic stability of the ship. The FE method has yielded better techniques for calculating the natural frequencies and mode shapes of a ship structure so as to avoid resonance both globally and locally. There has been considerable progress toward ideal design in the ship industry over the past few years in solving complex engineering problems by employing the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. The historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.

Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis

Procedia PDF Downloads 136
1077 Applicability of Overhangs for Energy Saving in Existing High-Rise Housing in Different Climates

Authors: Qiong He, S. Thomas Ng

Abstract:

Upgrading the thermal performance of the building envelope of existing residential buildings is an effective way to reduce heat gain or heat loss. An overhang device is a common solution for building envelope improvement, as it can cut down solar heat gain and thereby reduce the energy used for space cooling in summer. Despite that, an overhang can increase the demand for indoor heating in winter because of this same reduction in solar heat gain. Evidently, overhangs have different impacts on energy use in climatic zones with different energy demands. To evaluate the impact of overhang devices on building energy performance under the different climates of China, an energy analysis model was built in a computer-based simulation program known as DesignBuilder, based on the data of a typical high-rise residential building. The energy simulation results show that a single overhang is able to cut down around 5% of the energy consumption of the case building in the stand-alone situation, or about 2% when the building is surrounded by other buildings, in regions which predominantly rely on space cooling, though it makes no contribution to energy reduction in the cold region. In regions with hot summers and cold winters, adding overhangs over windows can cut down around 4% and 1.8% of energy use with and without adjoining buildings, respectively. The results indicate that an overhang might not be an effective shading device for reducing energy consumption in mixed-climate or cold regions.

Keywords: overhang, energy analysis, computer-based simulation, DesignBuilder, high-rise residential building, climate, BIM model

Procedia PDF Downloads 363
1076 Simulation of the Collimator Plug Design for Prompt-Gamma Activation Analysis in the IEA-R1 Nuclear Reactor

Authors: Carlos G. Santos, Frederico A. Genezini, A. P. Dos Santos, H. Yorivaz, P. T. D. Siqueira

Abstract:

Prompt-Gamma Activation Analysis (PGAA) is a valuable technique for investigating the elemental composition of various samples. However, the installation of a PGAA system entails specific conditions, such as filtering the neutron beam according to the target and providing adequate shielding for both users and detectors. These requirements incur substantial costs, exceeding $100,000 including manpower. Nevertheless, a cost-effective approach involves leveraging an existing neutron beam facility to create a hybrid system integrating PGAA and Neutron Tomography (NT). The IEA-R1 nuclear reactor at IPEN/USP possesses an NT facility with suitable conditions for adapting and implementing a PGAA device. The NT facility offers a slightly colder thermal neutron flux and provides shielding for user protection. The key additional requirement involves designing detector shielding to mitigate the high gamma-ray background and safeguard the HPGe detector from neutron-induced damage. This study employs Monte Carlo simulations with the MCNP6 code to optimize the collimator plug for PGAA within the IEA-R1 NT facility. Three collimator models are proposed and simulated to assess their effectiveness in shielding the gamma and neutron radiation arising from nuclear fission. The aim is to achieve a focused prompt-gamma signal while shielding ambient gamma radiation. The simulation results indicate that one of the proposed designs is particularly suitable for the PGAA-NT hybrid system.

Keywords: MCNP6.1, neutron, prompt-gamma ray, prompt-gamma activation analysis

Procedia PDF Downloads 75
1075 DNA Methylation Score Development for In utero Exposure to Paternal Smoking Using a Supervised Machine Learning Approach

Authors: Cristy Stagnar, Nina Hubig, Diana Ivankovic

Abstract:

The epigenome is a compelling candidate for mediating long-term responses to environmental effects modifying disease risk. The main goal of this research is to develop a machine learning-based DNA methylation score, which will be valuable in delineating the unique contribution of paternal epigenetic modifications to the germline impacting childhood health outcomes. It will also be a useful tool in validating self-reports of nonsmoking and in adjusting epigenome-wide DNA methylation association studies for this early-life exposure. Using secondary data from two population-based methylation profiling studies, our DNA methylation score is based on CpG DNA methylation measurements from cord blood gathered from children whose fathers smoked pre- and peri-conceptionally. Each child’s mother and father fell into one of three class labels in the accompanying questionnaires: never smoker, former smoker, or current smoker. By applying different machine learning algorithms to the Accessible Resource for Integrated Epigenomic Studies (ARIES) sub-study of the Avon Longitudinal Study of Parents and Children (ALSPAC) data set, which we used for training and testing of our model, the best-performing algorithm for classifying children with a smoking father and a never-smoking mother was selected based on Cohen’s κ. Error in the model was identified and optimized. The final DNA methylation score was further tested and validated in an independent data set. The result is a linear combination, via a logistic link function, of the methylation values of the selected probes that accurately classified each group and contributed most to the classification. This yields a unique, robust DNA methylation score that combines information on DNA methylation and early-life exposure of offspring to paternal smoking during pregnancy and that may be used to examine the paternal contribution to offspring health outcomes.

Keywords: epigenome, health outcomes, paternal preconception environmental exposures, supervised machine learning

Procedia PDF Downloads 185
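The final score described above, a linear combination of methylation values of selected probes via a logistic link function, has the general shape sketched here. The probe IDs, weights, intercept, and beta-values below are hypothetical placeholders, since the abstract does not give the fitted coefficients.

```python
import math

def methylation_score(betas, weights, intercept=0.0):
    """Linear combination of CpG beta-values passed through a logistic link,
    giving a score in (0, 1) interpretable as exposure probability."""
    z = intercept + sum(weights[cpg] * betas[cpg] for cpg in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical probes, weights, and beta-values purely for illustration
weights = {"cg00001": 3.2, "cg00002": -2.1, "cg00003": 1.4}
exposed   = {"cg00001": 0.80, "cg00002": 0.20, "cg00003": 0.70}
unexposed = {"cg00001": 0.30, "cg00002": 0.60, "cg00003": 0.40}

s_exposed   = methylation_score(exposed, weights, intercept=-1.5)
s_unexposed = methylation_score(unexposed, weights, intercept=-1.5)
```

Thresholding the score at 0.5 would classify the first child as exposed to paternal smoking and the second as unexposed, mirroring how such a score could validate self-reported smoking status.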
1074 MXene-Based Self-Sensing of Damage in Fiber Composites

Authors: Latha Nataraj, Todd Henry, Micheal Wallock, Asha Hall, Christine Hatter, Babak Anasori, Yury Gogotsi

Abstract:

Multifunctional composites with enhanced strength and toughness for superior damage tolerance are essential for advanced aerospace and military applications. Detection of structural changes prior to visible damage may be achieved by incorporating fillers with tunable properties, such as two-dimensional (2D) nanomaterials with high aspect ratios and abundant surface-active sites. While 2D graphene, with its large surface area, good mechanical properties, and high electrical conductivity, seems ideal as a filler, its single-atomic thickness can lead to bending and rolling during processing, requiring post-processing to bond it to polymer matrices. Lately, an emerging family of 2D transition-metal carbides and nitrides, MXenes, has attracted much attention since their discovery in 2011. Metallic electronic conductivity and good mechanical properties, even with increased polymer content, coupled with hydrophilicity, make MXenes a good candidate as a filler material in polymer composites and exceptional as multifunctional damage indicators in composites. Here, we systematically study MXene (Ti₃C₂) coatings on glass fibers for fiber-reinforced polymer composites for self-sensing, using microscopy and micromechanical testing. Further testing is in progress through the investigation of local variations in the optical, acoustic, and thermal properties within the damage sites in response to strain caused by mechanical loading.

Keywords: damage sensing, fiber composites, MXene, self-sensing

Procedia PDF Downloads 120
1073 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach

Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane

Abstract:

An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheets in perforation tests, based on experimental tests coupled with numerical simulation, is presented. The impact problems (penetration and perforation) of metallic plates have been of interest for a long time. Experimental, analytical, and numerical studies have been carried out to analyze the perforation process in detail. Based on these approaches, the ballistic properties of the material have been studied. A laser sensor is used during the experiments to measure the initial and residual velocities, from which the ballistic curve and the ballistic limit are obtained. The energy balance is also reported, together with the energy absorbed by the aluminum, including the ballistic curve and ballistic limit. A high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, has been covered during the tests. The mass of the conical-nose-shaped projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation processes. The ballistic curve obtained numerically was verified against experiment, and the failure patterns are presented using the optimal mesh densities, which ensure the stability of the results. Good agreement between the numerical and experimental results is observed.

Keywords: aluminum alloy, ballistic behavior, failure criterion, numerical simulation

Procedia PDF Downloads 312
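The energy balance reported above follows directly from the laser-measured initial and residual velocities: the energy absorbed by the plate is the projectile's kinetic-energy loss. The 28 g projectile mass is from the abstract; the residual velocity in the example is an assumed reading, not a reported value.

```python
def absorbed_energy_j(mass_kg, v_initial_ms, v_residual_ms):
    """Energy absorbed by the plate during perforation (J): the projectile's
    kinetic-energy loss, E = (m/2) * (V0^2 - Vr^2)."""
    return 0.5 * mass_kg * (v_initial_ms**2 - v_residual_ms**2)

# 28 g projectile at 180 m/s exiting (hypothetically) at 150 m/s
energy = absorbed_energy_j(0.028, 180.0, 150.0)  # ~138.6 J
```

Below the ballistic limit the residual velocity is zero and the plate absorbs the entire initial kinetic energy, which is how the ballistic curve anchors the energy balance.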
1072 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibits high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256×256 and 512×512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI architectures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multimodal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder-extracted features were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 112
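The detection rule described above, flagging any input whose reconstruction MSE exceeds what the autoencoder produces on benign data, can be illustrated with a toy linear autoencoder: a PCA projection stands in for the trained deep multimodal model, and the data, dimensions, and 3-sigma threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Benign "images": flattened vectors lying near a 3-dimensional subspace
basis = rng.normal(size=(3, 64))
benign = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 64))

# "Train" a linear autoencoder: encoder = top principal components, decoder = transpose
mean = benign.mean(axis=0)
_, _, Vt = np.linalg.svd(benign - mean, full_matrices=False)
W = Vt[:3]  # (3, 64) encoder weights

def reconstruction_mse(x):
    code = (x - mean) @ W.T      # encode into the learned subspace
    recon = code @ W + mean      # decode back to input space
    return float(np.mean((x - recon) ** 2))

# Threshold fit on benign reconstruction errors (mean + 3 std as a simple rule)
errs = np.array([reconstruction_mse(x) for x in benign])
threshold = errs.mean() + 3.0 * errs.std()

# A perturbed input reconstructs poorly and is flagged
adversarial = benign[0] + 0.5 * rng.normal(size=64)
is_attack = reconstruction_mse(adversarial) > threshold
```

The deep autoencoder in the paper plays the same role as `W` here: it compresses benign structure well and adversarial perturbations badly, so reconstruction error separates the two.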
1071 Levels of CTX1 in Premenopausal Osteoporotic Women: Study Conducted in Khyber Pakhtunkhwa Province, Pakistan

Authors: Mehwish Durrani, Rubina Nazli, Muhammad Abubakr, Muhammad Shafiq

Abstract:

Objectives: To evaluate whether high socio-economic status, urbanization, and decreased ambulation can lead to early osteoporosis in women reporting from the Peshawar region. Study Design: A descriptive cross-sectional study was done. The sample size was 100 subjects, using a 30% proportion of osteoporosis, a 95% confidence level, and a 9% margin of error under the WHO software for sample size determination. Place and Duration of Study: This study was carried out in the tertiary referral health care facilities of Peshawar, viz. PGMI Hayatabad Medical Complex, Peshawar, Khyber Pakhtunkhwa Province, Pakistan. Ethical approval for the study was taken from the Institutional Ethical Research Board (IERD) at the Post Graduate Medical Institute, Hayatabad Medical Complex, Peshawar. The study was done over a six-month period. Patients and Methods: Levels of CTX1, as a marker of bone degradation, were determined in radiographically assessed perimenopausal women. These females were randomly selected and screened for osteoporosis. Measurements included hemoglobin in g/dl, ESR by the Westergren method in mm in the first hour, serum calcium in mg/dl, serum alkaline phosphatase in international units per liter, the radiographic grade of osteoporosis according to the Singh index (1-6), and the CTX1 level in pg/ml. Results: High levels of CTX1 were observed in perimenopausal women radiographically diagnosed as osteoporotic. High socio-economic class also predisposes to osteoporosis. Decreased ambulation, another risk factor, showed a significant association with increased levels of CTX1. Conclusion: The results of this study suggest that minimal ambulation and high socio-economic class both have a significant association with increased levels of serum CTX1, which in turn leads to osteoporosis and its complications.

Keywords: osteoporosis, CTX1, perimenopausal women, Hayatabad Medical Complex, Khyber Pakhtunkhwa

Procedia PDF Downloads 331
1070 Thermally Stable Nanocrystalline Aluminum Alloys Processed by Mechanical Alloying and High Frequency Induction Heat Sintering

Authors: Hany R. Ammar, Khalil A. Khalil, El-Sayed M. Sherif

Abstract:

The as-received metal powders were used to synthesize bulk nanocrystalline Al, Al-10%Cu, and Al-10%Cu-5%Ti alloys using mechanical alloying and high-frequency induction heat sintering (HFIHS). The current study investigated the influence of milling time and ball-to-powder weight ratio (BPR) on the microstructural constituents and mechanical properties of the processed materials. Powder consolidation was carried out using high-frequency induction heat sintering, in which the processed metal powders were sintered into a dense and strong bulk material. The sintering conditions applied in this process were as follows: a heating rate of 350 °C/min; a sintering time of 4 minutes; a sintering temperature of 400 °C; an applied pressure of 750 kgf/cm² (100 MPa); and a cooling rate of 400 °C/min; the process was carried out under a vacuum of 10⁻³ Torr. The powders and the bulk samples were characterized using XRD and FEGSEM techniques. The mechanical properties were evaluated at temperatures of 25 °C, 100 °C, 200 °C, 300 °C, and 400 °C to study the thermal stability of the processed alloys. The bulk nanocrystalline Al, Al-10%Cu, and Al-10%Cu-5%Ti alloys displayed extremely high hardness values even at elevated temperatures. The Al-10%Cu-5%Ti alloy displayed the highest hardness values at room and elevated temperatures, which is related to the presence of Ti-containing phases such as Al₃Ti and AlCu₂Ti; these phases are thermally stable and retain the high hardness values at elevated temperatures up to 400 °C.

Keywords: nanocrystalline aluminum alloys, mechanical alloying, hardness, elevated temperatures

Procedia PDF Downloads 454
1069 Identification, Isolation and Characterization of Unknown Degradation Products of Cefprozil Monohydrate by HPTLC

Authors: Vandana T. Gawande, Kailash G. Bothara, Chandani O. Satija

Abstract:

The present research work aimed to determine the stability of cefprozil monohydrate (CEFZ) under the various stress degradation conditions recommended by the International Conference on Harmonization (ICH) guideline Q1A(R2). Forced degradation studies were carried out under hydrolytic, oxidative, photolytic, and thermal stress conditions. The drug was found to be susceptible to degradation under all stress conditions. Separation was carried out using a high-performance thin-layer chromatography (HPTLC) system. Aluminum plates pre-coated with silica gel 60 F254 were used as the stationary phase. The mobile phase consisted of ethyl acetate: acetone: methanol: water: glacial acetic acid (7.5:2.5:2.5:1.5:0.5 v/v). Densitometric analysis was carried out at 280 nm. The system was found to give a compact spot for cefprozil monohydrate (Rf 0.45). The linear regression analysis data showed a good linear relationship over the concentration range of 200-5,000 ng/band for cefprozil monohydrate. Percent recovery for the drug was found to be in the range of 98.78-101.24. The method was found to be reproducible, with a percent relative standard deviation (%RSD) for intra- and inter-day precision of < 1.5% over the said concentration range. The method was validated for precision, accuracy, specificity, and robustness. The method has been successfully applied to the analysis of the drug in tablet dosage form. Three unknown degradation products formed under the various stress conditions were isolated by preparative HPTLC and characterized by mass spectroscopic studies.

Keywords: cefprozil monohydrate, degradation products, HPTLC, stress study, stability indicating method

Procedia PDF Downloads 299
1068 Control of Airborne Aromatic Hydrocarbons over TiO2-Carbon Nanotube Composites

Authors: Joon Y. Lee, Seung H. Shin, Ho H. Chun, Wan K. Jo

Abstract:

Polyvinyl acetate (PVA)-based titania (TiO₂)-carbon nanotube composite nanofibers (PVA-TCCNs) with various PVA-to-solvent ratios and PVA-based TiO₂ composite nanofibers (PVA-TNs) were synthesized using an electrospinning process followed by thermal treatment. The photocatalytic activities of these nanofibers in the degradation of airborne monocyclic aromatics under visible-light irradiation were examined. This study focuses on the application of these photocatalysts to the degradation of the target compounds at sub-part-per-million indoor air concentrations. The characteristics of the photocatalysts were examined using scanning electron microscopy, X-ray diffraction, ultraviolet-visible spectroscopy, and Fourier-transform infrared spectroscopy. For all the target compounds, the PVA-TCCNs showed photocatalytic degradation efficiencies superior to those of the reference PVA-TN. Specifically, the average photocatalytic degradation efficiencies for benzene, toluene, ethyl benzene, and o-xylene (BTEX) obtained using the PVA-TCCNs with a PVA-to-solvent ratio of 0.3 (PVA-TCCN-0.3) were 11%, 59%, 89%, and 92%, respectively, whereas those observed using the PVA-TNs were 5%, 9%, 28%, and 32%, respectively. PVA-TCCN-0.3 displayed the highest photocatalytic degradation efficiency for BTEX, suggesting the presence of an optimal PVA-to-solvent ratio for the synthesis of PVA-TCCNs. The average photocatalytic efficiencies for BTEX decreased from 11% to 4%, 59% to 18%, 89% to 37%, and 92% to 53%, respectively, when the flow rate was increased from 1.0 to 4.0 L min⁻¹. In addition, the average photocatalytic efficiencies for BTEX decreased from 11% to ~0%, 59% to 3%, 89% to 7%, and 92% to 13%, respectively, when the input concentration was increased from 0.1 to 1.0 ppm. The prepared PVA-TCCNs were effective for the purification of airborne aromatics at indoor concentration levels, particularly when the operating conditions were optimized.

Keywords: mixing ratio, nanofiber, polymer, reference photocatalyst

Procedia PDF Downloads 377
1067 Enhancement of Natural Convection Heat Transfer within Closed Enclosure Using Parallel Fins

Authors: F. A. Gdhaidh, K. Hussain, H. S. Qi

Abstract:

A numerical study of natural convection heat transfer in a water-filled cavity has been examined in 3D for a single-phase liquid cooling system using an array of parallel plate fins mounted on one wall of the cavity. The heat source, with dimensions of 37.5×37.5 mm mounted on a substrate, represents a computer CPU. A cold plate installed on the opposite vertical wall of the enclosure is used as the heat sink. The air flow inside the computer case is created by an exhaust fan. A turbulent air flow is assumed and the k-ε model is applied. The fins are installed on the substrate to enhance the heat transfer. The applied power range is 15–40 W. In order to determine the thermal behaviour of the cooling system, the effects of the heat input and the number of parallel plate fins are investigated. The results illustrate that as the fin number increases, the maximum heat source temperature decreases. However, when the fin number increases beyond a critical value, the temperature starts to increase because the fins are too closely spaced and obstruct the water flow. The introduction of parallel plate fins reduces the maximum heat source temperature by 10% compared to the case without fins. The cooling system maintains the maximum chip temperature at 64.68℃ at a heat input of 40 W, which is much lower than the recommended chip limit temperature of 85℃, and hence the performance of the CPU is enhanced.
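The buoyancy-driven regime in such a water-filled cavity is conventionally characterized by the Rayleigh number. A minimal sketch with textbook property values for water near room temperature (all numbers illustrative, not taken from the paper):

```python
def rayleigh_number(beta, delta_t, length, nu, alpha, g=9.81):
    """Ra = g * beta * dT * L^3 / (nu * alpha) for natural convection."""
    return g * beta * delta_t * length**3 / (nu * alpha)

# Approximate properties of water near room temperature:
ra = rayleigh_number(beta=2.1e-4,    # thermal expansion coefficient, 1/K
                     delta_t=30.0,   # source-to-sink temperature difference, K
                     length=0.0375,  # characteristic length (heat source size), m
                     nu=1.0e-6,      # kinematic viscosity, m^2/s
                     alpha=1.4e-7)   # thermal diffusivity, m^2/s
print(f"Ra = {ra:.2e}")  # order 1e7: vigorous natural convection
```

At Rayleigh numbers of this order the flow is no longer reliably laminar, which is consistent with the paper's choice of a turbulence model for the air side.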

Keywords: chips limit temperature, closed enclosure, natural convection, parallel plate, single phase liquid

Procedia PDF Downloads 265
1066 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks

Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain

Abstract:

With increasing demand for high-speed computation, power consumption, heat dissipation, and chip size issues are posing challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue, requiring reversibility and fault tolerance in the computation. Reversible computing is emerging as an alternative to conventional technologies to overcome these problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit loss issue can be solved through a unique input-output mapping, which requires reversibility, while the bit error issue requires fault tolerance in the design. In order to incorporate reversibility, a number of combinational reversible logic based circuits have been developed. However, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we have attempted to incorporate fault tolerance in sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double edge triggered D flip-flop by making them parity preserving. The importance of this work lies in the fact that it provides designs of reversible sequential circuits completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and a design of an online testable D flip-flop is proposed for the first time. We hope this work can be extended to build complex reversible sequential circuits.
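As a concrete illustration of the parity-preserving property such designs rely on, the classical Fredkin (controlled-swap) gate can be checked exhaustively. This is a generic textbook gate used only to demonstrate the two properties, not one of the flip-flop designs proposed above:

```python
from itertools import product

def fredkin(c, a, b):
    """Fredkin gate: swap a and b when the control bit c is 1."""
    return (c, b, a) if c else (c, a, b)

for bits in product((0, 1), repeat=3):
    out = fredkin(*bits)
    # Reversibility: the gate is its own inverse, so no information is lost.
    assert fredkin(*out) == bits
    # Conservative, hence parity preserving: the number of 1s never changes,
    # so any single bit-flip fault is detectable by a parity check.
    assert sum(out) == sum(bits)

print("Fredkin gate is reversible and parity preserving over all 8 inputs")
```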

Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic

Procedia PDF Downloads 545
1065 Bulk/Hull Cavitation Induced by Underwater Explosion: Effect of Material Elasticity and Surface Curvature

Authors: Wenfeng Xie

Abstract:

Bulk/hull cavitation evolution induced by an underwater explosion (UNDEX) near a free surface (bulk) or a deformable structure (hull) is numerically investigated using a multiphase compressible fluid solver coupled with a one-fluid cavitation model. A series of two-dimensional computations is conducted with varying material elasticity and surface curvature. Results suggest that material elasticity and surface curvature influence the peak pressures generated from UNDEX shock and cavitation collapse, as well as the bulk/hull cavitation regions near the surface. Results also show that such effects can be different for bulk cavitation generated from UNDEX-free surface interaction and for hull cavitation generated from UNDEX-structure interaction. More importantly, results demonstrate that shock wave focusing caused by a concave solid surface can lead to a larger cavitation region and thus intensify the cavitation reload. The findings can be linked to the strength and the direction of reflected waves from the structural surface and reflected waves from the expanding bubble surface, which are functions of material elasticity and surface curvature. Shockwave focusing effects are also observed for axisymmetric simulations, but the strength of the pressure contours for the axisymmetric simulations is less than that for the 2D simulations due to the difference in initial shock energy. The current method is limited to two-dimensional or axisymmetric applications. Moreover, the thermal effects are neglected, and the liquid is not allowed to sustain tension in the cavitation model.

Keywords: cavitation, UNDEX, fluid-structure interaction, multiphase

Procedia PDF Downloads 185
1064 Early Prediction of Diseases in a Cow for Cattle Industry

Authors: Ghufran Ahmed, Muhammad Osama Siddiqui, Shahbaz Siddiqui, Rauf Ahmad Shams Malick, Faisal Khan, Mubashir Khan

Abstract:

In this paper, a machine learning-based approach for the early prediction of diseases in cows is proposed. Different ML algorithms are applied to extract useful patterns from the available dataset. Technology has changed today’s world in every aspect of life, and advanced technologies have likewise been developed in livestock and dairy farming to monitor dairy cows in various respects. Dairy cattle monitoring is crucial as it plays a significant role in milk production around the globe. Moreover, it has become necessary for farmers to adopt the latest early prediction technologies as food demand increases with population growth. It is not easy to predict the activities of a large number of cows on a farm, so the system makes this very convenient for farmers by providing all the solutions under one roof. The cattle industry’s productivity is boosted because any disease on a cattle farm is diagnosed early and hence treated early, based on the machine learning output. The learning models are already trained and interpret the data collected in a centralized system. Essentially, different algorithms are run on the received dataset to analyze milk quality and to track cows’ health, location, and safety. The learning algorithm draws patterns from the data, which makes it easier for farmers to study any animal’s behavioral changes. With the emergence of machine learning algorithms and the Internet of Things, accurate tracking of animals is possible as the rate of error is minimized. As a result, milk productivity is increased. IoT with ML capability has given a new phase to the cattle farming industry by increasing the yield in the most cost-effective and time-saving manner.
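The core classification idea can be sketched in a few lines. The following is a deliberately minimal, stdlib-only illustration (a nearest-centroid rule on two invented sensor features), not the authors' system; the feature names, example readings, and labels are all hypothetical:

```python
# Classify a cow as "at_risk" or "healthy" from two hypothetical sensor
# features: body temperature (degrees C) and a normalized activity index.
def centroid(rows):
    """Component-wise mean of a list of feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def nearest_centroid_predict(x, centroids):
    """Return the label of the closest class centroid (squared Euclidean)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

healthy = [(38.5, 0.9), (38.6, 1.0), (38.4, 0.8)]   # labeled training readings
sick    = [(40.1, 0.3), (39.8, 0.2), (40.3, 0.4)]
cents = {"healthy": centroid(healthy), "at_risk": centroid(sick)}

# A new reading with elevated temperature and low activity:
print(nearest_centroid_predict((40.0, 0.25), cents))  # at_risk
```

A production system would of course use richer features and stronger models, but the flag-early-and-treat-early loop described above reduces to exactly this kind of prediction step.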

Keywords: IoT, machine learning, health care, dairy cows

Procedia PDF Downloads 70
1063 Biodegradable Polymer Film Incorporated with Polyphenols for Active Packaging

Authors: Shubham Sharma, Swarna Jaiswal, Brendan Duffy, Amit Jaiswal

Abstract:

The key features of any active packaging film are its biodegradability and antimicrobial properties. Biological macromolecules such as the polyphenols ferulic acid (FA) and tannic acid (TA) are naturally found in plants such as grapes, berries, and tea. In this study, antimicrobial activity screening of several polyphenols was carried out by determining the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) against two gram-negative strains (Salmonella typhimurium and Escherichia coli) and two gram-positive strains (Staphylococcus aureus and Listeria monocytogenes). FA and TA showed strong antibacterial activity at low concentrations against both gram-positive and gram-negative bacteria. The selected polyphenols FA and TA were incorporated at various concentrations (1%, 5%, and 10% w/w) into a poly(lactide)-poly(butylene adipate-co-terephthalate) (PLA-PBAT) composite film by the solvent casting method. The effect of TA and FA incorporation was characterized in terms of morphological, optical, color, mechanical, thermal, and antimicrobial properties. The thickness of the FA composite film increased by 1.5–7.2%, while that of the TA composite film increased by 0.018–1.6%. The FA and TA (10 wt%) composite films showed an approximately 65–66% increase in the UV barrier property. As the FA and TA concentration increased from 1% to 10% (w/w), the tensile strength (TS) value increased by 1.98 and 1.80 times, respectively. The water contact angle of the film decreased significantly with increasing FA and TA content in the composite film. FA showed a more significant increase in antimicrobial activity than TA in the composite film against Listeria monocytogenes and E. coli. The FA and TA composite films thus have potential for application in active food packaging.

Keywords: active packaging, biodegradable film, polyphenols, UV barrier, tensile strength

Procedia PDF Downloads 152
1062 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data

Authors: Gayathri Nagarajan, L. D. Dhinesh Babu

Abstract:

Health care is one of the prominent industries that generate voluminous data, creating the need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications as they are directly related to human life. Though many machine learning techniques and big data solutions are used for efficient processing and prediction in health care data, different techniques and frameworks have proved effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques along with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform specifically for health care big data and ii) discuss the results from the experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions. The experimental results show that the accuracy of the other machine learning techniques is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable accuracy without depending largely on the dataset characteristics.
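The comparison metric named above is straightforward to pin down precisely. A minimal sketch in plain Python (rather than Spark), purely to fix the definition; the example labels are invented:

```python
def misclassification_error(y_true, y_pred):
    """Fraction of predictions that disagree with the true labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("label and prediction lists must be the same length")
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

# 2 wrong out of 8 predictions -> error rate 0.25 (i.e. accuracy 0.75)
print(misclassification_error([1, 0, 1, 1, 0, 0, 1, 0],
                              [1, 0, 0, 1, 0, 1, 1, 0]))  # 0.25
```

In Spark itself, the gradient boosted trees model is available as `pyspark.ml.classification.GBTClassifier`, and this error rate is simply one minus the accuracy reported by `MulticlassClassificationEvaluator`.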

Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform

Procedia PDF Downloads 240
1061 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning

Authors: Michael A. Sprayberry, Vincent C. Paquit

Abstract:

Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design of experiments and process mapping techniques, a successful on-the-fly optimization framework that can be adapted to MPBEBM systems is still lacking. Additionally, data-intensive physics-based modeling and simulation methods are difficult to sustain for every metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology that frames MPBEBM parameter selection as an optimization problem. An off-policy MORL framework based on policy gradients is proposed to discover optimal sets of beam power (P) and beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P–v space to maximize returns for the uncertain powder bed environment, producing a melt pool and phase transformation closer to the optimum. The culmination of the training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat} where the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resulting objectives and the mapping of returns to the P–v space show convergence with experimental observations. The framework therefore provides a model-free multi-objective approach to discovery without the need for trial-and-error experiments.
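The policy-gradient idea over a discretized P–v grid can be sketched with a toy, stdlib-only example. Everything below is an invented stand-in: the melt-depth surrogate replaces the Eagar-Tsai model, the grids, target depth, and learning rate are arbitrary, and a single objective is used rather than the paper's multi-objective formulation:

```python
import math, random
random.seed(0)

# Toy surrogate for the melt-pool simulation: depth grows with power P and
# shrinks with velocity v (purely illustrative, not the Eagar-Tsai model).
def melt_depth(P, v):
    return 0.05 * P / v

powers = [600, 800, 1000]   # W  (hypothetical grid)
speeds = [0.5, 1.0, 2.0]    # m/s (hypothetical grid)
actions = [(P, v) for P in powers for v in speeds]
target = 40.0               # desired steady-state depth (arbitrary units)

prefs = [0.0] * len(actions)  # softmax preferences, one per (P, v) pair

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

alpha = 0.1
for _ in range(3000):
    probs = softmax(prefs)
    i = random.choices(range(len(actions)), probs)[0]
    P, v = actions[i]
    reward = -abs(melt_depth(P, v) - target)  # closer to target => higher reward
    # REINFORCE update for a softmax policy over a discrete action set
    for j in range(len(actions)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        prefs[j] += alpha * reward * grad

best = actions[max(range(len(actions)), key=lambda j: prefs[j])]
print(best)  # typically the (P, v) pair whose surrogate depth hits the target
```

The real framework differs in scale and fidelity, but the loop structure (sample an action from the policy, score it in the simulated environment, nudge the policy toward higher returns) is the same.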

Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization

Procedia PDF Downloads 90
1060 Hydrogen Production Through Thermocatalytic Decomposition of Methane Over Biochar

Authors: Seyed Mohamad Rasool Mirkarimi, David Chiaramonti, Samir Bensaid

Abstract:

Catalytic methane decomposition (CMD) is a one-step process for hydrogen production in which the carbon in the methane molecule is sequestered in the form of stable and higher-value carbon materials. Metallic catalysts and carbon-based catalysts are the two major types of catalysts utilized for the CMD process. Although carbon-based catalysts have lower activity compared to metallic ones, they are less expensive and offer high thermal stability and strong resistance to chemical impurities such as sulfur. They also require less costly separation methods, as some carbon-based catalysts have no active metal component. Since the regeneration of metallic catalysts requires burning off the carbon on their surfaces, which emits CO/CO2, using carbon-based catalysts is recommended in some cases because regeneration can be completely avoided and the catalyst can be used directly in other processes. This work focuses on the effect of biochar as a carbon-based catalyst for the conversion of methane into hydrogen and carbon. Biochar produced from the pyrolysis of poplar wood and activated biochar are used as catalysts for this process. In order to observe the impact of carbon-based catalysts on methane conversion, methane cracking in the absence and presence of catalysts was performed for gas streams with different levels of methane concentration. The results of these experiments prove that the conversion of methane in the absence of catalysts at 900 °C is negligible, whereas in the presence of biochar and activated biochar, significant growth is observed. Comparing the results of the tests with char and activated char shows that the enhancement obtained in the BET surface area of the catalyst through activation leads to more than 10 vol.% methane conversion.
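The conversion figures discussed above follow a simple definition. A minimal sketch (the example concentrations are illustrative, and the simplified form below neglects the mole-number increase from producing 2 mol H2 per mol CH4 converted):

```python
def methane_conversion(ch4_in, ch4_out):
    """Fractional CH4 conversion from inlet and outlet concentrations.

    Simplified: compares concentrations directly, ignoring the volume
    expansion caused by H2 formation (2 mol H2 per mol CH4 converted).
    """
    if ch4_in <= 0:
        raise ValueError("inlet CH4 concentration must be positive")
    return (ch4_in - ch4_out) / ch4_in

# e.g. a stream entering at 30 vol.% CH4 and leaving at 27 vol.%:
print(round(100 * methane_conversion(30.0, 27.0), 1))  # 10.0
```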

Keywords: hydrogen production, catalytic methane decomposition, biochar, activated biochar, carbon-based catalysts

Procedia PDF Downloads 81
1059 Computer-Aided Ship Design Approach for Non-Uniform Rational Basis Spline Based Ship Hull Surface Geometry

Authors: Anu S. Nair, V. Anantha Subramanian

Abstract:

This paper presents a surface development and fairing technique combining the features of a modern computer-aided design tool, namely the Non-Uniform Rational Basis Spline (NURBS), with an algorithm to obtain a rapidly faired hull form. Some of the older series-based designs give sectional area distributions, such as the Wageningen-Lap series; others, such as FORMDATA, give more comprehensive offset data points. Nevertheless, this basic data still requires fairing to obtain an acceptable faired hull form. This method uses the input of sectional area distribution as an example and arrives at the faired form. Characteristic section shapes define any general ship hull form in the entrance, parallel mid-body, and run regions. The method defines a minimum number of control points at each section, and using the golden-section search method or the bisection method, the section shape converges to the one with the prescribed sectional area with a minimized error in the area fit. The section shapes are combined to evolve the faired surface by NURBS, typically in 20 iterations. The advantage of the method is that it is fast and robust and evolves the faired hull form through minimal iterations. The curvature criterion check for the hull lines shows the evolution of the smooth faired surface. The method is applicable to hull forms from any parent series, and the evolved form can be evaluated for hydrodynamic performance as is done in more modern design practice. The method can handle complex shapes such as that of the bulbous bow. The surface patches developed fit together at their common boundaries with curvature continuity and a fairness check. The development is coded in MATLAB, and an example illustrates the method. The most important advantage is the rapid iterative fairing of the hull form in quick time.
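The area-matching step can be illustrated with a toy one-parameter section. The following sketch is not the authors' MATLAB code: the section model y(z) = B·(z/T)^n, the values of B, T, and the prescribed area are all invented, and only the golden-section search over the shape parameter mirrors the method described above:

```python
import math

def section_area(n, B=10.0, T=8.0):
    """Area under the section curve y(z) = B*(z/T)**n from z=0 to z=T.
    Analytically this integral equals B*T/(n+1)."""
    return B * T / (n + 1.0)

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

prescribed = 32.0  # target sectional area (illustrative units)
err = lambda n: abs(section_area(n) - prescribed)   # area-fit error to minimize
n_star = golden_section(err, 0.1, 5.0)
print(round(n_star, 4))  # B*T/(n+1) = 32 with B*T = 80  ->  n = 1.5
```

In the actual method each section has several NURBS control points rather than one exponent, but the convergence criterion (minimized error between computed and prescribed sectional area) is the same.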

Keywords: computer-aided design, methodical series, NURBS, ship design

Procedia PDF Downloads 169
1058 Geopolymer Concrete: A Review of Properties, Applications and Limitations

Authors: Abbas Ahmed Albu Shaqraa

Abstract:

The concept of a safe environment and low greenhouse gas emissions is a common concern, especially in the construction industry. Nearly a ton of carbon dioxide (CO2) is emitted in producing only one ton of Portland cement, the primary ingredient of concrete. Current studies have investigated the utilization of several waste materials in producing a cement-free concrete. Geopolymer concrete is a green material that results from the reaction of an aluminosilicate material with an alkaline liquid. A summary of several recent researches in geopolymer concrete is presented in this manuscript. The presented review considers the use of several waste materials, including fly ash, granulated blast furnace slag, cement kiln dust, kaolin, metakaolin, and limestone powder, as binding materials in making geopolymer concrete. Moreover, the mechanical, chemical, and thermal properties of geopolymer concrete are reviewed, and its applications and limitations are discussed as well. The results showed a high early compressive strength gain in geopolymer concrete when dry-heating or steam curing was performed. Also, it was stated that the outstanding acidic resistance of geopolymer concrete makes it usable where ordinary Portland cement concrete would be questionable; thus, commercial geopolymer concrete pipes are favored for sewer systems under highly acidic conditions. Furthermore, it was reported that geopolymer concrete can withstand up to 1200 °C in fire without losing its strength integrity, whereas Portland cement concrete loses its function upon heating to only a few hundred °C. However, geopolymer concrete is still considered an emerging field and is occupied mainly by the precast industries.

Keywords: geopolymer concrete, Portland cement concrete, alkaline liquid, compressive strength

Procedia PDF Downloads 221
1057 A Comprehensive Review of Artificial Intelligence Applications in Sustainable Building

Authors: Yazan Al-Kofahi, Jamal Alqawasmi

Abstract:

In this study, a systematic literature review (SLR) was conducted with the main goal of assessing the existing literature on how artificial intelligence (AI), machine learning (ML), and deep learning (DL) models are used in sustainable architecture applications and issues, including thermal comfort satisfaction, energy efficiency, cost prediction, and many other issues. For this reason, the search strategy was initiated using different databases, including Scopus, Springer, and Google Scholar. The inclusion criteria were based on two research strings related to DL, ML, and sustainable architecture. Moreover, the timeframe for the inclusion of papers was open, although most of the included papers were published in the previous four years. As a paper filtration strategy, conferences and books were excluded from the database search results. Using these inclusion and exclusion criteria, the search was conducted, and a sample of 59 papers was selected as the final set included in the analysis. The data extraction phase extracted the needed data from these papers, which were then analyzed and correlated. The results of this SLR showed that there are many applications of ML and DL in sustainable buildings and that this topic is currently trending. It was found that most of the papers focused their discussions on addressing environmental sustainability issues and factors using machine learning predictive models, with a particular emphasis on the use of Decision Tree algorithms. Moreover, it was found that the Random Forest regressor demonstrates strong performance across all feature selection groups in terms of building cost prediction as a machine-learning predictive model.

Keywords: machine learning, deep learning, artificial intelligence, sustainable building

Procedia PDF Downloads 67
1056 Effect of Particle Size and Concentration of Pomegranate (Punica granatum l.) Peel Powder on Suppression of Oxidation of Edible Plant Oils

Authors: D. G. D. C. L. Munasinghe, M. S. Gunawardana, P. H. P. Prasanna, C. S. Ranadheera, T. Madhujith

Abstract:

Lipid oxidation is an important process that affects the shelf life of edible oils. Oxidation produces off-flavors, off-odors, and chemical compounds that lead to adverse health effects. Chemical mechanisms such as autoxidation, photo-oxidation, and thermal oxidation are responsible for lipid oxidation. Refined, bleached, and deodorized (RBD) coconut oil, virgin coconut oil (VCO), and corn oil are widely used plant oils. The pomegranate fruit is known to possess high antioxidative efficacy, and its peel contains higher antioxidant activity than the aril and pulp membrane. This study examined the effect of particle size and concentration of pomegranate peel powder on the suppression of oxidation of RBD coconut oil, VCO, and corn oil. Pomegranate peel powder was incorporated into each oil sample as micro (< 250 µm) and nano particles (280–300 nm) at 100 ppm and 200 ppm concentrations. A control sample of each oil was prepared, devoid of pomegranate peel powder. The stability of the oils against autoxidation was evaluated by storing the samples at 60 °C for 28 days. The level of oxidation was assessed by peroxide value and thiobarbituric acid reactive substances on days 0, 1, 3, 5, 7, 14, and 28. VCO containing pomegranate particles of 280–300 nm at 200 ppm showed the highest oxidative stability, followed by RBD coconut oil and corn oil. The results revealed that pomegranate peel powder with 280–300 nm particle size at 200 ppm concentration was the best at mitigating oxidation of RBD coconut oil, VCO, and corn oil. There is great potential for utilizing pomegranate peel powder as an antioxidant agent to reduce oxidation of edible plant oils.

Keywords: antioxidant, autoxidation, micro particles, nano particles, pomegranate peel powder

Procedia PDF Downloads 453
1055 Feasibilities for Recovering of Precious Metals from Printed Circuit Board Waste

Authors: Simona Ziukaite, Remigijus Ivanauskas, Gintaras Denafas

Abstract:

Market development of electrical and electronic equipment and its short life cycle are driving increasing waste streams. Gold (Au), copper (Cu), silver (Ag), and palladium (Pd) can be found on printed circuit boards, and these metals make up the largest share of a printed circuit board's value. Therefore, printed circuit board scrap is valuable as a potential raw material for precious metals recovery. For the comparison of Cu, Au, Ag, and Pd recovery from waste printed circuit boards, leaching of the metals with chemical reagents was selected. The study was conducted using the selected multistage technique for Au, Cu, Ag, and Pd recovery from printed circuit boards. In the first and second metal leaching stages, 2M H2SO4 and H2O2 (35%) were used as the elution reagents. In the third stage, a solution of 20 g/l thiourea and 6 g/l Fe2(SO4)3 was used for precious metals leaching. To verify the efficiency of the method, a metal leaching test with aqua regia was carried out. Based on the experimental study, using the preferred methodology a leaching efficiency of 60% for Au and 85.5% for Cu dissolution was achieved. Metal leaching efficiencies after mechanical crushing and thermal treatment of the waste increased by 1.7 times (40%) for copper, 1.6 times (37%) for gold, and 1.8 times (44%) for silver. It was noticed that the Au content in old (> 20 years) waste is 17 times higher, the Cu content 4 times higher, and the Ag content 2 times higher than in new (< 1 year) waste. Palladium was not found in the new printed circuit board waste; however, it was established that 1.064 g of Pd can be recovered from 1 t of old printed circuit board waste (leaching with aqua regia). It was also found that 1.064 g of Ag can be recovered from 1 t of old printed circuit board waste. Precious metals recovery in Lithuania was estimated in this study. Given the amounts of printed circuit board waste generated, the limits for recovery of precious metals were identified.
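The reported pretreatment gains can be sanity-checked with back-of-envelope arithmetic. In the sketch below, the "before" efficiency is back-calculated from the abstract's 85.5% figure and 1.7-times factor, so treat it as an illustrative inference rather than a measured value:

```python
def improvement_factor(eff_before, eff_after):
    """Ratio by which a pretreatment step multiplies leaching efficiency."""
    return eff_after / eff_before

cu_after = 85.5             # % Cu dissolved after crushing + thermal treatment
cu_before = cu_after / 1.7  # implied efficiency before pretreatment (inferred)
print(round(cu_before, 1))  # roughly 50% Cu dissolution before pretreatment

assert abs(improvement_factor(cu_before, cu_after) - 1.7) < 1e-9
```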

Keywords: leaching efficiency, limits for recovery, precious metals recovery, printed circuit board waste

Procedia PDF Downloads 391
1054 A Photoemission Study of Dye Molecules Deposited by Electrospray on Rutile TiO2 (110)

Authors: Nouf Alharbi, James O'Shea

Abstract:

For decades, renewable energy sources have received considerable global interest due to the increase in fossil fuel consumption. The abundant energy provided by sunlight makes dye-sensitised solar cells (DSSCs) a promising alternative to conventional silicon and thin-film solar cells, as their transparency and tunable colours make them suitable for applications such as windows and glass facades. The transfer of an excited electron onto the surface is an important process in the DSSC system, so different groups of dye molecules were studied on the rutile TiO2 (110) surface. The study of organic dyes has become of interest to researchers because ruthenium is a rare and expensive metal, while metal-free organic dyes offer many features, such as high molar extinction coefficients, low manufacturing costs, and ease of structural modification and synthesis. Some groups have developed organic dyes exhibiting lower light-harvesting efficiencies, ranging between 4% and 8%. Since most dye molecules are too complicated or fragile to be deposited by thermal evaporation or sublimation in ultra-high vacuum (UHV), all dyes in this study (i.e., D5, SC4, and R6) were deposited in situ using the electrospray deposition technique combined with X-ray photoelectron spectroscopy (XPS) as an alternative method to obtain high-quality monolayers on titanium dioxide. These organic molecules adsorbed onto rutile TiO2 (110) are explored by XPS, which can be used to obtain element-specific information on the chemical structure and to study bonding and interaction sites on the surface.

Keywords: dyes, deposition, electrospray, molecules, organic, rutile, sensitised, XPS

Procedia PDF Downloads 74