Search results for: exergetic efficiency
513 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach
Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis
Abstract:
The conventional methods used in the construction industry often result in significant rework, since most decisions are taken on site under the pressure of project deadlines and with improper information flow, which leads to ineffective coordination. However, today’s architecture, engineering, and construction (AEC) stakeholders demand faster and more accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. Hence, the building information modelling (BIM) concept was developed as a solution to fulfil these necessities. Even though BIM has been successfully implemented in much of the world, it is still in its early stages in Germany, since stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, small and medium-sized construction companies are still reluctant to implement the BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles to implementing BIM for prefabrication. Among all the advantages of BIM, prefabrication is chosen for this paper because it has a strong impact on both the time and cost factors of a construction project. The positive impact of prefabrication can be explicitly observed by the project stakeholders and participants, which helps to break through the scepticism of small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach executed in two case studies. The first case study represents on-site prefabrication, and the second was done for off-site prefabrication. It was planned in such a way that the first case study gives the workers at the site first-hand experience with the BIM model, so that they can make full use of the created BIM model, which is a better representation compared to the traditional 2D plan. The main aim of the first case study is to build confidence in the use of BIM models, and this was followed by the execution of off-site prefabrication in the second case study. Based on the case studies, a cost and time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time and ensure minimal or no waste, better accuracy, and less problem-solving at the construction site. It is also observed that this process requires more planning time and better communication and coordination between different disciplines such as mechanical, electrical, plumbing, and architecture, which was the major obstacle to successful implementation. This paper was carried out from the perspective of small and medium-sized mechanical contracting companies for the private building sector in Germany.
Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company
Procedia PDF Downloads 113
512 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies
Authors: Praniil Nagaraj
Abstract:
This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. This paper's contributions can be categorized into three main areas. Firstly, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution of this paper is the description of data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating the conduction of simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples. The first case study explores improving the efficiency of food and necessities distribution, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocation and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life for the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis
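As an illustration of the regression step described in this abstract, the sketch below fits an ordinary least squares model relating crowdsourced homeless counts to socio-economic indicators. It is a minimal sketch only: the data file and column names are hypothetical placeholders, not the authors' dataset or pipeline.

```python
# Minimal sketch of the econometric step: OLS regression of unsheltered counts
# against socio-economic indicators. The file "homeless_observations.csv" and
# all column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("homeless_observations.csv")  # e.g. one row per district and month

# Explanatory variables assumed for illustration only
X = df[["unemployment_rate", "housing_affordability_index", "labor_demand"]]
X = sm.add_constant(X)          # add intercept term
y = df["unsheltered_count"]     # crowdsourced, de-duplicated counts

model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())          # coefficients, p-values and R2 for interpretation
```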
Procedia PDF Downloads 44
511 A Preliminary in vitro Investigation of the Acetylcholinesterase and α-Amylase Inhibition Potential of Pomegranate Peel Extracts
Authors: Zoi Konsoula
Abstract:
The increasing prevalence of Alzheimer’s disease (AD) and diabetes mellitus (DM) makes them major global health problems. Recently, the inhibition of key enzyme activity has been considered a potential treatment for both diseases. Specifically, inhibition of acetylcholinesterase (AChE), the key enzyme involved in the breakdown of the neurotransmitter acetylcholine, is a promising approach for the treatment of AD, while inhibition of α-amylase retards the hydrolysis of carbohydrates and thus reduces hyperglycemia. Unfortunately, commercially available AChE and α-amylase inhibitors are reported to possess side effects. Consequently, there is a need to develop safe and effective treatments for both diseases. In the present study, pomegranate peel (PP) was extracted using various solvents of increasing polarity, while two extraction methods were employed, conventional maceration and ultrasound-assisted extraction (UAE). The concentration of bioactive phytoconstituents, such as total phenolics (TPC) and total flavonoids (TFC), in the prepared extracts was evaluated by the Folin-Ciocalteu and the aluminum-flavonoid complex method, respectively. Furthermore, the anti-neurodegenerative and anti-hyperglycemic activity of all extracts was determined using AChE and α-amylase inhibitory activity assays, respectively. The inhibitory activity of the extracts against AChE and α-amylase was characterized by estimating their IC₅₀ value from a dose-response curve, while galanthamine and acarbose were used as positive controls, respectively. Finally, the kinetics of AChE and α-amylase in the presence of the most potent inhibitory extracts were determined by the Lineweaver-Burk plot. The methanolic extract prepared using UAE contained the highest amount of phytoconstituents, followed by the respective ethanolic extract. All extracts inhibited acetylcholinesterase in a dose-dependent manner, while the increased anticholinesterase activity of the methanolic (IC₅₀ = 32 μg/mL) and ethanolic (IC₅₀ = 42 μg/mL) extracts was positively correlated with their TPC content. Furthermore, the activity of the aforementioned extracts was comparable to that of galanthamine. Similar results were obtained in the case of α-amylase; however, all extracts showed a lower inhibitory effect on the carbohydrate-hydrolyzing enzyme than on AChE, since the IC₅₀ value ranged from 84 to 100 μg/mL. Also, the α-amylase inhibitory effect of the extracts was lower than that of acarbose. Finally, the methanolic and ethanolic extracts prepared by UAE inhibited both enzymes in a mixed (competitive/noncompetitive) manner, since the Kₘ value of both enzymes increased in the presence of the extracts, while the Vmax value decreased. The results of the present study indicate that PP may be a useful source of active compounds for the management of AD and DM. Moreover, taking into consideration that PP is an agro-industrial waste product, its valorization could not only result in economic efficiency but also reduce environmental pollution.
Keywords: acetylcholinesterase, Alzheimer’s disease, α-amylase, diabetes mellitus, pomegranate
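For readers unfamiliar with how an IC₅₀ is obtained from a dose-response curve, the sketch below fits a four-parameter logistic (Hill) model to percent-inhibition data. The concentration and inhibition values are synthetic examples, not data from this study.

```python
# Sketch: estimating IC50 by fitting a four-parameter logistic (Hill) curve
# to percent-inhibition data. The data points below are synthetic examples.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([5, 10, 25, 50, 100, 200], dtype=float)        # extract conc., ug/mL
inhibition = np.array([12, 24, 43, 61, 78, 90], dtype=float)   # % inhibition

def hill(c, bottom, top, ic50, slope):
    # Four-parameter logistic dose-response model
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** slope)

p0 = [0.0, 100.0, 50.0, 1.0]                 # initial parameter guesses
params, _ = curve_fit(hill, conc, inhibition, p0=p0, maxfev=10000)
print(f"Estimated IC50 = {params[2]:.1f} ug/mL")
```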
Procedia PDF Downloads 122
510 Controlled Drug Delivery System for Delivery of Poor Water Soluble Drugs
Authors: Raj Kumar, Prem Felix Siril
Abstract:
The poor aqueous solubility of many pharmaceutical drugs and potential drug candidates is a major challenge in drug development. Nanoformulation of such candidates is one of the main solutions for the delivery of such drugs. We initially developed the evaporation assisted solvent-antisolvent interaction (EASAI) method, which is useful for preparing nanoparticles of poorly water-soluble drugs with spherical morphology and particle sizes below 100 nm. However, to further improve the formulation, reduce the number of doses and limit side effects, it is important to control the delivery of the drugs, and many drug delivery systems are available for this purpose. Among the many nano-drug carrier systems, solid lipid nanoparticles (SLNs) have many advantages over the others, such as high biocompatibility, stability, non-toxicity and the ability to achieve controlled drug release and drug targeting. SLNs can be administered through all existing routes due to the high biocompatibility of lipids. SLNs are usually composed of a lipid, a surfactant and a drug encapsulated in the lipid matrix. A number of non-steroidal anti-inflammatory drugs (NSAIDs) have poor bioavailability resulting from their poor aqueous solubility. In the present work, SLNs loaded with NSAIDs such as Nabumetone (NBT), Ketoprofen (KP) and Ibuprofen (IBP) were successfully prepared using different lipids and surfactants. We studied and optimized the experimental parameters using a number of lipids, surfactants and NSAIDs. The effect of different experimental parameters, such as lipid to surfactant ratio, volume of water, temperature, drug concentration and sonication time, on the particle size of SLNs prepared by hot-melt sonication was studied. It was found that particle size was directly proportional to drug concentration and inversely proportional to surfactant concentration, volume of water added and temperature of water. SLNs prepared at the optimized conditions were characterized thoroughly using different techniques such as dynamic light scattering (DLS), field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), atomic force microscopy (AFM), X-ray diffraction (XRD), differential scanning calorimetry and Fourier transform infrared spectroscopy (FTIR). We successfully prepared SLNs below 220 nm using different lipid and surfactant combinations. The drugs KP, NBT and IBP showed entrapment efficiencies of 74%, 69% and 53%, with drug loadings of 2%, 7% and 6%, respectively, in SLNs of Campul GMS 50K and Gelucire 50/13. The in-vitro drug release profile of the drug-loaded SLNs showed that nearly 100% of the drug was released in 6 h.
Keywords: nanoparticles, delivery, solid lipid nanoparticles, hot-melt sonication, poor water soluble drugs, solubility, bioavailability
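The entrapment efficiency and drug loading percentages quoted above follow from a simple mass balance between the drug added and the unentrapped (free) drug. The sketch below shows that calculation with assumed masses; the numbers are placeholders rather than measured values from this work.

```python
# Sketch: entrapment efficiency (EE%) and drug loading (DL%) of drug-loaded SLNs.
# The masses below are illustrative assumptions, not measured values.
total_drug_mg = 10.0        # drug added to the formulation
free_drug_mg = 2.6          # unentrapped drug found in the supernatant
lipid_surfactant_mg = 90.0  # total carrier mass in the SLN dispersion

entrapped_mg = total_drug_mg - free_drug_mg
ee_percent = 100.0 * entrapped_mg / total_drug_mg
dl_percent = 100.0 * entrapped_mg / (entrapped_mg + lipid_surfactant_mg)

print(f"Entrapment efficiency: {ee_percent:.1f} %")
print(f"Drug loading: {dl_percent:.1f} %")
```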
Procedia PDF Downloads 312
509 The International Fight against the Financing of Terrorism: Analysis of the Anti-Money Laundering and Combating Financing of Terrorism Regime
Authors: Loukou Amoin Marie Djedri
Abstract:
Financing is important for all terrorists – from the largest organizations in control of territories to the smallest groups – not only for spreading fear through attacks but also for financing the expansion of terrorist dogmas. These organizations pose serious threats to the international community. The disruption of terrorist financing aims to create a hostile environment for the growth of terrorism and to limit considerably the capacities of terrorist groups. The World Bank (WB), together with the International Monetary Fund (IMF), decided to include in their scope the fight against money laundering and the financing of terrorism, in order to assist Member States in protecting their internal financial systems from use and abuse by terrorism and in reinforcing their legal systems. To do so, they have adopted the Anti-Money Laundering/Combating the Financing of Terrorism (AML/CFT) standards that have been set up by the Financial Action Task Force. This set of standards, recognized as the international standard for anti-money laundering and combating the financing of terrorism, has to be implemented by Member States in order to strengthen their judicial systems and relevant national institutions. However, we noted that, to date, some Member States still have significant AML/CFT deficiencies, which can constitute serious threats not only to the country’s economic stability but also to the global financial system. In addition, studies have stressed that repressive measures are implemented by countries more than preventive measures, which could be an important weakness in a state's security system. Furthermore, we noticed that the AML/CFT standards evolve slowly, while the techniques used by terrorist networks keep developing. The goal of the study is to show how to enhance global AML/CFT compliance through the work of the IMF and the WB, to help Member States consolidate their financial systems. To encourage and ensure the effectiveness of these standards, a methodology for assessing compliance with the AML/CFT standards has been created to follow up on the concrete implementation of these standards and to provide accurate technical assistance to countries in need. A risk-based approach has also been adopted as a key component of the implementation of the AML/CFT standards, with the aim of strengthening their efficiency. However, we noted that the assessment is not efficient in the process of enhancing AML/CFT measures because it seems to lack adaptation to the country situation. In other words, internal and external factors are not sufficiently taken into account in a country assessment program. The purpose of this paper is to analyze the AML/CFT regime in the fight against the financing of terrorism and to find lasting solutions to achieve global AML/CFT compliance. The work of all the organizations involved in this combat is imperative to protect the financial network and to lead to the disintegration of terrorist groups in the future.
Keywords: AML/CFT standards, financing of terrorism, international financial institutions, risk-based approach
Procedia PDF Downloads 275
508 Carbon Capture and Storage Using Porous-Based Aerogel Materials
Authors: Rima Alfaraj, Abeer Alarawi, Murtadha AlTammar
Abstract:
The global energy landscape heavily relies on the oil and gas industry, which faces the critical challenge of reducing its carbon footprint. To address this issue, the integration of advanced materials like aerogels has emerged as a promising solution to enhance sustainability and environmental performance within the industry. This study thoroughly examines the application of aerogel-based technologies in the oil and gas sector, focusing particularly on their role in carbon capture and storage (CCS) initiatives. Aerogels, known for their exceptional properties, such as high surface area, low density, and customizable pore structure, have garnered attention for their potential in various CCS strategies. The review delves into various fabrication techniques utilized in producing aerogel materials, including sol-gel, supercritical drying, and freeze-drying methods, to assess their suitability for specific industry applications. Beyond fabrication, the practicality of aerogel materials in critical areas such as flow assurance, enhanced oil recovery, and thermal insulation is explored. The analysis spans a wide range of applications, from potential use in pipelines and equipment to subsea installations, offering valuable insights into the real-world implementation of aerogels in the oil and gas sector. The paper also investigates the adsorption and storage capabilities of aerogel-based sorbents, showcasing their effectiveness in capturing and storing carbon dioxide (CO₂) molecules. Optimization of pore size distribution and surface chemistry is examined to enhance the affinity and selectivity of aerogels towards CO₂, thereby improving the efficiency and capacity of CCS systems. Additionally, the study explores the potential of aerogel-based membranes for separating and purifying CO₂ from oil and gas streams, emphasizing their role in the carbon capture and utilization (CCU) value chain in the industry. Emerging trends and future perspectives in integrating aerogel-based technologies within the oil and gas sector are also discussed, including the development of hybrid aerogel composites and advanced functional components to further enhance material performance and versatility. By synthesizing the latest advancements and future directions in aerogel used for CCS applications in the oil and gas industry, this review offers a comprehensive understanding of how these innovative materials can aid in transitioning towards a more sustainable and environmentally conscious energy landscape. The insights provided can assist in strategic decision-making, drive technology development, and foster collaborations among academia, industry, and policymakers to promote the widespread adoption of aerogel-based solutions in the oil and gas sector.Keywords: CCS, porous, carbon capture, oil and gas, sustainability
Procedia PDF Downloads 41
507 Recycling of Sintered Neodymium-Iron-Boron (NdFeB) Magnet Waste via Oxidative Roasting and Selective Leaching
Authors: Woranittha Kritsarikan
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5% of the permanent magnet market. Since their typical composition of 29-32% Nd, 64.2-68.5% Fe and 1-1.2% B contains a significant amount of rare earth metals that will be subject to shortages in the future, domestic NdFeB magnet waste recycling should be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling the magnet wastes, both from the manufacturing process and from end-of-life products. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research, therefore, focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production; the waste contained 23.6% Nd, 60.3% Fe and 0.261% B. The aim was to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550-800 °C to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 hours. The leachate was then subjected to drying and roasting at 700-800 °C prior to precipitation with oxalic acid and calcination to obtain neodymium oxide as the recycled product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄) also found. Peaks of neodymium oxide (Nd₂O₃) were observed in lesser amounts. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks became more pronounced at higher oxidative roasting temperatures. After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently removed by water leaching and filtration, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the further steps of precipitation and calcination to finally obtain neodymium oxide.
Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
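The overall recovery of a multi-stage flowsheet like the one above is the product of the individual stage recoveries. The short sketch below illustrates that bookkeeping; the stage recoveries and batch size are assumed placeholder values, with only the 23.6% Nd feed grade taken from the abstract.

```python
# Sketch: cumulative Nd recovery across sequential process stages.
# Stage recoveries and batch size below are illustrative assumptions.
feed_nd_fraction = 0.236          # 23.6 % Nd in the sintered waste (from the abstract)
waste_mass_g = 1000.0             # assumed batch size

stage_recovery = {
    "oxidative roasting + selective leaching": 0.96,
    "oxalate precipitation": 0.98,
    "calcination to Nd2O3": 0.99,
}

nd_in_g = waste_mass_g * feed_nd_fraction
recovered_g = nd_in_g
for stage, r in stage_recovery.items():
    recovered_g *= r
    print(f"{stage}: cumulative Nd recovered = {recovered_g:.1f} g")

print(f"Overall recovery = {100 * recovered_g / nd_in_g:.1f} %")
```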
Procedia PDF Downloads 177
506 Cytotoxicological Evaluation of a Folate Receptor Targeting Drug Delivery System Based on Cyclodextrins
Authors: Caroline Mendes, Mary McNamara, Orla Howe
Abstract:
For chemotherapy, a drug delivery system should be able to specifically target cancer cells and deliver the therapeutic dose without affecting normal cells. Folate receptors (FR) can be considered key targets since they are commonly over-expressed in cancer cells, and they are the molecular marker used in this study. Here, cyclodextrin (CD) has been studied as a vehicle for delivering the chemotherapeutic drug methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within the CD cavity. In this study, β-CD has been modified using folic acid so as to specifically target the FR molecular marker. Thus, the system studied here for drug delivery consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors, while the cellular uptake of antifolates, such as MTX, is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7 and A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From that, four cell lines with different levels of FRs were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry data demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments have also shown that levels of FR protein in CaCo-2 cells are high, while levels in SKOV-3, HeLa and MCF-7 cells are moderate. A549 and BEAS-2B cells express low levels of FR protein. FRs are highly expressed in all the cancer cell lines analysed when compared to the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays. Forty-eight hours of treatment with the free drug and the complex resulted in IC50 values of 93.9 µM ± 9.2 and 56.0 µM ± 4.0 for CaCo-2 for free MTX and CDEnFA:MTX, respectively, 118.2 µM ± 10.8 and 97.8 µM ± 12.3 for MCF-7, 36.4 µM ± 6.9 and 75.0 µM ± 8.5 for A549, and 132.6 µM ± 12.1 and 288.1 µM ± 16.3 for BEAS-2B. These results demonstrate that MTX is more toxic towards cell lines expressing low levels of FR, such as BEAS-2B. More importantly, these results demonstrate that the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high FR expressing CaCo-2 cells, indicating that it has the potential to target this receptor, enhancing the specificity and the efficiency of the drug.
Keywords: cyclodextrins, cancer treatment, drug delivery, folate receptors, reduced folate carriers
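Relative FR and RFC transcript levels measured by real-time PCR, as above, are commonly expressed with the 2^(-ΔΔCt) method. The sketch below shows that calculation with invented Ct values and an assumed housekeeping reference gene; it is illustrative only and does not reproduce the study's data or necessarily its exact quantification method.

```python
# Sketch: relative FR mRNA expression by the 2^(-ddCt) method.
# All Ct values below are invented for illustration.
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of the target gene versus a calibrator sample (e.g. BEAS-2B)."""
    d_ct_sample = ct_target - ct_reference            # normalize to housekeeping gene
    d_ct_calibrator = ct_target_cal - ct_reference_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: FR target gene and a GAPDH-like reference
fold_caco2 = relative_expression(ct_target=22.1, ct_reference=18.0,
                                 ct_target_cal=27.5, ct_reference_cal=18.2)
print(f"FR expression in CaCo-2 relative to BEAS-2B: {fold_caco2:.1f}-fold")
```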
Procedia PDF Downloads 301
505 A Hydrometallurgical Route for the Recovery of Molybdenum from Spent Mo-Co Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in making of steel alloy owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum has increased interest in the development of efficient processes aiming its recovery from secondary sources. Main secondary sources of Mo are molybdenum catalysts which are used for hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts get contaminated with toxic material and are dumped as waste which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both economic and environmental point of view. Recently ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102[trihexyl(tetradecyl)phosphonium bromide] as an extractant. Spent catalyst was leached with 3.0 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to extraction step. The effect of extractant concentration on the leach liquor was investigated and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2.0 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe- Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by countercurrent simulation studies. According to McCabe- Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O= 1:1. Around 95.4% extraction of molybdenum was achieved in two-stage counter current at A/O= 1:1 with the negligible extraction of Co and Al. However, iron was coextracted and removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5 %) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A=1:1. Overall ~95.0% molybdenum with 99 % purity was recovered from Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. XRD peaks of MoO₃ correspond to molybdite Syn-MoO₃ structure. FE-SEM depicts the rod-like morphology of synthesized MoO₃. EDX analysis of MoO₃ shows 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO₃ can find application in gas sensors, electrodes of batteries, display devices, smart windows, lubricants and as a catalyst.Keywords: cyphos Il 102, extraction, spent mo-co catalyst, recovery
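The stagewise behaviour summarized by a McCabe-Thiele construction can be approximated with the Kremser relation when the distribution ratio D is roughly constant. The sketch below is a generic illustration under that assumption; the value of D and the phase ratio are placeholders, not parameters fitted from this system.

```python
# Sketch: fraction of metal extracted after n counter-current stages using the
# Kremser relation, assuming a constant distribution ratio D. The values of D
# and the organic-to-aqueous ratio are illustrative placeholders.
def countercurrent_extraction(D, o_to_a, n_stages):
    E = D * o_to_a                      # extraction factor
    if abs(E - 1.0) < 1e-9:
        remaining = 1.0 / (n_stages + 1)
    else:
        remaining = (E - 1.0) / (E ** (n_stages + 1) - 1.0)
    return 1.0 - remaining              # fraction extracted into the organic phase

for n in (1, 2, 3):
    extracted = countercurrent_extraction(D=4.0, o_to_a=1.0, n_stages=n)
    print(f"{n} stage(s): {100 * extracted:.1f} % extracted")
```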
Procedia PDF Downloads 172
504 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature
Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi
Abstract:
The sintering step in powder metallurgy (P/M) processes is very sensitive as it determines to a large extent the properties of the final component produced. Spark plasma sintering over the past decade has been extensively used in consolidating a wide range of materials including metallic alloy powders. This novel, non-conventional sintering method has proven to be advantageous offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles over conventional sintering methods. Ti6Al4V has been adjudged the most widely used α+β alloy due to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries being a light metal alloy with the capacity for fuel efficiency needed in these industries. The P/M route has been a promising method for the fabrication of parts made from Ti6Al4V alloy due to its cost and material loss reductions and the ability to produce near net and intricate shapes. However, the use of this alloy has been largely limited owing to its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behaviors of spark plasma sintered Ti6Al4V powders was investigated in this present study. Sintering of the alloy powders was performed in the 650–850°C temperature range at a constant heating rate, applied pressure and holding time of 100°C/min, 50 MPa and 5 min, respectively. Density measurements were carried out according to Archimedes’ principle and microhardness tests were performed on sectioned as-polished surfaces at a load of 100gf and dwell time of 15 s. Dry sliding wear tests were performed at varied sliding loads of 5, 15, 25 and 35 N using the ball-on-disc tribometer configuration with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks were carried out using SEM and EDX techniques. The density and hardness characteristics of sintered samples increased with increasing sintering temperature. Near full densification (99.6% of the theoretical density) and Vickers’ micro-indentation hardness of 360 HV were attained at 850°C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined, except at 25 N indicating better mechanical properties at high sintering temperatures. Worn surface analyses showed the wear mechanism was a synergy of adhesive and abrasive wears, although the former was prevalent.Keywords: hardness, powder metallurgy, spark plasma sintering, wear
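Density measurement by Archimedes' principle, as used above, yields the bulk and relative (percent of theoretical) density from a dry and a suspended weighing. The sketch below shows the arithmetic with invented sample masses and a typical handbook density for Ti6Al4V; it is illustrative, not the study's data.

```python
# Sketch: bulk density by Archimedes' principle and relative density of a
# sintered Ti6Al4V sample. The sample masses are invented example values.
RHO_WATER = 0.9978          # g/cm^3 at roughly 22 degC
RHO_THEORETICAL = 4.43      # g/cm^3, typical handbook value for Ti6Al4V

mass_dry = 12.500           # g, dry sample in air
mass_suspended = 9.670      # g, apparent mass suspended in water

volume = (mass_dry - mass_suspended) / RHO_WATER        # displaced volume, cm^3
bulk_density = mass_dry / volume
relative_density = 100.0 * bulk_density / RHO_THEORETICAL

print(f"Bulk density: {bulk_density:.3f} g/cm^3")
print(f"Relative density: {relative_density:.1f} % of theoretical")
```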
Procedia PDF Downloads 273
503 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables
Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck
Abstract:
The importance of resource efficiency and environmental impact assessment has raised the interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or time-consuming calculation. For the interior wall area, no validated estimation method exists. However, in the case of environmental impact assessment or evaluating the existing building stock as future material banks, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for the estimation of the interior wall area for dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data are collected through onsite measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best R² k-fold (5) fit. The research shows that the building volume is by far the most important variable to estimate the interior wall area. A stepwise regression based on building volume per building, building typology, and type of house provides the best fit, with R² k-fold (5) = 0.88. Although the best R² k-fold value is obtained when the other parameters ‘building typology’ and ‘type of house’ are included, the contribution of these variables can be seen as statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based on only the volume of the building can also be applied (R² k-fold = 0.87). The robustness and precision of the method (output) are validated three times. Firstly, the prediction of the interior wall area is checked by means of alternative calculations of the building volume and of the interior wall area; thus, other definitions are applied to the same data. Secondly, the output is tested on an extension of the database, so it has the same definitions but on other data. Thirdly, the output is checked on an unrelated database with other definitions and other data. The validation of the estimation methods demonstrates that the methods remain accurate when underlying data are changed. The method can support environmental as well as economic dimensions of impact assessment, as it can be used in early design. As it allows the prediction of the amount of interior wall materials to be produced in the future or that might become available after demolition, the presented estimation method can be part of material flow analyses on input and on output.Keywords: buildings as material banks, building stock, estimation method, interior wall area
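A minimal sketch of the kind of model described above is given below: a linear regression of interior wall area on building volume plus the two categorical predictors, scored with 5-fold cross-validated R². The data file and column names are hypothetical, and plain ordinary least squares is used here rather than the authors' exact stepwise procedure.

```python
# Sketch: 5-fold cross-validated linear regression predicting interior wall area
# from building volume plus categorical predictors. The file "dwellings.csv" and
# all column names are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

df = pd.read_csv("dwellings.csv")
X = df[["building_volume_m3", "building_typology", "house_type"]]
y = df["interior_wall_area_m2"]

pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"),
      ["building_typology", "house_type"])],
    remainder="passthrough",            # keep building volume as a numeric feature
)
model = Pipeline([("prep", pre), ("ols", LinearRegression())])

scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"R2 (5-fold): {scores.mean():.2f} +/- {scores.std():.2f}")
```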
Procedia PDF Downloads 30
502 Impact of Alkaline Activator Composition and Precursor Types on Properties and Durability of Alkali-Activated Cements Mortars
Authors: Sebastiano Candamano, Antonio Iorfida, Patrizia Frontera, Anastasia Macario, Fortunato Crea
Abstract:
Alkali-activated materials are promising binders obtained by an alkaline attack on fly-ashes, metakaolin, blast slag among others. In order to guarantee the highest ecological and cost efficiency, a proper selection of precursors and alkaline activators has to be carried out. These choices deeply affect the microstructure, chemistry and performances of this class of materials. Even if, in the last years, several researches have been focused on mix designs and curing conditions, the lack of exhaustive activation models, standardized mix design and curing conditions and an insufficient investigation on shrinkage behavior, efflorescence, additives and durability prevent them from being perceived as an effective and reliable alternative to Portland. The aim of this study is to develop alkali-activated cements mortars containing high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and ashes obtained from the combustion process of forest biomass in thermal power plants. In particular, the experimental campaign was performed in two steps. In the first step, research was focused on elucidating how the workability, mechanical properties and shrinkage behavior of produced mortars are affected by the type and fraction of each precursor as well as by the composition of the activator solutions. In order to investigate the microstructures and reaction products, SEM and diffractometric analyses have been carried out. In the second step, their durability in harsh environments has been evaluated. Mortars obtained using only GGBFS as binder showed mechanical properties development and shrinkage behavior strictly dependent on SiO2/Na2O molar ratio of the activator solutions. Compressive strengths were in the range of 40-60 MPa after 28 days of curing at ambient temperature. Mortars obtained by partial replacement of GGBFS with metakaolin and forest biomass ash showed lower compressive strengths (≈35 MPa) and shrinkage values when higher amount of ashes were used. By varying the activator solutions and binder composition, compressive strength up to 70 MPa associated with shrinkage values of about 4200 microstrains were measured. Durability tests were conducted to assess the acid and thermal resistance of the different mortars. They all showed good resistance in a solution of 5%wt of H2SO4 also after 60 days of immersion, while they showed a decrease of mechanical properties in the range of 60-90% when exposed to thermal cycles up to 700°C.Keywords: alkali activated cement, biomass ash, durability, shrinkage, slag
Procedia PDF Downloads 325
501 Comparison between Conventional Bacterial and Algal-Bacterial Aerobic Granular Sludge Systems in the Treatment of Saline Wastewater
Authors: Philip Semaha, Zhongfang Lei, Ziwen Zhao, Sen Liu, Zhenya Zhang, Kazuya Shimizu
Abstract:
The increasing generation of saline wastewater through various industrial activities is becoming a global concern for activated sludge (AS) based biological treatment which is widely applied in wastewater treatment plants (WWTPs). As for the AS process, an increase in wastewater salinity has negative impact on its overall performance. The advent of conventional aerobic granular sludge (AGS) or bacterial AGS biotechnology has gained much attention because of its superior performance. The development of algal-bacterial AGS could enhance better nutrients removal, potentially reduce aeration cost through symbiotic algae-bacterial activity, and thus, can also reduce overall treatment cost. Nonetheless, the potential of salt stress to decrease biomass growth, microbial activity and nutrient removal exist. Up to the present, little information is available on saline wastewater treatment by algal-bacterial AGS. To the authors’ best knowledge, a comparison of the two AGS systems has not been done to evaluate nutrients removal capacity in the context of salinity increase. This study sought to figure out the impact of salinity on the algal-bacterial AGS system in comparison to bacterial AGS one, contributing to the application of AGS technology in the real world of saline wastewater treatment. In this study, the salt concentrations tested were 0 g/L, 1 g/L, 5 g/L, 10 g/L and 15 g/L of NaCl with 24-hr artificial illuminance of approximately 97.2 µmol m¯²s¯¹, and mature bacterial and algal-bacterial AGS were used for the operation of two identical sequencing batch reactors (SBRs) with a working volume of 0.9 L each, respectively. The results showed that salinity increase caused no apparent change in the color of bacterial AGS; while for algal-bacterial AGS, its color was progressively changed from green to dark green. A consequent increase in granule diameter and fluffiness was observed in the bacterial AGS reactor with the increase of salinity in comparison to a decrease in algal-bacterial AGS diameter. However, nitrite accumulation peaked from 1.0 mg/L and 0.4 mg/L at 1 g/L NaCl in the bacterial and algal-bacterial AGS systems, respectively to 9.8 mg/L in both systems when NaCl concentration varied from 5 g/L to 15 g/L. Almost no ammonia nitrogen was detected in the effluent except at 10 g/L NaCl concentration, where it averaged 4.2 mg/L and 2.4 mg/L, respectively, in the bacterial and algal-bacterial AGS systems. Nutrients removal in the algal-bacterial system was relatively higher than the bacterial AGS in terms of nitrogen and phosphorus removals. Nonetheless, the nutrient removal rate was almost 50% or lower. Results show that algal-bacterial AGS is more adaptable to salinity increase and could be more suitable for saline wastewater treatment. Optimization of operation conditions for algal-bacterial AGS system would be important to ensure its stably high efficiency in practice.Keywords: algal-bacterial aerobic granular sludge, bacterial aerobic granular sludge, Nutrients removal, saline wastewater, sequencing batch reactor
Procedia PDF Downloads 148
500 Role of Artificial Intelligence in Nano Proteomics
Authors: Mehrnaz Mostafavi
Abstract:
Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications affecting labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach in real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite difference time domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of temporal distributions of protein translocation dwell-times and the impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence
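The convolutional classifier referred to above can be prototyped compactly. The sketch below defines a small 1D CNN that maps an encoded tri-colour labelling trace to a protein class; the input length, channel layout, number of classes and architecture are arbitrary placeholder choices, not the network described in the paper.

```python
# Sketch: a small 1D CNN that maps an encoded tri-colour labelling trace to a
# protein identity. Input length, channel count and class count are placeholders.
import torch
import torch.nn as nn

NUM_CLASSES = 1000        # hypothetical proteome subset
TRACE_LEN = 512           # encoded signal length per molecule
IN_CHANNELS = 3           # one channel per amino-acid label colour

class ProteinIdCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(IN_CHANNELS, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, NUM_CLASSES)

    def forward(self, x):           # x: (batch, channels, length)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)   # unnormalized class scores

model = ProteinIdCNN()
dummy = torch.randn(4, IN_CHANNELS, TRACE_LEN)
print(model(dummy).shape)           # torch.Size([4, 1000])
```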
Procedia PDF Downloads 95
499 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model
Authors: A. Shakoor, M. Arshad
Abstract:
The utilization of groundwater resources in irrigation has significantly increased during the last two decades due to constrained canal water supplies. More than 70% of the farmers in the Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and hence an unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, a comprehensive study was carried out in central Punjab, Pakistan, regarding the spatiotemporal variation in groundwater level and quality. The Processing MODFLOW for Windows (PMWIN) and MT3D (solute transport) models were used for simulating existing conditions and predicting future groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in the PMWIN model development. The model was thus successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R²) and model efficiency (MEF) for the calibration and validation periods were calculated as 0.89 and 0.98, respectively, which indicates a high level of correlation between the calculated and measured data. For the solute transport model (MT3D), the values of advection and dispersion parameters were used. The model was run for a future scenario up to 2030, assuming that there would be no major change in climate and that the groundwater abstraction rate would increase gradually. The model-predicted results revealed that the groundwater level would decline by 0.0131 to 1.68 m/year during 2013 to 2030, and the maximum decline would be on the lower side of the study area, where the canal system infrastructure is sparse. This lowering of the groundwater level might cause an increase in tubewell installation and pumping costs. Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase by 6.88 to 69.88 mg/L/year during 2013 to 2030, and the maximum increase would be on the lower side. It was found that by 2030, good quality water would be reduced by 21.4%, while marginal and hazardous quality water would increase by 19.28% and 2%, respectively. It was found from the simulated results that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater quality deteriorated with the lowering of the water table, i.e., TDS increased with declining groundwater level. It is recommended that agronomic and engineering practices, i.e., land levelling, rainwater harvesting, skimming wells, ASR (aquifer storage and recovery) wells, etc., should be integrated to improve the management of groundwater for higher crop production in salt-affected soils.
Keywords: groundwater quality, groundwater management, PMWIN, MT3D model
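The calibration statistics quoted above can be computed from paired observed and simulated groundwater levels as in the sketch below. The arrays are invented examples, and the model efficiency (MEF) is assumed here to be the Nash-Sutcliffe efficiency.

```python
# Sketch: coefficient of determination (R^2) and Nash-Sutcliffe model efficiency
# (MEF) from observed vs simulated groundwater levels. The data are invented.
import numpy as np

observed = np.array([201.3, 200.8, 200.1, 199.6, 199.0, 198.4])   # m above datum
simulated = np.array([201.1, 200.9, 200.3, 199.4, 199.1, 198.2])

r = np.corrcoef(observed, simulated)[0, 1]
r_squared = r ** 2

mef = 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
    (observed - np.mean(observed)) ** 2
)

print(f"R^2 = {r_squared:.3f}, MEF = {mef:.3f}")
```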
Procedia PDF Downloads 378
498 Strategies for Arctic Greenhouse Farming: An Energy and Technology Survey of Greenhouse Farming in the North of Sweden
Authors: William Sigvardsson, Christoffer Alenius, Jenny Lindblom, Andreas Johansson, Marcus Sandberg
Abstract:
This article covers a study focusing on a subarctic greenhouse located in Nikkala, Sweden. Through a site visit and the creation of a CFD model, the study investigates the differences in energy demand between high pressure sodium (HPS) lights and light emitting diode (LED) lights, in combination with an air-carried and a water-carried heating system, respectively. Through an IDA ICE model, the impact of insulating the parts of the greenhouse without active cultivation was also investigated. The purpose was to compare the current system in the greenhouse to state-of-the-art alternatives and to evaluate whether an investment in a water-carried heating system in combination with LED lights, and in insulating the non-cultivating parts of the greenhouse, could be considered profitable. Operating a greenhouse in the harsh subarctic climate found in the northern parts of Sweden is not an easy task, especially if the operation is year-round. With an average temperature below -5 °C from November through January, efficient growing techniques are a must to ensure a profitable business. Today the most crucial parts of a greenhouse are the heating system, lighting system, dehumidifying measures, and thermal screen, and the impact of a poorly designed system in a subarctic climate can be devastating, as the margins are slim. The greenhouse studied uses a pellet burner to power its air-carried heating system. The simulations found that the resulting savings amounted to just under 14 800 SEK monthly, or 18 % of the total cost of energy, by implementing the water-carried heating system in combination with the LED lamps. Given this, a payback period of 3-9 years could be expected under different scenarios, including specific time periods, financial aid, and the resale price of the current system. The insulation of the non-cultivating parts of the greenhouse was found to offer possible savings of 25 300 SEK annually, or 46 % of the current heat demand, resulting in a payback period of just over 1-2 years. Given the possible energy savings, a reduction in emitted CO2 equivalents of almost 1.9 tonnes could be achieved annually. It was concluded that relatively inexpensive investments in modern greenhouse equipment could make a significant contribution to reducing the energy consumption of the greenhouse, resulting in a more competitive business environment for subarctic greenhouse owners. New parts of the greenhouse should be built with the water-carried heating system in combination with state-of-the-art LED lights, and all parts which are not housing active cultivation should be insulated. If the greenhouse in Nikkala is eligible for financial aid or finds a resale value in the current system, an investment should be made in a new water-carried heating system in combination with LED lights.
Keywords: energy efficiency, sub-arctic greenhouses, energy measures, greenhouse climate control, greenhouse technology, CFD
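The payback periods quoted above follow from a simple ratio of investment cost to annual savings. The sketch below reproduces that arithmetic; the savings figures are the ones given in the abstract, while the investment costs are illustrative assumptions only.

```python
# Sketch: simple (undiscounted) payback period for the two measures discussed.
# Investment costs are illustrative assumptions; savings come from the abstract.
def payback_years(investment_sek, annual_saving_sek):
    return investment_sek / annual_saving_sek

led_and_water_heating_saving = 14_800 * 12      # SEK per year (monthly figure x 12)
insulation_saving = 25_300                      # SEK per year

led_investment = 900_000                        # assumed net investment, SEK
insulation_investment = 40_000                  # assumed net investment, SEK

print(f"LED + water-carried heating: "
      f"{payback_years(led_investment, led_and_water_heating_saving):.1f} years")
print(f"Insulation of non-cultivating parts: "
      f"{payback_years(insulation_investment, insulation_saving):.1f} years")
```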
Procedia PDF Downloads 75
497 Customized Temperature Sensors for Sustainable Home Appliances
Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy
Abstract:
Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data such as the frequency of use of the machine, user preferences, and the compilation of critical data in terms of diagnostic processes for fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications. The operating temperature range of these sensors is between -70°C and 850°C, while the temperature range requirement in home appliance applications is between 23°C and 500°C. To ensure the operation of commercial sensors in this wide temperature range, usually, a platinum coating of approximately 1-micron thickness is applied to the wafer. However, the use of platinum in coating and the high coating thickness extends the sensor production process time and therefore increases sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and design parameters (length, width, trim points, and thin film deposition thickness) were optimized by using statistical methods to achieve the desired resistivity value. To develop thin film resistive temperature sensors, one side polished sapphire wafer was used. To enhance adhesion and insulation 100 nm silicon dioxide was coated by inductively coupled plasma chemical vapor deposition technique. The lithography process was performed by a direct laser writer. The lift-off process was performed after the e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point probe sheet resistance measurements were done at room temperature. The annealing process was performed. Resistivity measurements were done with a probe station before and after annealing at 600°C by using a rapid thermal processing machine. Temperature dependence between 25-300 °C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but can produce reliable data in the white goods application temperature range. A relatively simplified but optimized production method has also been developed to produce this sensor.Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency
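For a thin-film platinum resistive element such as the one described, the first-order resistance-temperature relation is R(T) = R0(1 + α(T - T0)). The sketch below converts between resistance and temperature and extracts a temperature coefficient of resistance (TCR) from two calibration points; R0 and α are generic platinum-like values, not the sensor's measured parameters.

```python
# Sketch: first-order resistance-temperature model for a thin-film Pt sensor.
# R0 and ALPHA are generic assumed values, not measured sensor characteristics.
R0 = 100.0        # ohm at the reference temperature T0
T0 = 25.0         # degC
ALPHA = 3.85e-3   # 1/degC, typical TCR for platinum films

def resistance_at(temperature_c):
    return R0 * (1.0 + ALPHA * (temperature_c - T0))

def temperature_from(resistance_ohm):
    return T0 + (resistance_ohm / R0 - 1.0) / ALPHA

print(f"R at 300 degC: {resistance_at(300):.1f} ohm")
print(f"T for 150 ohm: {temperature_from(150.0):.1f} degC")

# TCR extracted from two calibration points (R1, T1) and (R2, T2)
R1, T1, R2, T2 = 100.0, 25.0, 203.0, 300.0
tcr = (R2 - R1) / (R1 * (T2 - T1))
print(f"Extracted TCR: {tcr:.2e} 1/degC")
```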
Procedia PDF Downloads 73
496 The Sea Striker: The Relevance of Small Assets Using an Integrated Conception with Operational Performance Computations
Authors: Gaëtan Calvar, Christophe Bouvier, Alexis Blasselle
Abstract:
This paper presents the Sea Striker, a compact hydrofoil designed to address some of the issues raised by recent evolutions in naval missions, threats and operation theatres in modern warfare. Able to perform a wide range of operations, the Sea Striker is a 40-meter stealth surface combatant equipped with a gas turbine and aft and forward foils to reach high speeds. The Sea Striker's stealthiness is enabled by the combination of a composite structure, the exterior design, and the advanced integration of sensors. The ship is fitted with a powerful and adaptable combat system, ensuring a versatile and efficient response to modern threats. Lightly manned with a core crew of 10, this hydrofoil is highly automated and can be remotely piloted for special forces operations or transit. Such a ship is not new: it has been used in the past by different navies, for example by the US Navy with the USS Pegasus. Nevertheless, the recent evolutions in science and technology on the one hand, and the emergence of new missions, threats and operation theatres on the other, put forward its concept as an answer to today's operational challenges. Indeed, even if multiple opinions and analyses can be given regarding modern warfare and naval surface operations, general observations and tendencies can be drawn, such as the major increase in sensor and weapon types, ranges and, more generally, capacities; the emergence of new versatile and evolving threats and enemies, such as asymmetric groups, drone swarms or hypersonic missiles; and the growing number of operation theatres located in coastal and shallow waters. This research was performed through a complete study of the ship, supported by several operational performance computations, in order to justify the relevance of using ships like the Sea Striker in naval surface operations. For the selected scenarios, the conception process enabled the performance, namely a "Measure of Efficiency" in the NATO framework, to be computed for two different kinds of models: a centralized, classic model using large and powerful ships, and a distributed model relying on several Sea Strikers. After this stage, a comparison of the two models was performed. Lethal, agile, stealthy, compact and fitted with a complete set of sensors, the Sea Striker is a new major player in modern warfare and constitutes a very attractive option between the conventional naval unit and the combat helicopter, enabling high operational performance to be reached at a reduced cost.
Keywords: surface combatant, compact, hydrofoil, stealth, velocity, lethal
Procedia PDF Downloads 117
495 Effect of Reminiscence Therapy on the Sleep Quality of the Elderly Living in Nursing Homes
Authors: Güler Duru Aşiret
Abstract:
Introduction: Poor sleep quality is a common problem among the older people living in nursing homes. Our study aimed at assessing the effect of individual reminiscence therapy on the sleep quality of the elderly living in nursing homes. Methods: The study had 22 people in the intervention group and 24 people in the control group. The intervention group had reminiscence therapy once a week for 12 weeks in the form of individual sessions of 25-30 minutes. In our study, we first determined the dates suitable for the intervention group and researcher and planned the date and time of individual reminiscence therapies, which would take 12 weeks. While preparing this schedule, we considered subjects’ time schedules for their regular visits to health facilities and the arrival of their visitors. At this stage, the researcher informed the participants that their regular attendance in sessions would affect the intervention outcome. One topic was discussed every week. Weekly topics included: introduction in the first week; childhood and family life, school days, starting work and work life (a day at home for housewives), a fun day out of home, marriage (friendship for the singles), plants and animals they loved, babies and children, food and cooking, holidays and travelling, special days and celebrations, assessment and closure, in the following weeks respectively. The control group had no intervention. Study data was collected by using an introductory information form and the Pittsburgh Sleep Quality Index (PSQI). Results: In our study, participants’ average age was 76.02 ± 7.31. 58.7% of them were male and 84.8% were single. All of them had at least one chronic disease. 76.1% did not need help for performing their daily life activities. The length of stay in the institution was 6.32 ± 3.85 years. According to the participants’ descriptive characteristics, there was no difference between groups. While there was no statistically significant difference between the pretest PSQI median scores (p > 0.05) of both groups, PSQI median score had a statistically significant decrease after 12 weeks of reminiscence therapy (p < 0.05). There was no statistically significant change in the median scores of the subcomponents of sleep latency, sleep duration, sleep efficiency, sleep disturbance and use of sleep medication before and after reminiscence therapy. After the 12-weeks reminiscence therapy, there was a statistically significant change in the median scores for the PSQI subcomponents of subjective sleep quality (p<0.05). Conclusion: Our study found that reminiscence therapy increased the sleep quality of the elderly living in nursing homes. Acknowledgment: This study (project no 2017-037) was supported by the Scientific Research Projects Coordination Unit of Aksaray University. We thank the elderly subjects for their kind participation.Keywords: nursing, older people, reminiscence therapy, sleep
Procedia PDF Downloads 129494 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts get contaminated with toxic material; they are then dumped as waste, which leads to environmental issues. In this scenario, the recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in a two-stage counter-current process at A/O = 1:1 with negligible extraction of Co and Al. However, iron was coextracted and removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% of the molybdenum, with 99% purity, was recovered from the Mo-Co spent catalyst. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO3 correspond to the molybdite (syn-MoO3) structure. FE-SEM depicts the rod-like morphology of the synthesized MoO3. EDX analysis of MoO3 shows a 1:3 atomic ratio of molybdenum to oxygen. The synthesised MoO3 can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants and as a catalyst.Keywords: cyphos IL 102, extraction, Mo-Co spent catalyst, recovery
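A minimal sketch of the overall recovery arithmetic implied by the figures above: two-stage counter-current extraction (~95.4%) followed by two-stage stripping with 2.0 mol/L HNO3 (~99.5%). Only the two efficiencies are taken from the abstract; the multiplication is an illustrative consistency check, not part of the original study.

    extraction_eff = 0.954   # fraction of Mo transferred to the organic phase (two-stage, A/O = 1:1)
    stripping_eff  = 0.995   # fraction of loaded Mo recovered into the strip solution (two-stage, O/A = 1:1)

    overall = extraction_eff * stripping_eff
    print(f"Overall Mo recovery ≈ {overall:.1%}")   # ≈ 94.9%, consistent with the ~95.0% reported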
Procedia PDF Downloads 268493 Performance Analysis of Double Gate FinFET at Sub-10NM Node
Authors: Suruchi Saini, Hitender Kumar Tyagi
Abstract:
With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. When a device is scaled down, several short-channel effects occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry. FinFET is one of the most promising structures. The double-gate 2D Fin field effect transistor has the benefit of suppressing short-channel effects (SCE) and functions well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field effect transistor are determined at the 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, transconductance, leakage current and current on-off ratio. In this paper, the device performance is analyzed for different structural parameters. The Id-Vg curve is a robust tool for understanding field-effect transistors and holds significant importance in transistor modeling, circuit design, performance optimization, and quality control of electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg characteristics and transconductance is examined. Device performance was affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes. For every set of simulations, the device's characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider. Therefore, it is crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized at a channel width of 3 nm for a gate length of 10 nm, while the source and drain lengths have no significant effect on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. In this research, it is also concluded that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and 2380 S/m with a source and drain extension length of 5 nm.Keywords: current on-off ratio, FinFET, short-channel effects, transconductance
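A minimal sketch of how threshold voltage, transconductance and the on/off ratio can be extracted from an Id-Vg sweep such as the one analyzed above. The Id(Vg) values below are hypothetical placeholders for data exported from a device simulator such as MuGFET; only the extraction procedure is illustrated.

    import numpy as np

    vg  = np.linspace(0.0, 0.7, 15)                         # gate voltage sweep (V)
    id_ = 1e-9 + 5e-4 * np.maximum(vg - 0.25, 0.0)          # hypothetical Id(Vg), Vth placed at 0.25 V

    gm = np.gradient(id_, vg)                               # transconductance gm = dId/dVg
    i_on, i_off = id_[-1], id_[0]                           # on- and off-state currents
    k = np.argmax(gm)                                       # peak-gm point for linear extrapolation
    vth = vg[k] - id_[k] / gm[k]                            # extrapolated threshold voltage

    print(f"Vth ≈ {vth:.3f} V, peak gm ≈ {gm[k]:.2e} S, Ion/Ioff ≈ {i_on / i_off:.2e}")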
Procedia PDF Downloads 61492 Disclosure on Adherence of the King Code's Audit Committee Guidance: Cluster Analyses to Determine Strengths and Weaknesses
Authors: Philna Coetzee, Clara Msiza
Abstract:
In modern society, audit committees are seen as the custodians of accountability and the conscience of management and the board. But who holds the audit committee accountable for its actions or inaction, and how do we know what it is supposed to be doing and what it is actually doing? The purpose of this article is to provide greater insight into the latter part of this problem, namely, to determine what the best practices for audit committees are and what the disclosed realities show. In countries where governance is well established, the roles and responsibilities of the audit committee are mostly clearly guided by legislation and/or guidance documents, with countries increasingly providing guidance on this topic. With the high cost involved in adhering to governance guidelines, the public (for public organisations) and shareholders (for private organisations) expect to see the value of their ‘investment’. For audit committees, the dividends on the investment should be reflected in fewer fraudulent activities, less corruption, higher efficiency and effectiveness, improved social and environmental impact, and increased profits, to name a few. If this is not the case (which is reflected in the number of fraudulent activities in both the private and the public sector), stakeholders have the right to ask: where was the audit committee? Therefore, the objective of this article is to contribute to the body of knowledge by comparing the adherence of audit committees to the best practice guidelines stipulated in the King Report across publicly listed companies, national and provincial government departments, state-owned enterprises and local municipalities. After constructs were formed, based on the literature, factor analyses were conducted to reduce the number of variables in each construct. Thereafter, cluster analyses were conducted; cluster analysis is an explorative technique that classifies a set of objects in such a way that more similar objects are grouped together. The SPSS TwoStep Clustering Component was used, as it is capable of handling both continuous and categorical variables. In the first step, a pre-clustering procedure clusters the objects into small sub-clusters, after which it clusters these sub-clusters into the desired number of clusters. The cluster analyses were conducted for each construct, and the outcome measure, namely the audit opinion listed in the external audit report, was included. Analysing the information of 228 organisations, the results indicate a clear distinction between the four spheres of business included in the analyses, revealing certain strengths and certain weaknesses within each sphere. The results may provide the overseers of audit committees with insight into where a specific sector’s strengths and weaknesses lie. Audit committee chairs will be able to improve the areas where their audit committee is lagging behind. The strengthening of audit committees should result in an improvement of the accountability of boards, leading to less fraud and corruption.Keywords: audit committee disclosure, cluster analyses, governance best practices, strengths and weaknesses
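A minimal sketch of the construct-reduction and clustering workflow described above, using scikit-learn as a stand-in for the SPSS TwoStep component (which, unlike KMeans, natively handles mixed continuous and categorical variables). The adherence items, sphere labels and cluster count below are randomly generated placeholders, not the 228-organisation dataset.

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import FactorAnalysis
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    items = pd.DataFrame(rng.integers(0, 5, size=(228, 12)),              # hypothetical adherence items
                         columns=[f"item_{i}" for i in range(12)])
    sphere = rng.choice(["listed", "national", "SOE", "municipal"], 228)  # four spheres of business

    # Factor-analysis step: reduce the items to a few construct scores
    scores = FactorAnalysis(n_components=4, random_state=0).fit_transform(
        StandardScaler().fit_transform(items))

    # Clustering step: group organisations with similar adherence profiles
    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
    print(pd.crosstab(sphere, clusters))                                  # cluster membership per sphere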
Procedia PDF Downloads 167491 Inertial Particle Focusing Dynamics in Trapezoid Straight Microchannels: Application to Continuous Particle Filtration
Authors: Reza Moloudi, Steve Oh, Charles Chun Yang, Majid Ebrahimi Warkiani, May Win Naing
Abstract:
Inertial microfluidics has emerged recently as a promising tool for the high-throughput manipulation of particles and cells for a wide range of flow cytometric tasks, including cell separation/filtration, cell counting, and mechanical phenotyping. Inertial focusing is profoundly reliant on the cross-sectional shape of the channel, which affects not only the shear field but also the wall-effect lift force near the wall region. Despite comprehensive experiments and numerical analyses of the lift forces for rectangular and non-rectangular microchannels (half-circular and triangular cross-sections), which all possess planes of symmetry, less effort has been devoted to the 'flow field structure' of trapezoidal straight microchannels and its effect on inertial focusing. A straight channel with a trapezoidal cross-section, on the other hand, eliminates all planes of symmetry. In this study, the particle focusing dynamics inside trapezoid straight microchannels were first studied systematically for a broad range of channel Reynolds numbers (20 < Re < 800). The altered axial velocity profile, and consequently the new shear force arrangement, led to a cross-lateral movement of the equilibrium positions toward the longer side wall when the rectangular straight channel was changed to a trapezoid; however, the main lateral focusing position started to move back toward the middle and the shorter side wall as the channel Reynolds number further increased (Re > 50), depending on the particle clogging ratio (K = a/Hmin, where a is the particle size), the channel aspect ratio (AR = W/Hmin, where W is the channel width and Hmin is the smaller channel height), and the slope of the slanted wall. Increasing the channel aspect ratio (AR) from 2 to 4 and the slope of the slanted wall up to Tan(α) ≈ 0.4 (Tan(α) = (Hlonger-sidewall − Hshorter-sidewall)/W) shifted the off-center lateral focusing position away from the middle of the channel cross-section by up to ~20 percent of the channel width. It was found that the focusing point was spoiled near the slanted wall due to the asymmetry; particles mainly focused near the bottom wall or fluctuated between the channel center and the bottom wall, depending on the slanted wall slope and Re (Re < 100, channel aspect ratio 4:1). Finally, as a proof of principle, a trapezoidal straight microchannel with a bifurcation was designed and utilized for the continuous filtration of a broader range of particle clogging ratios (0.3 < K < 1), exiting through the longer-wall outlet with ~99% efficiency (Re < 100), in comparison to rectangular straight microchannels (W > H, 0.3 ≤ K < 0.5).Keywords: cell/particle sorting, filtration, inertial microfluidics, straight microchannel, trapezoid
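A minimal sketch of the dimensionless groups defined above for a trapezoid straight channel. The numerical inputs are hypothetical examples chosen to fall within the ranges quoted in the abstract; only the definitions of K, AR and Tan(α) come from the text.

    W     = 600e-6   # channel width (m), hypothetical
    H_min = 150e-6   # shorter side-wall height (m), hypothetical
    H_max = 390e-6   # longer side-wall height (m), hypothetical
    a     = 60e-6    # particle diameter (m), hypothetical

    AR        = W / H_min                  # channel aspect ratio            -> 4.0
    K         = a / H_min                  # particle clogging ratio         -> 0.40
    tan_alpha = (H_max - H_min) / W        # slope of the slanted wall       -> 0.40

    print(f"AR = {AR:.1f}, K = {K:.2f}, Tan(alpha) = {tan_alpha:.2f}")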
Procedia PDF Downloads 224490 Lightweight Sheet Molding Compound Composites by Coating Glass Fiber with Cellulose Nanocrystals
Authors: Amir Asadi, Karim Habib, Robert J. Moon, Kyriaki Kalaitzidou
Abstract:
There has been considerable interest in cellulose nanomaterials (CN) as reinforcement for polymers and polymer composites due to their high specific modulus and strength, low density and toxicity, and accessible hydroxyl side groups that can be readily chemically modified. The focus of this study is making lightweight composites for better fuel efficiency and lower CO2 emissions in the auto industry, with no compromise on mechanical performance, using a scalable technique that can be easily integrated into sheet molding compound (SMC) manufacturing lines. Lightweighting will be achieved by replacing part of the heavier components, i.e. glass fibers (GF), with a small amount of cellulose nanocrystals (CNC) in short GF/epoxy composites made using SMC. CNC will be introduced as a coating on the GF rovings prior to their use in the SMC line. The coating method employed is similar to the commonly used fiber sizing technique and thus can be easily scaled and integrated into industrial SMC lines. This is an alternative route to most techniques, which involve dispersing CN in the polymer matrix and in which nanomaterial agglomeration limits the capability for scaling up to industrial production. We have demonstrated that incorporating CNC as a coating on the GF surface by immersing the GF in CNC aqueous suspensions, a simple and scalable technique, increases the interfacial shear strength (IFSS) by ~69% compared to composites produced with uncoated GF, suggesting an enhancement of stress transfer across the GF/matrix interface. As a result of the IFSS enhancement, incorporation of 0.17 wt% CNC in the composite results in increases of ~10% in both elastic modulus and tensile strength, and of 40% and 43% in flexural modulus and strength, respectively. We have also determined that dispersing 1.4 and 2 wt% CNC in the epoxy matrix of short GF/epoxy SMC composites by sonication allows the removal of 10 wt% GF with no penalty on tensile and flexural properties, leading to 7.5% lighter composites. Although sonication is a scalable technique, it is not quite as simple and inexpensive as coating the GF by passing it through an aqueous suspension of CNC. In this study, the above findings are integrated to 1) investigate the effect of CNC content on mechanical properties by passing the GF rovings through CNC aqueous suspensions of various concentrations (0-5%) and 2) determine the optimum ratio of added CNC to removed GF to achieve the maximum possible weight reduction with no penalty on the mechanical performance of the SMC composites. The results of this study are of industrial relevance, providing a path toward producing high-volume, lightweight and mechanically enhanced SMC composites using cellulose nanomaterials.Keywords: cellulose nanocrystals, light weight polymer-matrix composites, mechanical properties, sheet molding compound (SMC)
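A minimal sketch of the mass-balance reasoning behind the quoted ~7.5% weight saving, assuming a hypothetical baseline SMC formulation; the abstract only states that 10 wt% GF is removed and 1.4-2 wt% CNC is added, so the baseline split and the exact saving below are illustrative, not study values.

    baseline = {"glass_fiber": 30.0, "epoxy": 70.0}   # hypothetical formulation, g per 100 g of composite

    removed_gf = 10.0    # g of GF removed per 100 g of baseline composite
    added_cnc  = 2.0     # g of CNC dispersed in the matrix (upper end of 1.4-2 wt%)

    new_mass = sum(baseline.values()) - removed_gf + added_cnc
    saving = 1.0 - new_mass / sum(baseline.values())
    print(f"Part mass saving ≈ {saving:.1%}")          # ≈ 8%, of the same order as the reported 7.5%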
Procedia PDF Downloads 225489 Preparation and Chemical Characterization of Eco-Friendly Activated Carbon Produced from Apricot Stones
Authors: Sabolč Pap, Srđana Kolaković, Jelena Radonić, Ivana Mihajlović, Dragan Adamović, Mirjana Vojinović Miloradov, Maja Turk Sekulić
Abstract:
Activated carbon is one of the most used and tested adsorbents in the removal of industrial organic compounds, heavy metals, pharmaceuticals and dyes. Different types of lignocellulosic materials were used as potential precursors in the production of low-cost activated carbon. There are two different processes for the preparation and production of activated carbon: physical and chemical. Chemical activation involves impregnating the lignocellulosic raw materials with chemical agents (H3PO4, HNO3, H2SO4 and NaOH). After impregnation, the materials are carbonized and washed to eliminate the residues. Chemical activation, which was used in this study, has two important advantages compared to physical activation. The first advantage is the lower temperature at which the process is conducted, and the second is that the yield (mass efficiency of activation) of chemical activation tends to be greater. Preparation of the activated carbon included the following steps: apricot stones were crushed in a mill and washed with distilled water. The fruit stones were then impregnated with a 50% H3PO4 solution. After impregnation, the solution was filtered to remove the residual acid. Subsequently, the impregnated samples were air-dried at room temperature. The samples were placed in a furnace and heated (10 °C/min) to the final carbonization temperature of 500 °C for 2 h without the use of nitrogen. After cooling, the adsorbent was washed with distilled water to achieve acid-free conditions, and its pH was monitored until the filtrate pH value exceeded 4. The chemical characteristics of the prepared activated carbon were analyzed by FTIR spectroscopy. FTIR spectra were recorded with a Thermo Nicolet Nexus 670 FTIR spectrometer over the 400 to 4000 cm-1 wavenumber range, identifying the functional groups on the surface of the activated carbon. The FTIR spectra of the adsorbent showed a broad band at 3405.91 cm-1 due to O–H stretching vibration and a peak at 489.00 cm-1 due to O–H bending vibration. Peaks in the range of 3700 to 3200 cm−1 represent overlapping stretching vibrations of O–H and N–H groups. The distinct absorption peaks at 2919.86 cm−1 and 2848.24 cm−1 could be assigned to -CH stretching vibrations of –CH2 and –CH3 functional groups. The absorption peak at 1566.38 cm−1 could be attributed to primary and secondary amide bands. The sharp band within 1164.76–987.86 cm−1 is attributed to C–O groups, which confirms the lignin structure of the activated carbon. The present study has shown that the activated carbons prepared from apricot stones have functional groups on their surface, which can positively affect the adsorption characteristics of this material.Keywords: activated carbon, FTIR, H3PO4, lignocellulosic raw materials
Procedia PDF Downloads 249488 Ultrasound Assisted Alkaline Potassium Permanganate Pre-Treatment of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Lignocellulose is the largest reservoir of inexpensive, renewable carbon. It is composed of lignin, cellulose and hemicellulose. Cellulose and hemicellulose are composed of the reducing sugars glucose and xylose and several other monosaccharides, which can be metabolised by microorganisms to produce value-added products such as biofuels, enzymes, amino acids, etc. Enzymatic treatment of lignocellulose leads to the release of monosaccharides such as glucose and xylose. However, factors such as the presence of lignin, crystalline cellulose, acetyl groups, pectin, etc. contribute to recalcitrance, restricting the effective enzymatic hydrolysis of cellulose and hemicellulose. In order to overcome these problems, pre-treatment of lignocellulose is generally carried out, which facilitates better degradation of the lignocellulose. A range of pre-treatment strategies is commonly employed, classified by mode of action as physical, chemical, biological and physico-chemical. However, existing pre-treatment strategies result in lower sugar yields and the formation of inhibitory compounds. In order to overcome these problems, we propose a novel pre-treatment that utilises the superior oxidising capacity of alkaline potassium permanganate, assisted by ultra-sonication, to break the covalent bonds in spent coffee waste and remove recalcitrant compounds such as lignin. The pre-treatment was conducted for 30 minutes using 2% (w/v) potassium permanganate at room temperature with a solid-to-liquid ratio of 1:10. The pre-treated spent coffee waste (SCW) was subjected to enzymatic hydrolysis using the enzymes cellulase and hemicellulase. Shake flask experiments were conducted with a working volume of 50 mL of buffer containing 1% substrate. The results showed that, after 24 hours, the novel pre-treatment strategy yielded 7 g/L of reducing sugar, compared to 3.71 g/L obtained from biomass that had undergone dilute acid hydrolysis. From the results obtained, it is fairly certain that ultrasonication assists the oxidation of recalcitrant components in lignocellulose by potassium permanganate. Enzyme hydrolysis studies suggest that ultrasound-assisted alkaline potassium permanganate pre-treatment is far superior to treatment with dilute acid. Furthermore, SEM, XRD and FTIR analyses were carried out to analyse the effect of the new pre-treatment strategy on the structure and crystallinity of the pre-treated spent coffee waste. This novel one-step pre-treatment strategy was implemented under mild conditions and exhibited high efficiency in the enzymatic hydrolysis of spent coffee waste. Further study and scale-up are in progress in order to realise future industrial applications.Keywords: spent coffee waste, alkaline potassium permanganate, ultra-sonication, physical characterisation
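A minimal sketch of the yield comparison reported above: reducing sugar released after 24 h of enzymatic hydrolysis for the two pre-treatments. The two concentrations are taken from the abstract; the relative-increase figure is an illustrative derived number.

    yield_kmno4_ultrasound = 7.00   # g/L, ultrasound-assisted alkaline KMnO4 pre-treatment
    yield_dilute_acid      = 3.71   # g/L, dilute acid hydrolysis

    improvement = (yield_kmno4_ultrasound - yield_dilute_acid) / yield_dilute_acid
    print(f"Relative increase in reducing sugar ≈ {improvement:.0%}")   # ≈ 89%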
Procedia PDF Downloads 357487 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites
Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan
Abstract:
All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution and causes a change in momentum, which can manifest in spinning and non-spinning satellites in different ways. This problem can cause orbital decay in satellites, which, if not corrected, will interfere with their primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used. This is because each of the components has different power requirements, and they are used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating for the satellite. The way in which it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any facing away will radiate heat (a net loss). We can use the state-of-the-art foldable hybrid insulator/radiator panel. When the panels are opened, that particular side acts as a radiator for dissipating the heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside and radiator fins inside). Each insulated side panel can be opened and closed using actuators depending on the telemetry data of the CubeSat. The opening and closing of the panels depend on a special code designed for this particular application, where the computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees. For example, if the panels open 180 degrees, the solar panels will directly face the Sun, in turn increasing the current generated by that particular panel. One example is when one of the corners of the CubeSat faces the Sun, so that more than one side has a considerable amount of sunlight incident on it. Then the code will analyze the optimum opening angle for each panel and adjust accordingly. Another means of cooling is passive cooling. It is the most suitable approach for a CubeSat because of its limited power budget, low mass requirements, and less complex design. It also has advantages in terms of reliability and cost. One of the passive means is to make the whole chassis act as a heat sink. For this, the entire chassis can be made out of heat pipes, and the heat source connected to it with a thermal strap that transfers the heat to the chassis.Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite
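A minimal sketch (hypothetical, not the flight code described above) of the decision logic the abstract outlines: from a body-frame Sun vector, open each insulated side panel in proportion to how directly that face sees the Sun. The face set, proportional law and function names are assumptions for illustration only.

    import numpy as np

    FACE_NORMALS = {                      # body-frame outward normals of four side panels
        "+X": np.array([1, 0, 0]), "-X": np.array([-1, 0, 0]),
        "+Y": np.array([0, 1, 0]), "-Y": np.array([0, -1, 0]),
    }

    def panel_angles(sun_vec_body, max_angle_deg=180.0):
        """Return a hypothetical opening angle per panel from the body-frame Sun vector."""
        s = sun_vec_body / np.linalg.norm(sun_vec_body)
        angles = {}
        for face, n in FACE_NORMALS.items():
            illumination = max(float(np.dot(n, s)), 0.0)   # 0 if the face looks away from the Sun
            angles[face] = max_angle_deg * illumination    # fully open when facing the Sun head-on
        return angles

    # Sun near the +X/+Y corner: the code opens both of those panels partially
    print(panel_angles(np.array([0.7, 0.7, 0.1])))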
Procedia PDF Downloads 100486 Co-pyrolysis of Sludge and Kaolin/Zeolite to Stabilize Heavy Metals
Authors: Qian Li, Zhaoping Zhong
Abstract:
Sewage sludge, a typical solid waste, has inevitably been produced in enormous quantities in China. Worse still, the amount of sewage sludge produced has been increasing due to rapid economic development and urbanization. Compared to conventional methods of treating sewage sludge, pyrolysis is considered an economical and ecological technology because it can significantly reduce the sludge volume, completely kill pathogens, and produce valuable solid, gas, and liquid products. However, the large-scale utilization of sludge biochar has been limited due to the considerable risk posed by heavy metals in the sludge. Heavy metals enriched in pyrolytic biochar can be divided into exchangeable, reducible, oxidizable, and residual forms. The residual form of heavy metals is the most stable and cannot be used by organisms. Kaolin and zeolite are environmentally friendly inorganic minerals with a high surface area and heat resistance, so they exhibit enormous potential to immobilize heavy metals. In order to reduce the risk of heavy metal leaching from the pyrolysis biochar, this study pyrolyzed sewage sludge mixed with kaolin/zeolite in a small rotary kiln. The influences of the additives and the pyrolysis temperature on the leaching concentration and morphological transformation of heavy metals in the pyrolysis biochar were investigated. The potential mechanism of stabilizing heavy metals in the co-pyrolysis of sludge blended with kaolin/zeolite was explained by scanning electron microscopy, X-ray diffraction, and specific surface area and porosity analysis. The European Community Bureau of Reference (BCR) sequential extraction procedure was applied to analyze the forms of heavy metals in the sludge and the pyrolysis biochar. All the concentrations of heavy metals were examined by flame atomic absorption spectrophotometry. Compared with the proportions of heavy metals associated with the F4 fraction in pyrolytic carbon prepared without additives, those in the carbon obtained by co-pyrolysis of sludge and kaolin/zeolite increased. Increasing the additive dosage could improve the proportions of the stable fraction of various heavy metals in the biochar. Kaolin was more effective at stabilizing heavy metals than zeolite. Aluminosilicate additives with excellent adsorption performance could capture more of the released heavy metals during sludge pyrolysis. The heavy metal ions would then react with the oxygen ions of the additives to form silicates and aluminates, converting heavy metals from unstable fractions (sulfate, chloride, etc.) to stable fractions (silicate, aluminate, etc.). This study reveals that the efficiency of stabilizing heavy metals depends on the formation of stable mineral compounds containing heavy metals in the pyrolysis biochar.Keywords: co-pyrolysis, heavy metals, immobilization mechanism, sewage sludge
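A minimal sketch of how the fraction shares from the BCR sequential extraction described above can be computed for one metal. The concentrations below are hypothetical placeholders for measured exchangeable (F1), reducible (F2), oxidizable (F3) and residual (F4) fractions; only the four-fraction scheme comes from the abstract.

    fractions_mg_kg = {"F1": 12.0, "F2": 30.0, "F3": 58.0, "F4": 140.0}   # hypothetical values for one metal in biochar

    total = sum(fractions_mg_kg.values())
    shares = {k: v / total for k, v in fractions_mg_kg.items()}
    print({k: f"{v:.1%}" for k, v in shares.items()})
    print(f"Stable (residual, F4) share ≈ {shares['F4']:.1%}")   # a higher F4 share implies lower leaching risk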
Procedia PDF Downloads 66485 Ballistic Performance of Magnesia Panels and Modular Wall Systems
Authors: Khin Thandar Soe, Mark Stephen Pulham
Abstract:
Ballistic building materials play a crucial role in ensuring the safety of the occupants within protective structures. Traditional options like Ordinary Portland Cement (OPC)-based walls, including reinforced concrete walls, precast concrete walls, masonry walls, and concrete blocks, are frequently employed for ballistic protection, but they have several drawbacks such as being thick, heavy, costly, and challenging to construct. On the other hand, glass and composite materials offer lightweight and easier-to-construct alternatives, but they come with a high price tag. There has been no reported test data on magnesium-based ballistic wall panels or modular wall systems so far. This paper presents groundbreaking small arms test data related to the development of the world’s first magnesia cement ballistic wall panels and modular wall system. Non-hydraulic magnesia cement exhibits several superior properties, such as lighter weight, flexibility, and better acoustic and fire performance, compared to traditional Portland cement. However, magnesia cement is hydrophilic and may degrade in prolonged contact with water. In this research, a modified magnesia cement from UBIQ Technology, formulated for water resistance and durability, is applied. The specimens are made of the modified magnesia cement formula and were prepared in the laboratory of UBIQ Technology Pty Ltd. The specimens vary in thickness, and the tests cover various small arms threats in compliance with standards AS/NZS2343 and UL752 and are performed up to the maximum threat levels of Classification R2 (NATO) and UL Level 8 (NATO) by the accredited test centre BMT (Ballistic and Mechanical Testing, VIC, Australia). In addition, the results of tests in which the specimens were subjected to the impact of a small 12 mm diameter steel ball projectile fired from a gas gun are also presented and discussed in this paper. Gas gun tests were performed at UNSW@ADFA, Canberra, Australia. The test results of the magnesia panels and wall systems are compared with those of concrete and other wall panels documented in the literature. The conclusion drawn is that magnesia panels and wall systems exhibit several advantages over traditional OPC-based wall systems, including being lighter, thinner, and easier to construct, all while providing equivalent protection against threats. This makes magnesia cement-based materials a compelling choice for applications where efficiency and performance are critical to creating a protective environment.Keywords: ballistics, small arms, gas gun, projectile, impact, wall panels, modular, magnesia cement
Procedia PDF Downloads 76484 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System
Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal
Abstract:
The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and due to the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model is made up of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the airfoil S1091 has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and forces (Fd) are obtained by employing CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients during the simulation. Employing response surface methodology, a statistical approximation, the case under study is parametrized, considering the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), the Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA were determined. Using these values, the terminal speed at each position is calculated considering the corresponding Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so the minimum distance for the entire deceleration stage that does not compromise the payload could be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag is close to that generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it can be utilized for several missions, allowing the repeatability of microgravity experiments.Keywords: microgravity effect, response surface, terminal speed, unmanned system
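A minimal sketch of the post-processing chain described above: a 2nd-degree polynomial response surface for Cd(AoA) fitted to CFD design points, and the terminal speed it implies from the drag-equals-weight balance 0.5·ρ·v²·Cd·A = m·g. The design-point values, vehicle mass and reference area are hypothetical; only Cd_max ≈ 1.18 is taken from the abstract.

    import numpy as np

    aoa_deg = np.array([2, 20, 40, 60, 80])                # CCD design points (deg), hypothetical spacing
    cd      = np.array([0.10, 0.35, 0.72, 1.02, 1.18])     # hypothetical CFD drag coefficients

    p = np.polyfit(aoa_deg, cd, 2)                         # 2nd-degree response surface Cd(AoA)
    cd_at = lambda a: np.polyval(p, a)

    def terminal_speed(cd_value, mass=4.0, area=0.30, rho=1.225, g=9.81):
        """Terminal speed from drag = weight (hypothetical mass and reference area)."""
        return np.sqrt(2.0 * mass * g / (rho * cd_value * area))

    for a in (20, 50, 80):
        print(f"AoA {a:2d} deg: Cd ≈ {cd_at(a):.2f}, v_t ≈ {terminal_speed(cd_at(a)):.1f} m/s")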
Procedia PDF Downloads 173