Search results for: skinfold measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2835

345 Influence of Controlled Retting on the Quality of the Hemp Fibres Harvested at the Seed Maturity by Using a Designed Lab-Scale Pilot Unit

Authors: Brahim Mazian, Anne Bergeret, Jean-Charles Benezet, Sandrine Bayle, Luc Malhautier

Abstract:

Hemp fibres are increasingly used as reinforcements in polymer matrix composites due to their competitive performance (low density, mechanical properties and biodegradability) compared to conventional fibres such as glass fibres. However, the large variation in their biochemical, physical and mechanical properties limits the use of these natural fibres in structural applications where high consistency and homogeneity are required. In the hemp industry, a traditional process termed field retting is commonly used to facilitate the extraction and separation of the stem fibres. This retting treatment consists of spreading out the stems on the ground for a duration ranging from a few days to several weeks. Microorganisms (fungi and bacteria) grow on the stem surface and produce enzymes that degrade the pectic substances in the middle lamellae surrounding the fibres. This operation depends on the weather conditions and is currently carried out very empirically in the fields, resulting in large variability in hemp fibre quality (mechanical properties, color, morphology, chemical composition, etc.). Nonetheless, if controlled, retting might favor good properties of hemp fibres and, in turn, of hemp-fibre-reinforced composites. Therefore, the present study aims to investigate the influence of controlled retting within a designed environmental chamber (lab-scale pilot unit) on the quality of hemp fibres harvested at the seed maturity growth stage. Various assessments were applied directly to the fibres: color observations, morphological (optical microscopy), surface (ESEM) and biochemical (gravimetry) analyses, spectrocolorimetric measurements (pectin content), thermogravimetric analysis (TGA) and tensile testing. The results reveal that controlled retting leads to a rapid change of color from yellow to dark grey due to the development of microbial communities (fungi and bacteria) at the stem surface. An increase in the thermal stability of the fibres due to the removal of non-cellulosic components along retting is also observed. A separation of bast fibres into elementary fibres occurred, with an evolution of the chemical composition (degradation of pectins) and a rapid decrease in tensile properties (380 MPa to 170 MPa after 3 weeks) due to the accelerated retting process. The influence of controlled retting on the properties of the biocomposite material (PP/hemp fibres) is under investigation.

Keywords: controlled retting, hemp fibre, mechanical properties, thermal stability

Procedia PDF Downloads 155
344 Multilevel Factors Affecting Optimal Adherence to Antiretroviral Therapy and Viral Suppression amongst HIV-Infected Prisoners in South Ethiopia: A Prospective Cohort Study

Authors: Terefe Fuge, George Tsourtos, Emma Miller

Abstract:

Objectives: Maintaining optimal adherence and viral suppression in people living with HIV (PLWHA) is essential to ensure both the preventative and therapeutic benefits of antiretroviral therapy (ART). Prisoners bear a particularly high burden of HIV infection and are highly likely to transmit it to others during and after incarceration. However, the levels of adherence and viral suppression, as well as their associated factors, in incarcerated populations in low-income countries are unknown. This study aimed to determine the prevalence of non-adherence and viral failure, and their contributing factors, amongst prisoners in South Ethiopia. Methods: A prospective cohort study was conducted between June 1, 2019 and July 31, 2020 to compare the levels of adherence and viral suppression between incarcerated and non-incarcerated PLWHA. The study involved 74 inmates living with HIV (ILWHA) and 296 non-incarcerated PLWHA. Background information including sociodemographic, socioeconomic, psychosocial, behavioural, and incarceration-related characteristics was collected using a structured questionnaire. Adherence was determined based on participants’ self-report and pharmacy refill records, and plasma viral load measurements undertaken within the study period were prospectively extracted to determine viral suppression. Various univariate and multivariate regression models were used to analyse the data. Results: Self-reported dose adherence was approximately similar between ILWHA and non-incarcerated PLWHA (81% and 83%, respectively), but ILWHA had a significantly higher medication possession ratio (MPR) (89% vs 75%). The prevalence of viral failure (VF) was slightly higher (6%) in ILWHA compared to non-incarcerated PLWHA (4.4%). Overall dose non-adherence (NA) was significantly associated with missing ART appointments, the level of satisfaction with ART services, the patient’s ability to comply with a specified medication schedule and the type of method used to monitor the schedule. In ILWHA specifically, accessing ART services from a hospital rather than a health centre, an inability to always attend clinic appointments, experience of depression and a lack of social support predicted NA. VF was significantly higher in males, people aged 31-35 years and those who experienced social stigma, regardless of their incarceration status. Conclusions: This study revealed that HIV-infected prisoners in South Ethiopia were more likely to be non-adherent to doses and thus to develop viral failure compared to their non-incarcerated counterparts. A multitude of factors was found to be responsible for this, requiring multilevel intervention strategies focusing on the specific needs of prisoners.
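
As a quick illustration of how the adherence measures above are typically derived from pharmacy refill records, the minimal sketch below computes a medication possession ratio (MPR) and a self-reported dose adherence percentage; the function names, example numbers and the 95% "optimal adherence" cut-off are illustrative assumptions, not values taken from the study.

# Minimal sketch (illustrative assumptions): derive adherence measures
# from pharmacy refill records and self-reported missed doses.

def medication_possession_ratio(days_supplied: int, days_in_period: int) -> float:
    """MPR = total days of medication supplied / days in the observation period."""
    return min(days_supplied / days_in_period, 1.0)  # commonly capped at 100%

def self_reported_adherence(doses_taken: int, doses_prescribed: int) -> float:
    """Fraction of prescribed doses the participant reports having taken."""
    return doses_taken / doses_prescribed

# Hypothetical participant record
mpr = medication_possession_ratio(days_supplied=328, days_in_period=365)
dose_adherence = self_reported_adherence(doses_taken=340, doses_prescribed=365)
optimal = mpr >= 0.95 and dose_adherence >= 0.95  # assumed 95% threshold for "optimal"
print(f"MPR = {mpr:.0%}, dose adherence = {dose_adherence:.0%}, optimal = {optimal}")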

Keywords: adherence, antiretroviral therapy, incarceration, South Ethiopia, viral suppression

Procedia PDF Downloads 135
343 A Concept in Addressing the Singularity of the Emerging Universe

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times has been studied; this is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the structure of the universe at a large scale. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one can expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations of this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as “base energy”, capable of converting into other states. Although it follows the same principles, the unique quanta state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, which is discussed in detail. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation

Procedia PDF Downloads 89
342 Synthesis and Characterizations of Lead-free BaO-Doped TeZnCaB Glass Systems for Radiation Shielding Applications

Authors: Rezaul K. Sk., Mohammad Ashiq, Avinash K. Srivastava

Abstract:

The use of radiation shielding technology, ranging from EMI to high-energy gamma rays, in various areas such as electronic devices, medical science, defense, nuclear power plants and medical diagnostics is increasing all over the world. However, exposure to different radiations such as X-rays, gamma rays, neutrons and EMI above the permissible limits is harmful to living beings, the environment and sensitive laboratory equipment. In order to solve this problem, there is a need to develop effective radiation shielding materials. Conventionally, lead and lead-based materials are used in making shielding materials, as lead is cheap, dense and provides very effective shielding against radiation. However, the problem associated with the use of lead is its toxic and carcinogenic nature. So, to overcome these drawbacks, there is a great need for lead-free radiation shielding materials that are also economically sustainable. Therefore, it is necessary to look for the synthesis of radiation-shielding glass by using other heavy metal oxides (HMO) instead of lead. The lead-free BaO-doped TeZnCaB glass systems have been synthesized by the traditional melt-quenching method. X-ray diffraction analysis confirmed the glassy nature of the synthesized samples. The densities of the developed glass samples increased with the BaO concentration, ranging from 4.292 to 4.725 g/cm3. The vibrational and bending modes of the BaO-doped glass samples were analyzed by Raman spectroscopy, and FTIR (Fourier-transform infrared spectroscopy) was performed to study the functional groups present in the samples. UV-visible characterization revealed the significance of optical parameters such as Urbach’s energy, refractive index and optical energy band gap. The indirect and direct energy band gaps decreased with the BaO concentration, whereas the refractive index increased. X-ray attenuation measurements were performed to determine the radiation shielding parameters such as the linear attenuation coefficient (LAC), mass attenuation coefficient (MAC), half value layer (HVL), tenth value layer (TVL), mean free path (MFP), attenuation factor (Att%) and lead equivalent thickness of the lead-free BaO-doped TeZnCaB glass system. It was observed that the radiation shielding characteristics were enhanced with the addition of BaO content in the TeZnCaB glass samples. The glass samples with higher contents of BaO had the best attenuation performance. So, it could be concluded that the addition of BaO into TeZnCaB glass samples is an effective way to improve the radiation shielding performance of the glass samples. The best lead equivalent thickness was 2.626 mm, and these glasses could be good materials for medical diagnostics applications.
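
For context, the shielding parameters listed above are all standard quantities derived from the linear attenuation coefficient \(\mu\) of the glass; the relations below are the usual textbook definitions rather than values specific to this work:

\[
I = I_0\, e^{-\mu x}, \qquad
\mathrm{MAC} = \frac{\mu}{\rho}, \qquad
\mathrm{HVL} = \frac{\ln 2}{\mu}, \qquad
\mathrm{TVL} = \frac{\ln 10}{\mu}, \qquad
\mathrm{MFP} = \frac{1}{\mu},
\]

where \(I_0\) and \(I\) are the incident and transmitted intensities, \(x\) is the glass thickness and \(\rho\) its density; the lead equivalent thickness is the thickness of pure lead that produces the same attenuation as the glass sample at a given photon energy.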

Keywords: heavy metal oxides, lead-free, melt-quenching method, x-ray attenuation

Procedia PDF Downloads 31
341 Gas-Phase Nondestructive and Environmentally Friendly Covalent Functionalization of Graphene Oxide Paper with Amines

Authors: Natalia Alzate-Carvajal, Diego A. Acevedo-Guzman, Victor Meza-Laguna, Mario H. Farias, Luis A. Perez-Rey, Edgar Abarca-Morales, Victor A. Garcia-Ramirez, Vladimir A. Basiuk, Elena V. Basiuk

Abstract:

Direct covalent functionalization of prefabricated free-standing graphene oxide paper (GOP) is considered the only approach suitable for systematic tuning of the thermal, mechanical and electronic characteristics of this important class of carbon nanomaterials. At the same time, the traditional liquid-phase functionalization protocols can compromise the physical integrity of the paper-like material up to its total disintegration. To avoid such undesirable effects, we explored the possibility of employing an alternative, solvent-free strategy for facile and nondestructive functionalization of GOP with two representative aliphatic amines, 1-octadecylamine (ODA) and 1,12-diaminododecane (DAD), as well as with two aromatic amines, 1-aminopyrene (AP) and 1,5-diaminonaphthalene (DAN). The functionalization was performed under moderate heating at 150-180 °C in vacuum. Under such conditions, it proceeds through both amidation and epoxy ring opening reactions. Comparative characterization of pristine and amine-functionalized GOP mats was carried out by using Fourier-transform infrared, Raman, and X-ray photoelectron spectroscopy (XPS), thermogravimetric (TGA) and differential thermal analysis, and scanning electron and atomic force microscopy (SEM and AFM, respectively). Besides that, we compared the stability in water, wettability, electrical conductivity and elastic (Young's) modulus of GOP mats before and after amine functionalization. The highest content of organic species was obtained in the case of GOP-ODA, followed by the GOP-DAD, GOP-AP and GOP-DAN samples. The covalent functionalization increased the mechanical and thermal stability of GOP, as well as its electrical conductivity. The magnitude of each effect depends on the particular chemical structure of the amine employed, which allows for tuning a given GOP property. Morphological characterization using SEM showed that, compared to pristine graphene oxide paper, amine-modified GOP mats became relatively ordered layered assemblies, in which individual GO sheets are organized in a near-parallel pattern. Financial support from the National Autonomous University of Mexico (grants DGAPA-IN101118 and IN200516) and from the National Council of Science and Technology of Mexico (CONACYT, grant 250655) is greatly appreciated. The authors also thank David A. Domínguez (CNyN of UNAM) for XPS measurements and Dr. Edgar Alvarez-Zauco (Faculty of Science of UNAM) for the opportunity to use the TGA equipment.

Keywords: amines, covalent functionalization, gas-phase, graphene oxide paper

Procedia PDF Downloads 181
340 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour

Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

With Mumbai traditionally being the epicenter of India's trade and commerce, the existing major ports such as Mumbai Port and Jawaharlal Nehru Port (JN) situated in the Thane estuary are also developing their waterfront facilities. Various developments over the passage of decades in this region have changed the tidal flux entering/leaving the estuary. The intake at Pir-Pau is facing the problem of shortage of water in view of the advancement of the shoreline, while the jetty near Ulwe faces the problem of ship scheduling due to the existence of shallower depths between JN Port and Ulwe Bunder. In order to solve these problems, it is inevitable to have information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair; artificial intelligence was therefore applied to predict water levels by training a network on the measured tide data for one lunar tidal cycle. A two-layered feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi for a period of one lunar tidal cycle (2013) was used to train, validate and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau was predicted by using the neural network trained with the help of the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveals the following: the measured tidal data at Pir-Pau, Vashi and Ulwe indicate that there is a maximum amplification of the tide by about 10-20 cm, with a phase lag of 10-20 minutes, with reference to the tide at Apollo Bunder (Mumbai). The LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer. The tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, which helps to plan the pumping operation at Pir-Pau and improve ship scheduling at Ulwe.
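
As an illustration of the kind of model described above, the sketch below fits a small feed-forward network to hourly tide observations using scikit-learn; the LBFGS solver stands in for the Levenberg-Marquardt training used in the paper (scikit-learn does not provide LM), and the file name, column names and lag-based input features are assumptions made for the sketch, not the authors' exact setup.

# Minimal sketch: feed-forward ANN for tide-level prediction (illustrative only).
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical hourly tide record with columns "time" (datetime) and "level_m".
tides = pd.read_csv("apollo_bunder_2013.csv", parse_dates=["time"])

# Use the previous 24 hourly levels as inputs to predict the next hour.
lags = 24
X = np.column_stack([tides["level_m"].shift(k) for k in range(1, lags + 1)])[lags:]
y = tides["level_m"].values[lags:]

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

ann = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
ann.fit(X_train, y_train)

r = np.corrcoef(ann.predict(X_test), y_test)[0, 1]
print(f"correlation coefficient R = {r:.3f}")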

Keywords: artificial neural network, back-propagation, tide data, training algorithm

Procedia PDF Downloads 483
339 Influence of Microparticles in the Contact Region of Quartz Sand Grains: A Micro-Mechanical Experimental Study

Authors: Sathwik Sarvadevabhatla Kasyap, Kostas Senetakis

Abstract:

The mechanical behavior of geological materials is very complex, and this complexity is related to the discrete nature of soils and rocks. Characteristics of a material at the grain scale, such as particle size and shape, surface roughness and morphology, and the particle contact interface, are critical to evaluate in order to better understand the behavior of discrete materials. This study investigates experimentally the micro-mechanical behavior of quartz sand grains with emphasis on the influence of the presence of microparticles in their contact region. The outputs of the study provide some fundamental insights into the contact mechanics behavior of artificially coated grains and can provide useful input parameters for the discrete element modeling (DEM) of soils. In nature, microparticles are commonly observed at the contact interfaces between real soil grains. This is usually the case in sand-silt and sand-clay mixtures, where the finer particles may create a coating on the surface of the coarser grains, altering in this way the micro-, and thus the macro-scale, response of geological materials. In this study, the micro-mechanical behavior of Leighton Buzzard Sand (LBS) quartz grains, with the interference of different microparticles at their contact interfaces, is studied in the laboratory using an advanced custom-built inter-particle loading apparatus. Special techniques were adopted to develop the coating on the surfaces of the quartz sand grains so as to establish the repeatability of the coating technique. The characterization of the microstructure of the coated particle surfaces was based on element composition analyses, microscopic images, surface roughness measurements, and single particle crushing strength tests. The mechanical responses, such as normal and tangential load – displacement behavior, tangential stiffness behavior, and normal contact behavior under cyclic loading, were studied. The behavior of the coated LBS particles is compared among the different classes and with pure LBS (i.e. surfaces cleaned to remove any microparticles). The damage on the surface of the particles was analyzed using microscopic images. Extended displacements in both the normal and tangential directions were observed for the coated LBS particles due to the plastic nature of the coating material, and this varied with the amount of coating. The tangential displacement required to reach steady state was delayed due to the presence of microparticles in the contact region of the grains under shearing. Increased tangential loads and coefficients of friction were observed for the coated grains in comparison to the uncoated quartz grains.

Keywords: contact interface, microparticles, micro-mechanical behavior, quartz sand

Procedia PDF Downloads 192
338 Sintering of YNbO3:Eu3+ Compound: Correlation between Luminescence and Spark Plasma Sintering Effect

Authors: Veronique Jubera, Ka-Young Kim, U-Chan Chung, Amelie Veillere, Jean-Marc Heintz

Abstract:

Emitting materials and all-solid-state lasers are widely used in the field of optical applications and materials science as excitation sources and for instrumental measurements, medical applications, metal shaping, etc. Recently, promising optical efficiencies were recorded on ceramics, which result from a cheaper and faster way to obtain crystallized materials. The choice and optimization of the sintering process is the key point in fabricating transparent ceramics. It includes tight control of the powder preparation, with the choice of an adequate synthesis, a pre-heat-treatment, the reproducibility of the sintering cycle, and the polishing and post-annealing of the ceramic. Densification is the main factor needed to reach a satisfying transparency, and many technologies are now available. The symmetry of the unit cell plays a crucial role in the diffusion rate of the material. Therefore, cubic symmetry compounds, having an isotropic refractive index, are preferred. The cubic Y3NbO7 matrix is an interesting host which can accept a high concentration of rare earth doping elements, and it has been demonstrated that SPS is an efficient way to sinter this material. The optimization of diffusion losses requires a fine ceramic microstructure, generally less than one hundred nanometers. In this case, grain growth is not an obstacle to transparency. The ceramic properties are then isotropic, thereby freeing the shaping step from the need to orient the ceramics, as is the case for compounds of lower symmetry. After optimization of the synthesis route, several SPS parameters such as heating rate, holding, dwell time and pressure were adjusted in order to increase the densification of the Eu3+ doped Y3NbO7 pellets. The luminescence data, coupled with X-ray diffraction analysis and electron diffraction microscopy, highlight the existence of several distorted environments of the doping element in the studied defective fluorite-type host lattice. Indeed, the fast and high crystallization rate obtained puts in evidence a lack of miscibility in the phase diagram, the final composition of the pellet being driven by the ratio between the niobium and yttrium elements. By following the luminescence properties, we demonstrate a direct impact of the SPS process on this material.

Keywords: emission, niobate of rare earth, spark plasma sintering, lack of miscibility

Procedia PDF Downloads 268
337 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

The manufacturing industry has been collecting vast amounts of data for monitoring product quality thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are more and more used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion in the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism uniquely combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the data coming from the sensors. Specifically, Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM) and dense-based autoencoders, as well as Generative Adversarial Network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs to expose the functionality to the end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify records about quality through the entire production chain and take advantage of the multitude of monitoring records in their databases.
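
The sketch below illustrates the general pattern described above, namely screening sensor readings with a reconstruction-error threshold before anything is written to the ledger; it uses a small dense autoencoder in Keras as a stand-in for the ARIMA/LSTM/GAN ensemble, and the submit_to_ledger stub and the 99th-percentile threshold are assumptions for illustration, not part of the authors' Hyperledger Fabric implementation.

# Minimal sketch: gate sensor readings with an autoencoder before ledger submission.
import numpy as np
import tensorflow as tf

def build_autoencoder(n_features: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(3, activation="relu"),   # bottleneck
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(n_features),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def submit_to_ledger(reading: np.ndarray) -> None:
    """Placeholder for the smart-contract call; in the described scheme only an
    anonymized pointer to the reading would be stored on-chain."""
    print("accepted:", reading)

# Train on normal historical readings (here: random stand-in data).
normal = np.random.default_rng(0).normal(size=(1000, 5)).astype("float32")
ae = build_autoencoder(n_features=5)
ae.fit(normal, normal, epochs=10, batch_size=32, verbose=0)

# Reconstruction-error threshold taken from the training data (assumed 99th percentile).
errors = np.mean((ae.predict(normal, verbose=0) - normal) ** 2, axis=1)
threshold = np.percentile(errors, 99)

new_reading = normal[:1]  # incoming measurement vector
err = np.mean((ae.predict(new_reading, verbose=0) - new_reading) ** 2)
if err <= threshold:
    submit_to_ledger(new_reading)
else:
    print("flagged as anomalous, kept off-chain for review")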

Keywords: blockchain, data quality, industry 4.0, product quality

Procedia PDF Downloads 189
336 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been realized in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
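
For reference, the boundary-layer integral quantities evaluated along the model follow the standard definitions below, where u(y) is the measured velocity profile and U the velocity outside the boundary layer; these are the usual textbook relations rather than anything specific to this experiment:

\[
\delta^{*} = \int_{0}^{\delta}\left(1 - \frac{u(y)}{U}\right)\mathrm{d}y,
\qquad
\theta = \int_{0}^{\delta}\frac{u(y)}{U}\left(1 - \frac{u(y)}{U}\right)\mathrm{d}y,
\qquad
H = \frac{\delta^{*}}{\theta},
\]

with \(\delta\) the boundary layer thickness (e.g., the height at which u = 0.99U) and H the form factor.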

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 305
335 Alkali Activation of Fly Ash, Metakaolin and Slag Blends: Fresh and Hardened Properties

Authors: Weiliang Gong, Lissa Gomes, Lucile Raymond, Hui Xu, Werner Lutze, Ian L. Pegg

Abstract:

Alkali-activated materials, particularly geopolymers, have attracted much interest in academia. Commercial applications are on the rise as well. Geopolymers are typically produced by a reaction of one or two aluminosilicates with an alkaline solution at room temperature. Fly ash is an important aluminosilicate source. However, low-Ca fly ash, the byproduct of burning hard or black coal, reacts and sets slowly at room temperature. The development of mechanical durability, e.g., compressive strength, is slow as well. The use of fly ashes with relatively high contents ( > 6%) of unburned carbon, i.e., high loss on ignition (LOI), is particularly disadvantageous. This paper will show to what extent these impediments can be mitigated by mixing the fly ash with one or two more aluminosilicate sources. The fly ash used here is generated at the Orlando power plant (Florida, USA). It is low in Ca ( < 1.5% CaO) and has a high LOI of > 6%. The additional aluminosilicate sources are metakaolin and blast furnace slag. Binary fly ash-metakaolin and ternary fly ash-metakaolin-slag geopolymers were prepared. Properties of the geopolymer pastes before and after setting have been measured. Fresh mixtures of aluminosilicates with an alkaline solution were studied by Vicat needle penetration, rheology, and isothermal calorimetry up to initial setting and beyond. The hardened geopolymers were investigated by SEM/EDS, and the compressive strength was measured. Initial setting (fluid to solid transition) was indicated by a rapid increase in yield stress and plastic viscosity. The rheological times of setting were always smaller than the Vicat times of setting. Both times of setting decreased with increasing replacement of fly ash with blast furnace slag in the ternary fly ash-metakaolin-slag geopolymer system. As expected, setting with only Orlando fly ash was the slowest. Replacing 20% of the fly ash with metakaolin shortened the set time. Replacing increasing fractions of fly ash in the binary system by blast furnace slag (up to 30%) shortened the time of setting even further. The 28-day compressive strength increased drastically from < 20 MPa to 90 MPa. The most interesting finding relates to the calorimetric measurements. The use of two or three aluminosilicates generated significantly more heat (20 to 65%) than that calculated from the weighted sum of the individual aluminosilicates. This synergetic heat contributes to, or may be responsible for, most of the increase in compressive strength of our binary and ternary geopolymers. The synergetic heat effect may also be related to increased incorporation of calcium in the sodium aluminosilicate hydrate to form a hybrid (N,C)-A-S-H gel. The time of setting will be correlated with heat release and maximum heat flow.
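
The "synergetic heat" referred to above is simply the excess of the measured heat of the blend over the mixture-weighted heat of its components; stated as a formula (a restatement of the text, with symbols chosen here for illustration):

\[
\Delta Q_{\mathrm{syn}} = Q_{\mathrm{blend}} - \sum_{i} w_i\, Q_i,
\qquad
\text{relative excess} = \frac{\Delta Q_{\mathrm{syn}}}{\sum_{i} w_i\, Q_i}\times 100\%,
\]

where \(Q_i\) is the heat released per gram by aluminosilicate \(i\) activated alone, \(w_i\) its mass fraction in the blend, and \(Q_{\mathrm{blend}}\) the measured heat of the blended paste; the reported 20 to 65% values correspond to this relative excess.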

Keywords: alkali-activated materials, binary and ternary geopolymers, blends of fly ash, metakaolin and blast furnace slag, rheology, synergetic heats

Procedia PDF Downloads 116
334 Dosimetric Comparison among Different Head and Neck Radiotherapy Techniques Using PRESAGE™ Dosimeter

Authors: Jalil ur Rehman, Ramesh C. Tailor, Muhammad Isa Khan, Jahnzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott

Abstract:

Purpose: The purpose of this analysis was to investigate the dose distributions of different techniques (3D-CRT, IMRT and VMAT) for head and neck cancer using a 3-dimensional dosimeter called the PRESAGE™ dosimeter. Materials and Methods: Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck anthropomorphic phantom, with both the RPC standard insert and the PRESAGE™ insert, were acquired separately with a Philips CT scanner, and both CT scans were exported via DICOM to the Pinnacle version 9.4 treatment planning system (TPS). Each plan was delivered twice to the RPC phantom, first containing the RPC standard insert having TLD and film dosimeters, and then again containing the PRESAGE™ insert having the 3-D dosimeter (PRESAGE™), by using a Varian TrueBeam linear accelerator. After irradiation, the standard insert including point dose measurements (TLD) and planar Gafchromic® EBT film measurements was read using the RPC standard procedure. The 3D dose distribution from PRESAGE™ was read out with the Duke Midsized Optical Scanner dedicated to the RPC (DMOS-RPC). Dose volume histograms (DVH) and mean and maximal doses for organs at risk were calculated and compared among the head and neck techniques. The prescription dose was the same for all head and neck radiotherapy techniques, namely 6.60 Gy/fraction. Beam profile comparison and gamma analysis were used to quantify agreement among the film measurement, the PRESAGE™ measurement and the calculated dose distribution. Quality assurance of all plans was performed by using the ArcCHECK method. Results: VMAT delivered the lowest mean and maximum doses to organs at risk (spinal cord, parotid) compared with IMRT and 3D-CRT. This dose distribution was verified by absolute dose measurements using the thermoluminescent dosimeter (TLD) system. The central axial, sagittal and coronal planes were evaluated using 2D gamma map criteria (±5%/3 mm), and the results were 99.82% (axial), 99.78% (sagittal) and 98.38% (coronal) for the VMAT plan; the agreement between PRESAGE™ and Pinnacle was found to be better than for the IMRT and 3D-CRT plans, excluding a 7 mm rim at the edge of the dosimeter. Profiles showed good agreement for all plans between film, PRESAGE™ and Pinnacle, and 3D gamma analysis was performed for the PTV and OARs, with VMAT and 3D-CRT showing better agreement than IMRT. Conclusion: VMAT delivered lower mean and maximal doses to organs at risk and better PTV coverage during head and neck radiotherapy. TLD, EBT film and PRESAGE™ dosimeters suggest that VMAT was better for the treatment of head and neck cancer than IMRT and 3D-CRT.
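
For readers unfamiliar with the ±5%/3 mm criterion quoted above, the gamma analysis compares each measured point against the calculated distribution using the standard gamma index (the usual definition, written here with a 5% dose-difference and 3 mm distance-to-agreement tolerance):

\[
\gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c}
\sqrt{\frac{\lvert \mathbf{r}_c - \mathbf{r}_m \rvert^{2}}{\Delta d^{2}}
+ \frac{\left[D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\right]^{2}}{\Delta D^{2}}},
\qquad \Delta d = 3~\mathrm{mm},\ \Delta D = 5\%,
\]

where \(D_m\) and \(D_c\) are the measured and calculated doses; a point passes when \(\gamma \le 1\), and the quoted percentages are the fractions of points passing.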

Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD, PRESAGE™

Procedia PDF Downloads 395
341 Mineralogical Study of the Triassic Clay of Maaziz and the Miocene Marl of Akrach in Morocco: Analysis and Evaluation of the Two Geomaterials for the Construction of Ceramic Bricks

Authors: Sahar El Kasmi, Ayoub Aziz, Saadia Lharti, Mohammed El Janati, Boubker Boukili, Nacer El Motawakil, Mayom Chol Luka Awan

Abstract:

Two types of geomaterials (red Triassic clay from the Maaziz region and yellow Pliocene clay from the Akrach region) were used to create different mixtures for the fabrication of ceramic bricks. This study investigated the influence of the Pliocene clay on the overall composition and mechanical properties of the Triassic clay. The red Triassic clay, sourced from Maaziz, underwent various mechanical processes and treatments to facilitate its transformation into ceramic bricks for construction. The Triassic clay was placed in a drying chamber and a heating chamber at 100°C to remove moisture. Subsequently, the dried clay samples were processed using a planetary ball mill to reduce particle size and improve homogeneity. The resulting clay material was sieved, and the fine particles below 100 mm were collected for further analysis. In parallel, the Miocene marl obtained from the Akrach region was fragmented into finer particles and subjected to similar drying, grinding, and sieving procedures as the Triassic clay. The two clay samples were then amalgamated and homogenized in different proportions. Precise measurements were taken using a weighing balance, and mixtures of 90%, 80%, and 70% Triassic clay with 10%, 20%, and 30% yellow clay were prepared, respectively. To evaluate the impact of the Pliocene marl on the composition, the prepared clay mixtures were spread evenly and treated with a water modifier to enhance plasticity. The clay was then molded using a brick-making machine, and the initial manipulation process was observed. Additional batches were prepared with incremental amounts of Pliocene marl to further investigate its effect on the fracture behavior of the clay, specifically its resistance. The molded clay bricks were subjected to compression tests to measure their strength and resistance to deformation. Additional tests, such as water absorption tests, were also conducted to assess the overall performance of the ceramic bricks fabricated from the different clay mixtures. The results were analyzed to determine the influence of the Pliocene marl on the strength and durability of the Triassic clay bricks. The results indicated that the incorporation of Pliocene clay reduced fracturing of the Triassic clay, with a noticeable reduction observed at 10% addition. No fractures were observed when 20% and 30% of yellow clay were added. These findings suggest that the yellow clay can enhance the mechanical properties and structural integrity of red-clay-based products.

Keywords: triassic clay, pliocene clay, mineralogical composition, geo-materials, ceramics, akrach region, maaziz region, morocco

Procedia PDF Downloads 88
332 Investigating the Aerosol Load of Eastern Mediterranean Basin with Sentinel-5p Satellite

Authors: Deniz Yurtoğlu

Abstract:

Aerosols directly affect the radiative balance of the earth by absorbing and/or scattering the sun rays reaching the atmosphere, and indirectly affect the balance by acting as nuclei in cloud formation. The composition and the physical and chemical properties of aerosols vary depending on their sources and the time spent in the atmosphere. The Eastern Mediterranean Basin has a high aerosol load that is formed from different sources, such as anthropogenic activities, desert dust outbreaks, and the spray of sea salt, and the area is subjected to atmospheric transport from other locations on the earth. This region, which includes the deserts of Africa, the Middle East, and the Mediterranean Sea, is one of the areas most affected by climate change due to its location and the chemistry of its atmosphere. This study aims to investigate the spatiotemporal variation of the aerosol load in the Eastern Mediterranean Basin between the years 2018-2022 with the help of a new pioneering satellite of ESA (European Space Agency), Sentinel-5P. TROPOMI (The TROPOspheric Monitoring Instrument), travelling on this low-Earth-orbiting satellite, is a UV (ultraviolet)-sensing spectrometer with a resolution of 5.5 km x 3.5 km, which can make measurements even in a cloud-covered atmosphere. By using the Absorbing Aerosol Index data produced by this spectrometer and special scripts written in the Python language that transform these data into images, it was seen that the majority of the aerosol load in the Eastern Mediterranean Basin is sourced from desert dust and anthropogenic activities. After retrieving the daily data and separating out the NaN values, the seasonal analyses match the expected aerosol variations, which are high in warm seasons and lower in cold seasons. Monthly analyses showed that, over four years, there was an increase in the Absorbing Aerosol Index in spring and winter by 92.27% (2019-2021) and 39.81% (2019-2022), respectively. On the other hand, in the summer and autumn seasons, decreases of 20.99% (2018-2021) and 0.94% (2018-2021), respectively, have been observed. The overall variation of the mean absorbing aerosol index from TROPOMI between April 2018 and April 2022 reflects a decrease of 115.87% in the annual mean, from 0.228 to -0.036. However, when the data are analyzed by the annual mean values of the years that have data from January to December, meaning from 2019 to 2021, there was an increase of 57.82% (0.108 to 0.171). This result can be interpreted as the effect of climate change on the aerosol load and also, more specifically, the effect of the forest fires that happened in the summer months of 2021.
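
A minimal sketch of the kind of Python post-processing described above is given below, computing seasonal and annual means of the absorbing aerosol index (AAI) from a daily time series; the CSV file name and column names are assumptions for illustration, not the authors' actual processing chain for the TROPOMI product.

# Minimal sketch: seasonal/annual statistics of a daily absorbing aerosol index series.
import pandas as pd

# Hypothetical daily regional-mean AAI values with columns "date" and "aai".
aai = pd.read_csv("emb_daily_aai.csv", parse_dates=["date"]).dropna(subset=["aai"])
aai = aai.set_index("date").sort_index()

# Map months to meteorological seasons.
season = {12: "DJF", 1: "DJF", 2: "DJF", 3: "MAM", 4: "MAM", 5: "MAM",
          6: "JJA", 7: "JJA", 8: "JJA", 9: "SON", 10: "SON", 11: "SON"}
aai["season"] = aai.index.month.map(season)

seasonal_means = aai.groupby([aai.index.year, "season"])["aai"].mean()
annual_means = aai.groupby(aai.index.year)["aai"].mean()

# Relative change between two annual means, as used for the percentages in the text.
change = (annual_means.loc[2021] - annual_means.loc[2019]) / abs(annual_means.loc[2019]) * 100
print(seasonal_means, annual_means, f"2019->2021 change: {change:.2f}%", sep="\n")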

Keywords: aerosols, eastern mediterranean basin, sentinel-5p, tropomi, aerosol index, remote sensing

Procedia PDF Downloads 67
331 Microstructural Interactions of Ag and Sc Alloying Additions during Casting and Artificial Ageing to a T6 Temper in a A356 Aluminium Alloy

Authors: Dimitrios Bakavos, Dimitrios Tsivoulas, Chaowalit Limmaneevichitr

Abstract:

Aluminium cast alloys of the Al-Si system are widely used for shape castings. Their microstructures can be further improved, on one hand, by alloying modification and, on the other, by optimised artificial ageing. In this project four hypoeutectic Al-alloys, A356, A356+Ag, A356+Sc, and A356+Ag+Sc, have been studied. The interactions of Ag and Sc during solidification and artificial ageing at 170°C to a T6 temper have been investigated in detail. The evolution of the eutectic microstructure is studied by thermal analysis and interrupted solidification. The ageing kinetics of the alloys has been identified by hardness measurements. The precipitate phases, number density, and chemical composition have been analysed by means of transmission electron microscopy (TEM) and EDS analysis. Furthermore, the SHT effect on the Si eutectic particles of the four alloys has been investigated by means of optical microscopy and image analysis, and the UTS has been compared with the UTS of the alloys after casting. The results suggest that the Ag additions significantly enhance the ageing kinetics of the A356 alloy. The formation of β” precipitates was kinetically accelerated, and increases of 8% and 5% in peak hardness have been observed compared to the base A356 and A356-Sc alloys. The EDS analysis demonstrates that Ag is present in the β” precipitate composition. After prolonged ageing for 100 hours at 170°C, the A356-Ag alloy exhibits 17% higher hardness compared to the other three alloys. During solidification, Sc additions change the macroscopic eutectic growth mode to the propagation of a defined eutectic front from the mold walls, opposite to the heat flux direction. In contrast, Ag has no significant effect on the solidification mode, revealing a macroscopic eutectic growth similar to the A356 base alloy. However, the mechanical strength of the as-cast A356-Ag, A356-Sc, and A356+Ag+Sc alloys has increased by 5, 30, and 35 MPa, respectively. The outcome is attributed to the refining of the eutectic Si that takes place, which is strong in the A356-Sc alloy and more pronounced when silver and scandium are combined. Moreover, after SHT the Al alloy with the highest mechanical strength is the one with Ag additions, in contrast to the as-cast condition where the Sc and Sc+Ag alloys were the strongest. The increase in strength is mainly attributed to the dissolution of grain boundary precipitates, the increase of the solute content in the matrix, and the spherodisation and coarsening of the eutectic Si. Therefore, we can safely conclude that, for an A356 hypoeutectic alloy, Ag additions exhibit a refining effect on the Si eutectic, which is improved when combined with Sc. In addition, Ag enhances the ageing kinetics, increases the hardness and retains its strength at prolonged artificial ageing in an Al-7Si-0.3Mg hypoeutectic alloy. Finally, the addition of Sc is beneficial due to the refinement of the α-Al grains and the modification-refinement of the eutectic Si, increasing the strength of the as-cast product.

Keywords: ageing, casting, mechanical strength, precipitates

Procedia PDF Downloads 498
330 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed for colored images to determine how much better the quantum approach is than the classical one; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data was translated into greyscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach for a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, we note that the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of their processing speed and accuracy when used to perform classification on colored classes.
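
The hybrid encode-rotate-measure step described above can be sketched roughly as follows with PennyLane, where a small variational circuit acts as a quantum filter on 2 × 2 greyscale patches; the circuit layout, the RY angle encoding and the weight shapes are illustrative assumptions, not the authors' exact QCNN architecture.

# Minimal sketch: a "quantum filter" over 2x2 image patches with PennyLane.
import numpy as np
import pennylane as qml

n_qubits = 4  # one qubit per pixel in a 2x2 patch
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_filter(patch, weights):
    # Angle-encode the (normalised) pixel values as RY rotations.
    for wire, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=wire)
    # Trainable entangling layer playing the role of the convolution kernel.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Pauli-Z expectation values measured on the classical side become features.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

rng = np.random.default_rng(0)
weights = rng.normal(size=(1, n_qubits, 3))   # shape expected by StronglyEntanglingLayers
patch = rng.uniform(size=n_qubits)            # stand-in for a 2x2 greyscale patch in [0, 1]

features = quantum_filter(patch, weights)
print(features)  # these quantum features would feed a classical classifier head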

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 129
329 Determination of the Relative Humidity Profiles in an Internal Micro-Climate Conditioned Using Evaporative Cooling

Authors: M. Bonello, D. Micallef, S. P. Borg

Abstract:

Driven by increased comfort standards, but at the same time high energy consciousness, energy-efficient space cooling has become an essential aspect of building design. Its aim is simple: to provide satisfactory thermal comfort for individuals in an interior space using low-energy-consumption cooling systems. In this context, evaporative cooling is both an energy-efficient and an eco-friendly cooling process. In the past two decades, several academic studies have been performed to determine the resulting thermal comfort produced by an evaporative cooling system, including studies on temperature profiles, air speed profiles, and the effect of clothing and personnel activity. To the best knowledge of the authors, no studies have yet considered the analysis of relative humidity (RH) profiles in a space cooled using evaporative cooling. Such a study will determine the effect of different humidity levels on a person's thermal comfort and aid in the consequent improvement of designs of such future systems. Under this premise, the research objective is to characterise the resulting RH profiles in a chamber micro-climate using the evaporative cooling system, in which the inlet air speed, temperature and humidity content are varied. The chamber shall be modelled using Computational Fluid Dynamics (CFD) in ANSYS Fluent. Relative humidity shall be modelled using a species transport model, while the k-ε RNG formulation is the proposed turbulence model to be used. The model shall be validated with measurements taken in an identical test chamber in which tests are to be conducted under the different inlet conditions mentioned above, followed by the verification of the model's mesh and time step. The verified and validated model will then be used to simulate other inlet conditions which would be impractical to conduct in the actual chamber. More details of the modelling and experimental approach will be provided in the full paper. The main conclusions from this work are two-fold: the micro-climatic relative humidity spatial distribution within the room is important to consider in the context of investigating comfort at occupant level; and a human being's thermal comfort (based on Predicted Mean Vote – Predicted Percentage Dissatisfied [PMV-PPD] values) varies with the location of different relative humidity values. The study provides the necessary groundwork for investigating the micro-climatic RH conditions of environments cooled using evaporative cooling. Future work may also target the analysis of ways in which evaporative cooling systems may be improved to better the thermal comfort of human beings, specifically relating to the humidity content around a sedentary person.

Keywords: chamber micro-climate, evaporative cooling, relative humidity, thermal comfort

Procedia PDF Downloads 155
328 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

The objective of this study was to develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 non-pregnant participants aged 20 and older from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), and Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed the other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: Age: the most significant predictor, with diabetes prevalence increasing with age, peaking around the 60s for males and 70s for females. BMI: higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL). Smoking: smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
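
A minimal sketch of the modelling step described above is shown below using scikit-learn's gradient boosting classifier and an AUC-ROC evaluation; the feature list mirrors the abstract, but the file name, column names and hyperparameters are assumptions for illustration, and the sketch ignores the NHANES survey weights.

# Minimal sketch: gradient-boosting classifier for diabetes status (illustrative only).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical prepared NHANES extract with one row per participant.
df = pd.read_csv("nhanes_2009_2018.csv")
features = ["age", "sex", "ethnicity", "bmi", "leg_length", "tchol", "smoker"]
X = pd.get_dummies(df[features], columns=["sex", "ethnicity", "smoker"])
y = df["diabetes"]  # 1 if fasting plasma glucose meets the diabetes threshold, else 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

gbm = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3,
                                 random_state=42)
gbm.fit(X_train, y_train)

auc = roc_auc_score(y_test, gbm.predict_proba(X_test)[:, 1])
print(f"AUC-ROC = {auc:.2f}")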

Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine

Procedia PDF Downloads 8
327 Strategies for Improving and Sustaining Quality in Higher Education

Authors: Anshu Radha Aggarwal

Abstract:

Higher Education (HE) in India has experienced a series of remarkable changes over the last fifteen years, as successive governments have sought to make the sector more efficient and more accountable for the investment of public funds. Rapid expansion in student numbers and pressures to widen participation amongst non-traditional students are key challenges facing HE. Learning outcomes can act as a benchmark for assuring quality and efficiency in HE, and they also enable universities to describe courses in an unambiguous way so as to demystify (and open up) education to a wider audience. This paper examines how learning outcomes are used in HE and evaluates the implications for curriculum design and student learning. There has been huge expansion in the field of higher education, both technical and non-technical, in India during the last two decades, and this trend is continuing. It is expected that approximately another 400 colleges and 300 universities will be created by the end of the 13th Plan period. This has led to many concerns about the quality of education and training of our students. Many studies have brought out the issues ailing our curricula, delivery, monitoring and assessment. The Government of India (via MHRD, UGC, NBA, etc.) has initiated several steps to bring improvement in the quality of higher education and training, such as the National Skills Qualification Framework and making accreditation of institutions mandatory in order to receive Government grants. Moreover, Outcome-Based Education and Training (OBET) has also been mandated and encouraged in teaching/learning institutions. MHRD, UGC and NBA have made accreditation of schools, colleges and universities mandatory w.e.f. January 2014. The Outcome-Based Education and Training (OBET) approach is learner-centric, whereas the traditional approach has been teacher-centric. OBET is a process which involves the re-orientation/restructuring of the curriculum, implementation, assessment/measurement of educational goals, and the achievement of higher-order learning, rather than merely clearing/passing the university examinations. OBET aims to bring about these desired changes within the students by increasing knowledge, developing skills, influencing attitudes and creating a social-connect mindset. This approach has been adopted by several leading universities and institutions around the world in advanced countries. The objectives of this paper are to highlight the issues concerning quality in higher education and quality frameworks, to deliberate on the various education and training models, to explain the outcome-based education and assessment processes, to provide an understanding of the NAAC and outcome-based accreditation criteria and processes, and to share best-practice outcome-based accreditation systems and processes.

Keywords: learning outcomes, curriculum development, pedagogy, outcome based education

Procedia PDF Downloads 524
326 Experiment-Based Teaching Method for the Varying Frictional Coefficient

Authors: Mihaly Homostrei, Tamas Simon, Dorottya Schnider

Abstract:

The topic of oscillation in physics is one of the key ideas which is usually taught based on the concept of harmonic oscillation. It can be an interesting activity to deal with a frictional oscillator in advanced high school classes or in university courses. Its mechanics are investigated in this research, which shows that the motion of the frictional oscillator is more complicated than that of a simple harmonic oscillator. The physics of the applied model in this study seems to be interesting and useful for undergraduate students. The study presents a well-known physical system, which is mostly discussed theoretically in high school and at university. The ideal frictional oscillator is normally used as an example of harmonic oscillatory motion, as its theory relies on a constant coefficient of sliding friction. The structure of the system is simple: a rod with a homogeneous mass distribution is placed on two identical rotating cylinders positioned at the same height so that they are horizontally aligned, rotating at the same angular velocity but in opposite directions. Based on this setup, one can easily show that the equation of motion describes a harmonic oscillation, since the magnitudes of the normal forces in the system are functions of the position of the rod and the frictional forces, with a constant coefficient of friction, are proportional to them. Therefore, the whole description of the model relies on simple Newtonian mechanics, which is accessible to students even in high school. On the other hand, the phenomenon of the described frictional oscillator does not seem to be so straightforward after all; experiments show that the simple harmonic oscillation cannot be observed in all cases, and the system performs a much more complex movement, whereby the rod settles into a non-harmonic oscillation with a nonzero stable amplitude after an unconventional damping effect. The stable amplitude, in this case, means that the position function of the rod converges to a harmonic oscillation with a constant amplitude. This leads to the idea of a more complex model which can describe the motion of the rod more accurately. The main difference from the original equation of motion is the concept that the frictional coefficient varies with the relative velocity. This dependence on the velocity has been investigated in many research articles as well; however, this specific problem can demonstrate the key concept of the varying friction coefficient and its importance in an interesting and demonstrative way. The position function of the rod is described by a more complicated and non-trivial, yet more precise, equation than the usual harmonic description of the movement. The study discusses the structure of the measurements related to the frictional oscillator, the qualitative and quantitative derivation of the theory, and the comparison of the final theoretical position function with the measured one over time. The project provides useful materials and knowledge for undergraduate students and a new perspective in university physics education.
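
For the constant-coefficient case described above, the textbook derivation can be sketched as follows; the notation (half-separation d of the cylinders, cylinder surface speed v₀, rod mass m) is introduced here for illustration and is not taken from the abstract.

```latex
% Sketch (our notation): rod of mass m on cylinders at x = -d and x = +d,
% x = displacement of the rod's centre of mass from the midpoint.
% Force and torque balance give the normal forces
N_{1,2} \;=\; \frac{mg}{2}\left(1 \mp \frac{x}{d}\right).
% With a constant coefficient \mu and the cylinder surfaces moving towards the centre,
% the kinetic friction forces \mu N_1 and -\mu N_2 give a restoring net force:
m\ddot{x} \;=\; \mu\,(N_1 - N_2) \;=\; -\,\frac{\mu m g}{d}\,x
\qquad\Rightarrow\qquad
\omega \;=\; \sqrt{\frac{\mu g}{d}} ,
% i.e. harmonic oscillation. If \mu depends on the sliding speed, the relative surface
% speeds differ on the two cylinders (v_0 is the cylinder surface speed) and the
% equation of motion becomes non-harmonic:
m\ddot{x} \;=\; \mu\!\left(\lvert v_0 - \dot{x}\rvert\right) N_1
          \;-\; \mu\!\left(\lvert v_0 + \dot{x}\rvert\right) N_2 .
```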

Keywords: friction, frictional coefficient, non-harmonic oscillator, physics education

Procedia PDF Downloads 192
325 Electrodeposition of Silicon Nanoparticles Using Ionic Liquid for Energy Storage Application

Authors: Anjali Vanpariya, Priyanka Marathey, Sakshum Khanna, Roma Patel, Indrajit Mukhopadhyay

Abstract:

Silicon (Si) is a promising negative electrode material for lithium-ion batteries (LiBs) due to its low cost, non-toxicity, and high theoretical capacity of 4200 mAhg⁻¹. The primary challenge in the application of Si-based LiBs is the large volume expansion (~300%) during the charge-discharge process. Incorporation of graphene or carbon nanotubes (CNTs), morphological control, and the use of nanoparticles have been employed as strategies to tackle the volume expansion issue. Molten salt methods can also resolve the issue, but their high-temperature requirement limits their application. For a sustainable and practical approach, room-temperature (RT) methods are essential. The use of ionic liquids (ILs) as a greener medium for the electrodeposition of Si nanostructures can resolve the temperature issue. In this work, electrodeposition of Si nanoparticles on a gold substrate was successfully carried out at room temperature in an IL medium, 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (BMImTf₂N). Cyclic voltammetry (CV) suggests the sequential reduction of Si⁴⁺ to Si²⁺ and then to Si nanoparticles (SiNs). The structure and morphology of the electrodeposited SiNs were investigated by FE-SEM, which showed interconnected Si nanoparticles with an average particle size of ~100-200 nm. XRD and XPS data confirm the deposition of Si on Au (111). The first discharge and charge capacities of the Si anode material were found to be 1857 and 422 mAhg⁻¹, respectively, at a current density of 7.8 Ag⁻¹. The irreversible capacity of the first discharge-charge process can be attributed to solid electrolyte interface (SEI) formation via electrolyte decomposition and to trapped Li⁺ inserted into the inner pores of Si. Pulverization of the SiNs results in the creation of new active sites, which facilitates the formation of new SEI in subsequent cycles, leading to fading of the specific capacity. After 20 cycles, the charge-discharge profiles stabilized, and a reversible capacity of 150 mAhg⁻¹ was retained. Electrochemical impedance spectroscopy (EIS) data show a decrease in the Rct value from 94.7 to 47.6 kΩ after 50 cycles of charge-discharge, which demonstrates an improvement in the interfacial charge-transfer kinetics. The decrease in the Warburg impedance after 50 charge-discharge cycles indicates facile diffusion in the fragmented, smaller Si nanoparticles. In summary, Si nanoparticles were deposited on a gold substrate using an IL medium and characterized with different analytical techniques. The synthesized material was successfully utilized for a LiB application, which is well supported by the CV and EIS data.
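
As a quick, hedged sanity check on the quoted theoretical capacity (not part of the study), the ~4200 mAhg⁻¹ figure follows from Faraday's law if full lithiation of silicon to Li₂₂Si₅ (4.4 electrons per Si atom) is assumed:

```python
# Back-of-the-envelope check (not from the paper): theoretical capacity of Si
# assuming full lithiation to Li22Si5, i.e. 4.4 electrons per Si atom.
F = 96485.0        # Faraday constant, C per mol of electrons
M_SI = 28.0855     # molar mass of Si, g/mol
z = 22 / 5         # electrons transferred per Si atom for Li22Si5

capacity_mAh_per_g = z * F / (3.6 * M_SI)   # 1 mAh = 3.6 C
print(f"theoretical capacity ~ {capacity_mAh_per_g:.0f} mAh/g")   # ~4200 mAh/g
```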

Keywords: silicon nanoparticles, ionic liquid, electrodeposition, cyclic voltammetry, Li-ion battery

Procedia PDF Downloads 125
324 Comparison of Cardiometabolic Risk Factors in Lean Versus Overweight/Obese Peri-Urban Female Adolescent School Learners in Mthatha, South Africa: A Pilot Case Control Study

Authors: Benedicta N. Nkeh-Chungag, Constance R. Sewani-Rusike, Isaac M. Malema, Daniel T. Goon, Oladele V. Adeniyi, Idowu A. Ajayi

Abstract:

Background: Childhood and adolescent obesity is an important predictor of adult cardiometabolic diseases. Current data on age- and gender-specific cardiometabolic risk factors are lacking in the peri-urban Eastern Cape Province, South Africa. However, such information is important in designing innovative strategies to promote healthy living among children and adolescents. The purpose of this pilot study was to compare and determine the extent of cardiometabolic risk factors between samples of lean and overweight/obese adolescents in a peri-urban township of South Africa. Methods: In this case-control study, age-matched, non-pregnant and non-lactating female adolescents consisting of an equal number of cases (50 overweight/obese) and controls (50 lean) participated in the study. Fasting venous blood samples were obtained for total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), triglycerides (Trig), highly sensitive C-reactive protein (hsCRP) and blood sugar. Anthropometric measurements included weight, height, and waist and hip circumferences. Body mass index was calculated. Blood pressure was measured, and metabolic syndrome was assessed using appropriate diagnostic criteria for children and adolescents. Results: Of the 76 participants with complete data, 12/38 of the overweight/obese group and 1/38 of the lean group met the criteria for adolescent metabolic syndrome. All cardiometabolic risk factors were elevated in the overweight/obese group compared with the lean group: low HDL-C (RR = 2.21), elevated TC (RR = 1.23), elevated LDL-C (RR = 1.42), elevated Trig (RR = 1.73), and elevated hsCRP (RR = 1.9). Atherosclerotic indices were significantly higher in the overweight/obese group than in the lean group: TC/HDL (2.99±0.91 vs 2.63±0.48; p=0.016) and LDL/HDL (1.73±0.61 vs 1.41±0.46; p=0.014). Conclusion: Multiple cardiometabolic risk factors are present in the overweight/obese female adolescent group compared with the lean adolescent group in this study. Female adolescents who are overweight or obese have higher relative risks of developing cardiometabolic diseases compared with their lean counterparts in peri-urban Mthatha, South Africa. School health programmes focusing on promoting physical exercise, healthy eating and maintaining an appropriate weight are needed in the country.
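
For readers unfamiliar with the relative risks (RR) reported above, the short sketch below shows how an RR and its 95% confidence interval can be computed from a 2x2 exposure/outcome table; the counts used are hypothetical and are not the study data.

```python
# Illustration only: relative risk and 95% CI (Katz log method) from a 2x2 table.
# The counts below are hypothetical, not the study data.
import numpy as np

def relative_risk(a, b, c, d):
    """a, b: outcome present/absent among overweight/obese cases;
       c, d: outcome present/absent among lean controls."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    se_log_rr = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))   # SE of ln(RR)
    lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
    return rr, (lo, hi)

print(relative_risk(a=17, b=21, c=7, d=31))   # hypothetical counts
```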

Keywords: adolescents, cardiometabolic risk factors, obesity, peri-urban South Africa

Procedia PDF Downloads 474
323 Compression-Extrusion Test to Assess Texture of Thickened Liquids for Dysphagia

Authors: Jesus Salmeron, Carmen De Vega, Maria Soledad Vicente, Mireia Olabarria, Olaia Martinez

Abstract:

Dysphagia, or difficulty in swallowing, mostly affects elderly people: 56-78% of the institutionalized and 44% of the hospitalized. Thickening liquid food is a necessary measure in this situation because it reduces the risk of penetration-aspiration. Until now, and as proposed by the American Dietetic Association in 2002, possible consistencies have been categorized into three groups according to their viscosity: nectar (50-350 mPa•s), honey (350-1750 mPa•s) and pudding (>1750 mPa•s). The adequate viscosity level should be identified for every patient according to her/his impairment. Nevertheless, a recent systematic review on dysphagia diets indicated that there is no evidence to suggest that there is any transition of clinical relevance between the three levels proposed. It was also stated that other physical properties of the bolus (slipperiness, density or cohesiveness, among others) could influence swallowing in affected patients and could contribute to the amount of remaining residue. Texture parameters need to be evaluated as possible alternatives to viscosity. The aim of this study was to evaluate the instrumental extrusion-compression test as a possible tool to characterize changes over time in water thickened with various products at the three theoretical consistencies. Six commercial thickeners were used: NM® (NM), Multi-thick® (M), Nutilis Powder® (Nut), Resource® (R), Thick&Easy® (TE) and Vegenat® (V), all with a modified starch base. Only one of them, Nut, also contained 6.4% gum (guar, tara and xanthan). They were prepared as indicated in the instructions of each product, dispensing the corresponding amount for nectar, honey and pudding consistencies into 300 mL of tap water at 18ºC-20ºC. The mixture was stirred for about 30 s. Once it was homogeneously dispersed, it was dispensed into 30 mL plastic glasses, always to the same height. Each of these glasses was used as a measuring point. Viscosity was measured using a rotational viscometer (ST-2001, Selecta, Barcelona). The extrusion-compression test was performed using a TA.XT2i texture analyzer (Stable Micro Systems, UK) with a 25 mm diameter cylindrical probe (SMSP/25). Penetration distance was set at 10 mm and speed at 3 mm/s. Measurements were made at 1, 5, 10, 20, 30, 40, 50 and 60 minutes from the moment the samples were mixed. From the force (g)–time (s) curves obtained in the instrumental assays, the maximum force peak (F) was chosen as the reference parameter. Viscosity (mPa•s) and F (g) were shown to be highly correlated and evolved similarly over time, following time-dependent quadratic models. It was possible to predict viscosity using F as an independent variable, as they were linearly correlated. In conclusion, the compression-extrusion test could be an alternative and useful tool to assess the physical characteristics of thickened liquids.
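
The two fits described above (a quadratic model of the force peak over time and a linear regression used to predict viscosity from F) can be illustrated with a brief sketch; the numbers below are hypothetical, not the measured data.

```python
# Hedged sketch of the two fits described in the abstract; the data are hypothetical.
import numpy as np
from scipy import stats

t = np.array([1, 5, 10, 20, 30, 40, 50, 60], dtype=float)       # minutes after mixing
F = np.array([110, 150, 185, 230, 260, 280, 292, 300])           # force peak (g), hypothetical
visc = np.array([400, 700, 1000, 1450, 1750, 1950, 2080, 2150])  # viscosity (mPa·s), hypothetical

quad = np.polyfit(t, F, deg=2)            # time-dependent quadratic model F(t)
print("F(t) ≈ %.3f t² + %.2f t + %.1f" % tuple(quad))

lin = stats.linregress(F, visc)           # viscosity predicted from the force peak
print(f"viscosity ≈ {lin.slope:.1f}·F + {lin.intercept:.0f}  (r = {lin.rvalue:.3f})")
```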

Keywords: compression-extrusion test, dysphagia, texture analyzer, thickener

Procedia PDF Downloads 368
322 3D-Printing of Waveguide Terminations: Effect of Material Shape and Structuring on Their Characteristics

Authors: Lana Damaj, Vincent Laur, Azar Maalouf, Alexis Chevalier

Abstract:

A matched termination is an important passive waveguide component. It is typically used at the end of a waveguide transmission line to prevent reflections and improve signal quality. Waveguide terminations (loads) are commonly used in microwave and RF applications. In traditional microwave architectures, a waveguide termination usually consists of a standard rectangular waveguide loaded with a lossy resistive material and ended by a shorting metallic plate. These types of terminations are used to dissipate the energy as heat. However, they may increase the size and weight of the overall system. A new alternative solution consists of developing terminations based on 3D-printing of materials. Designing such terminations is very challenging since they should meet the requirements imposed by the system. These requirements include many parameters, such as absorption and power handling capability, in addition to the cost, size and weight that have to be minimized. 3D-printing is a shaping process that enables the production of complex geometries. It allows the best compromise between requirements to be found. In this paper, a comparison study has been made between different existing and new shapes of waveguide terminations. Indeed, 3D-printing of absorbers makes it possible to study not only standard shapes (wedge, pyramid, tongue) but also more complex topologies such as exponential ones. These shapes have been designed and simulated using CST MWS®. The loads have been printed using a carbon-filled polylactic acid, the conductive PLA from ProtoPasta. Since the terminations have been characterized in the X-band (8 GHz to 12 GHz), the rectangular waveguide standard WR-90 has been selected. The classical wedge shape has been used as a reference. First, all loads have been simulated with the same length, and two parameters have been compared: the absorption level (level of |S11|) and the dissipated power density. This study shows that the concave exponential pyramidal shape has the best absorption level and the convex exponential pyramidal shape has the best dissipated power density. These two loads have been printed in order to measure their properties. A good agreement between the simulated and measured reflection coefficients has been obtained. Furthermore, material structuring based on a hexagonal honeycomb structure has been investigated in order to vary the effective properties. In the final paper, the detailed methodology and the simulated and measured results will be presented in order to show how 3D-printing allows mass, weight, absorption level and power behaviour to be controlled.
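
A note on the absorption level mentioned above: since the termination is a shorted one-port, any power that is not reflected is dissipated in the load, so the absorbed fraction follows directly from |S11|. The value used below is assumed, purely for illustration.

```python
# Illustrative check (assumed value, not from the paper): for a shorted one-port
# termination, the fraction of incident power dissipated in the load is 1 - |S11|^2.
s11_db = -20.0                         # assumed reflection level in dB
s11_lin = 10 ** (s11_db / 20)          # linear magnitude of S11
absorbed = 1 - s11_lin**2              # fraction of incident power absorbed
print(f"|S11| = {s11_db} dB -> {absorbed:.1%} of incident power dissipated")
```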

Keywords: additive manufacturing, electromagnetic composite materials, microwave measurements, passive components, power handling capacity (PHC), 3D-printing

Procedia PDF Downloads 21
321 A Comparison of qCON/qNOX to the Bispectral Index as Indices of Antinociception in Surgical Patients Undergoing General Anesthesia with Laryngeal Mask Airway

Authors: Roya Yumul, Ofelia Loani Elvir-Lazo, Sevan Komshian, Ruby Wang, Jun Tang

Abstract:

BACKGROUND: An objective means of monitoring the anti-nociceptive effects of perioperative medications has long been desired as a way to provide anesthesiologists with information regarding a patient’s level of antinociception and to preclude untoward autonomic responses and reflexive muscular movements from painful stimuli intraoperatively. To this end, electroencephalogram (EEG) based tools, including BIS and qCON, were designed to provide information about the depth of sedation, while qNOX was developed to indicate the degree of antinociception. The goal of this study was to compare the reliability of qCON/qNOX to BIS as specific indicators of response to nociceptive stimulation. METHODS: Sixty-two patients undergoing general anesthesia with LMA were included in this study. Institutional Review Board (IRB) approval was obtained, and informed consent was acquired prior to patient enrollment. Inclusion criteria included American Society of Anesthesiologists (ASA) class I-III, 18 to 80 years of age, and either gender. Exclusion criteria included the inability to consent. Withdrawal criteria included conversion to an endotracheal tube and EEG malfunction. BIS and qCON/qNOX electrodes were simultaneously placed on all patients prior to induction of anesthesia and were monitored throughout the case, along with other perioperative data, including patient response to noxious stimuli. All intraoperative decisions were made by the primary anesthesiologist without influence from qCON/qNOX. Student’s t-test, prediction probability (PK), and ANOVA were used to statistically compare the relative ability of each index to detect nociceptive stimuli. Twenty patients were included in the preliminary analysis. RESULTS: A comparison of overall intraoperative BIS, qCON and qNOX indices demonstrated no significant difference between the three measures (N=62, p>0.05). Meanwhile, index values for qNOX (62±18) were significantly higher than those for BIS (46±14) and qCON (54±19) immediately preceding patient responses to nociceptive stimulation in the preliminary analysis (N=20, p=0.0408). Notably, MAP increased significantly in response to painful stimuli, from 74±13 mm Hg at baseline to 84±18 mm Hg during noxious stimuli (p=0.032), while HR increased from 76±12 BPM at baseline to 80±13 BPM during noxious stimuli (p=0.078). CONCLUSION: In this observational study, BIS and qCON/qNOX provided comparable information on patients’ level of sedation throughout the course of an anesthetic. Meanwhile, increases in qNOX values demonstrated a superior correlation with an imminent response to stimulation relative to all other indices.
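
Prediction probability (PK), cited in the methods, expresses how well an index value predicts an observed event. A hedged sketch follows, using hypothetical index values: over all pairs with different outcomes it counts correctly ordered pairs, with ties in the index counted as half, which for a binary outcome coincides with the area under the ROC curve.

```python
# Hedged sketch of prediction probability (PK) for a binary event; the example
# values are hypothetical, not study data.
import numpy as np

def prediction_probability(index_values, responded):
    index_values = np.asarray(index_values, dtype=float)
    responded = np.asarray(responded, dtype=bool)
    x_resp, x_none = index_values[responded], index_values[~responded]
    concordant = ties = 0
    for a in x_resp:
        concordant += np.sum(a > x_none)   # responder carries the higher index value
        ties += np.sum(a == x_none)
    total = len(x_resp) * len(x_none)
    return (concordant + 0.5 * ties) / total

# hypothetical qNOX values just before events, flagged by whether the patient responded
print(prediction_probability([70, 65, 58, 80, 49, 52], [1, 1, 0, 1, 0, 0]))
```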

Keywords: antinociception, BIS, general anesthesia, LMA, qCON/qNOX

Procedia PDF Downloads 137
320 Storms Dynamics in the Black Sea in the Context of the Climate Changes

Authors: Eugen Rusu

Abstract:

The objective of the work proposed is to perform an analysis of the wave conditions in the Black Sea basin. This is especially focused on the spatial and temporal occurrences and on the dynamics of the most extreme storms in the context of climate change. A numerical modelling system based on the spectral phase-averaged wave model SWAN has been implemented and validated against both in situ measurements and remotely sensed data across the entire sea. Moreover, a successive correction method for the assimilation of satellite data, based on optimal interpolation, has been coupled with the wave modelling system. Previous studies show that the process of data assimilation considerably improves the reliability of the results provided by the modelling system. This especially concerns the cases that are most sensitive from the point of view of the accuracy of the wave predictions, such as extreme storm situations. Following this numerical approach, it has to be highlighted that the results provided by the wave modelling system described above are in general in line with those provided by similar wave prediction systems implemented in enclosed or semi-enclosed sea basins. Simulations with this wave modelling system, including data assimilation, have been performed for the 30-year period 1987-2016. Considering this database, the next step was to analyze the intensity and the dynamics of the strongest storms encountered in this period. According to the data resulting from the model simulations, the western side of the sea is considerably more energetic than the rest of the basin. In this western region, regular strong storms usually produce significant wave heights greater than 8 m, which may lead to maximum wave heights even greater than 15 m. Such strong storms may occur several times in one year, usually in wintertime or in late autumn, and it can be noticed that their frequency has become higher in the last decade. As regards the most extreme storms, significant wave heights greater than 10 m and maximum wave heights close to 20 m (and even greater) may occur. Such extreme storms, which in the past were noticed only once in four or five years, have more recently been faced almost every year in the Black Sea, and this seems to be a consequence of climate change. The analysis performed also included the dynamics of the monthly and annual significant wave height maxima as well as the identification of the most probable spatial and temporal occurrences of the extreme storm events. Finally, it can be concluded that the present work provides valuable information related to the characteristics of the storm conditions and their dynamics in the Black Sea. This environment is currently subjected to heavy navigation traffic and intense offshore and nearshore activities, and the strong storms that systematically occur may produce accidents with very serious consequences.
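
The successive-correction assimilation step mentioned above can be illustrated with a generic Cressman-type sketch; this is not the authors' exact optimal-interpolation implementation, and the function name, grids and influence radii are assumptions.

```python
# Hedged sketch of a generic Cressman-type successive-correction pass, in the spirit
# of the assimilation step described above (not the authors' exact scheme).
import numpy as np

def successive_correction(grid_xy, background, obs_xy, obs_values,
                          radii=(200e3, 100e3, 50e3)):
    """Update a background significant-wave-height field with altimeter observations.

    grid_xy    : (n_grid, 2) model grid coordinates (m)
    background : (n_grid,)   modelled Hs at the grid points
    obs_xy     : (n_obs, 2)  altimeter track coordinates (m)
    obs_values : (n_obs,)    altimeter Hs
    """
    analysis = background.astype(float).copy()
    dist = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)  # (n_grid, n_obs)
    nearest = dist.argmin(axis=0)                 # grid point closest to each observation
    for R in radii:                               # successive passes with shrinking radius
        innov = obs_values - analysis[nearest]    # observation-minus-model increments
        w = np.clip((R**2 - dist**2) / (R**2 + dist**2), 0.0, None)  # Cressman weights
        wsum = w.sum(axis=1)
        update = np.where(wsum > 0, (w * innov).sum(axis=1) / np.maximum(wsum, 1e-12), 0.0)
        analysis += update
    return analysis
```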

Keywords: Black Sea, extreme storms, SWAN simulations, waves

Procedia PDF Downloads 248
319 Randomized Controlled Trial for the Management of Pain and Anxiety Using Virtual Reality During the Care of Older Hospitalized Patients

Authors: Corbel Camille, Le Cerf Flora, Capriz Françoise, Vaillant-Ciszewicz Anne-Julie, Breaud Jean, Guerin Olivier, Corveleyn Xavier

Abstract:

Background: The medical environment can generate stressful and anxiety-provoking situations for patients, particularly during painful care procedures for the older population. These stressful environments have deleterious effects on the quality of care, can put the patient at risk, and can set the care team up for failure. The search for a solution is, therefore, imperative. The development of new technologies, such as virtual reality (VR), seems to be an answer to this problem. Objectives: The objective of this study is to compare the effects of virtual reality with those of usual care on pain and anxiety during the care of older hospitalized people. More precisely, different individual factors (age, cognitive level, individual preferences, etc.) and different virtual reality universes (personalized or non-personalized) are studied to understand the role of these factors in reducing pain and anxiety during care procedures. The aim of this study is to improve the quality of life of patients and of caregivers in their work environment. Method: This single-center, randomized, controlled study was conducted from September 2023 to September 2024 on 120 participants recruited from the geriatric departments of the Cimiez Hospital, Nice, France. Participants were randomized into three groups: a control group, a personalized VR group and a non-personalized VR group. Each participant was followed during a painful care session. Data were collected before, during and after the care, using measures of pain (Algoplus and a numerical scale) and anxiety (Hospital Anxiety Scale and a numerical scale). Physiological assessments with an oximeter were also performed to collect both heart rate and respiratory rate measurements. The implementation of the care will be assessed among healthcare providers to evaluate its effects on the difficulty and fatigue associated with the care. Additionally, a questionnaire (System Usability Scale) will be administered at the conclusion of the study to determine the willingness of healthcare providers to integrate VR into their daily care practices. Results: The preliminary results indicate significant effects on anxiety (p=.001) and pain (p<.001) following the VR intervention during care, compared to the control group. Conclusion: The preliminary results suggest that the VR intervention is a suitable and effective method for reducing anxiety and pain among older hospitalized individuals compared with standard care. Finally, the experiences of the healthcare professionals involved will also be considered to assess the impact of these interventions on working conditions and patient support.

Keywords: anxiety, care, pain, older adults, virtual reality

Procedia PDF Downloads 73
318 Capacity Building in Dietary Monitoring and Public Health Nutrition in the Eastern Mediterranean Region

Authors: Marisol Warthon-Medina, Jenny Plumb, Ayoub Aljawaldeh, Mark Roe, Ailsa Welch, Maria Glibetic, Paul M. Finglas

Abstract:

Similar to Western countries, the Eastern Mediterranean Region (EMR) also presents major public health issues associated with the increased consumption of sugar, fat, and salt. Therefore, one of the policies of the World Health Organization’s (WHO) EMR is to reduce the intake of salt, sugar, and fat (saturated fatty acids, trans fatty acids) to address the risk of non-communicable diseases (i.e., diabetes, cardiovascular disease, cancer) and obesity. The project objective is to assess status and provide training and capacity development in the use of improved standardized methodologies for updated food composition data, dietary intake methods, and the use of suitable biomarkers of nutritional value, and to determine health outcomes in low- and middle-income countries (LMIC). Training exchanges have been developed with clusters of countries created according to regional needs, including Sudan, Egypt and Jordan; Tunisia, Morocco and Mauritania; and other Middle Eastern countries. This capacity building will lead to the development and sustainability of up-to-date national and regional food composition databases in LMIC for use in dietary monitoring and the assessment of food and nutrient intakes. Workshops were organized to provide training and capacity development in the use of improved standardized methodologies for food composition and food intake. Training needs were identified and short-term scientific missions were organized for LMIC researchers, including (1) training and knowledge exchange workshops, (2) short-term exchange of researchers, (3) development and application of protocols and (4) development of strategies to reduce sugar and fat intake. An initial training workshop (Morocco, 2018) was attended by 25 participants from 10 EMR countries to review the status of, and support the development of, regional food composition data. Four training exchanges are in progress. The use of improved standardized methodologies for food composition and dietary intake will produce robust measurements that will reinforce dietary monitoring and policy in LMIC. The capacity building from this project will lead to the development and sustainability of up-to-date national and regional food composition databases in EMR countries. This work is supported by the UK Medical Research Council, Global Challenges Research Fund (MR/R019576/1), and the World Health Organization’s Eastern Mediterranean Region.

Keywords: dietary intake, food composition, low and middle-income countries, status

Procedia PDF Downloads 162
317 Comparison of the Effects of Alprazolam and Zaleplon on Anxiety Levels in Patients Undergoing Abdominal Gynecological Surgery

Authors: Shekoufeh Behdad, Amirhossein Yadegari, Leila Ghodrati, Saman Yadegari

Abstract:

Context: Preoperative anxiety is a common psychological reaction experienced by patients undergoing surgery. It can have negative effects on the patient's well-being and can even affect surgical outcomes. Therefore, finding effective interventions to reduce preoperative anxiety is important in improving patient care. Research Aim: The aim of this study is to compare the effects of oral administration of zaleplon (5 mg) and alprazolam (0.5 mg) on preoperative anxiety levels in women undergoing gynecological abdominal surgery. Methodology: This study is a double-blind, randomized clinical trial conducted after receiving approval from the university's ethics committee and obtaining written informed consent from the patients. The night before the surgery, patients were randomly assigned to receive either 0.5 mg of alprazolam or 5 mg of zaleplon orally. Anxiety levels, measured using a 10-cm visual analog scale, and hemodynamic variables (blood pressure and heart rate) were assessed before drug administration and on the morning of the operation after the patient entered the pre-operation room. Findings: The study found no significant differences between the two groups in mean anxiety levels or hemodynamic variables before or after drug administration (P > 0.05). This suggests that both 0.5 mg of alprazolam and 5 mg of zaleplon effectively reduce preoperative anxiety in women undergoing abdominal surgery without serious side effects. Theoretical Importance: This study contributes to the understanding of the effectiveness of alprazolam and zaleplon in reducing preoperative anxiety. It adds to the existing literature on pharmacological interventions for anxiety management, specifically in the context of gynecological abdominal surgery. Data Collection: Data for this study were collected through the assessment of anxiety levels using a visual analog scale and the measurement of hemodynamic variables, including systolic, diastolic, and mean arterial blood pressures, as well as heart rate. These measurements were taken before drug administration and on the morning of the surgery. Analysis Procedures: Statistical analysis was performed to compare the mean anxiety levels and hemodynamic variables before and after drug administration in the two groups. The significance of the differences was determined using appropriate statistical tests. Questions Addressed: This study aimed to answer the question of whether there are differences in the effects of alprazolam and zaleplon on preoperative anxiety levels in women undergoing gynecological abdominal surgery. Conclusion: Oral administration of either 0.5 mg of alprazolam or 5 mg of zaleplon the night before surgery effectively reduces preoperative anxiety in women undergoing abdominal surgery. These findings have important implications for the management of preoperative anxiety and can contribute to improving the overall surgical experience for patients.

Keywords: zaleplon, alprazolam, premedication, abdominal surgery

Procedia PDF Downloads 80
316 Employee Wellbeing: The Key to Organizational Success

Authors: Crystal Hoole

Abstract:

Employee well-being has become an area of concern for top executives and organizations worldwide. In developing countries such as South Africa, and especially in the educational sector, employees have to deal with anxiety, stress, fear, student protests, political and economic turmoil and excessive work demands on a daily basis. Research has shown that workplaces with higher resilience and better well-being strategies also report higher productivity, increased innovation, better employee retention and better employee engagement. Many organisations offer standard employee assistance programs and once-off short interventions. However, most of these well-being initiatives are perceived as ineffective. Some of the criticism centers on a lack of holistic well-being approaches, a lack of evidence of the success of well-being initiatives, initiatives not being part of the organization’s strategy, and a lack of genuine leadership support. This study attempts to illustrate how a holistic well-being intervention, delivered over a period of 100 days, is far more effective in impacting organizational outcomes. A quasi-experimental pre-test and post-test design with a randomization strategy to limit potential bias will be used. The organizational constructs measured are employee engagement, psychological well-being, organizational culture and trust, and perceived stress, and measurements are taken at three time points throughout the study: before, in the middle and after the intervention. The well-being initiative follows a salutogenesis approach and is aimed at building resilience by focusing on six focal areas, namely sleep, mindful eating, exercise, love, gratitude and appreciation, breath work and mindfulness, and finally, purpose. Repeated-measures ANCOVA will be used to determine whether any change occurred over the period of 100 days. The study will take place in a higher education institution in South Africa. The sample will consist of academic and administrative staff. Participants will be assigned to a test group and a control group. All participants will complete a survey measuring employee engagement, psychological well-being, organizational culture and trust, and perceived stress; only the test group will undergo the well-being intervention. The study envisages contributing on several levels: firstly, by finding a positive increase in the various well-being indicators of the participants who took part in the intervention, and secondly, by illustrating that a longer, more holistic approach is successful in improving organizational success (as measured by the various organizational outcomes).
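
One way the planned repeated-measures analysis could be set up is sketched below as a linear mixed model with a random intercept per participant, an alternative formulation to a repeated-measures ANCOVA; the file and column names are assumptions.

```python
# Minimal sketch of analysing the three-wave test/control design with a linear mixed
# model (random intercept per participant). File and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# long format: one row per participant per time point, with columns
# participant, group ('test'/'control'), time ('pre'/'mid'/'post') and the outcome
df = pd.read_csv("wellbeing_long.csv")   # hypothetical data file

model = smf.mixedlm("engagement ~ C(time) * C(group)",   # outcome could be any of the four constructs
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())   # the time-by-group interaction tests the intervention effect
```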

Keywords: wellbeing, resilience, organizational success, intervention

Procedia PDF Downloads 99