Search results for: thin layer chromatography-flame ionization detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6542

752 The Magnitude and Associated Factors of Coagulation Abnormalities Among Liver Disease Patients at the University of Gondar Comprehensive Specialized Hospital, Northwest Ethiopia

Authors: Melkamu A., Woldu B., Sitotaw C., Seyoum M., Aynalem M.

Abstract:

Background: Liver disease is any condition that affects the liver cells and their function. It is directly linked to coagulation disorders, since most coagulation factors are produced by the liver. Therefore, this study aimed to assess the magnitude and associated factors of coagulation abnormalities among liver disease patients. Methods: A cross-sectional study was conducted from August to October 2022 among 307 consecutively selected study participants at the University of Gondar Comprehensive Specialized Hospital. Sociodemographic and clinical data were collected using a structured questionnaire and a data extraction sheet, respectively. About 2.7 mL of venous blood was collected and analyzed by the Genrui CA51 coagulation analyzer. Data were entered into Epi-data and exported to STATA version 14 software for analysis. Findings were described in terms of frequencies and proportions. Factors associated with coagulation abnormalities were analyzed by bivariable and multivariable logistic regression. Result: A total of 307 study participants were included. The magnitudes of prolonged Prothrombin Time (PT) and Activated Partial Thromboplastin Time (APTT) were 68.08% and 63.51%, respectively. The presence of anemia (AOR = 2.97, 95% CI: 1.26, 7.03), a lack of a vegetable feeding habit (AOR = 2.98, 95% CI: 1.42, 6.24), no history of blood transfusion (AOR = 3.72, 95% CI: 1.78, 7.78), and a lack of physical exercise (AOR = 3.23, 95% CI: 1.60, 6.52) were significantly associated with prolonged PT. Similarly, the presence of anemia (AOR = 3.02; 95% CI: 1.34, 6.76), a lack of a vegetable feeding habit (AOR = 2.64; 95% CI: 1.34, 5.20), no history of blood transfusion (AOR = 2.28; 95% CI: 1.09, 4.79), and a lack of physical exercise (AOR = 2.35; 95% CI: 1.16, 4.78) were significantly associated with abnormal APTT. Conclusion: Patients with liver disease had substantial coagulation problems.
Being anemic, lacking a history of blood transfusion, a lack of physical activity, and a lack of vegetable consumption showed significant associations with coagulopathy. Therefore, early detection and management of coagulation abnormalities in liver disease patients are critical.
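The adjusted odds ratios above come from multivariable logistic regression. As a minimal illustration of the underlying statistic, the sketch below computes a crude (unadjusted) odds ratio with a 95% Wald confidence interval from a 2×2 table; the counts used are hypothetical and not taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only (not study data):
o, lo, hi = odds_ratio_ci(40, 20, 30, 45)
print(round(o, 2), round(lo, 2), round(hi, 2))  # 3.0 1.48 6.09
```

An adjusted OR additionally conditions on the other covariates, which requires fitting the full regression model rather than a single 2×2 table.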

Keywords: coagulation, liver disease, PT, APTT

Procedia PDF Downloads 40
751 CeO₂-Decorated Graphene-coated Nickel Foam with NiCo Layered Double Hydroxide for Efficient Hydrogen Evolution Reaction

Authors: Renzhi Qi, Zhaoping Zhong

Abstract:

Under the dual pressure of the global energy crisis and environmental pollution, avoiding the consumption of non-renewable fossil fuels based on carbon as the energy carrier, and developing and utilizing non-carbon energy carriers, are basic requirements for the future new energy economy. Electrocatalysts for water splitting play an important role in building sustainable and environmentally friendly energy conversion. The oxygen evolution reaction (OER) is essentially limited by the slow kinetics of multi-step proton-electron transfer, which limits the efficiency and raises the cost of water splitting. In this work, CeO₂@NiCo-NRGO/NF hybrid materials were prepared using nickel foam (NF) and nitrogen-doped reduced graphene oxide (NRGO) as conductive substrates by a multi-step hydrothermal method and were used as highly efficient catalysts for the OER. The well-connected nanosheet array forms a three-dimensional (3D) network on the substrate, providing a large electrochemical surface area with abundant catalytic active sites. The doping of CeO₂ in NiCo-NRGO/NF electrocatalysts promotes the dispersion of species, and its synergistic effect promotes the activation of reactants, which is crucial for improving the catalytic performance toward the OER. The results indicate that CeO₂@NiCo-NRGO/NF requires a low overpotential of only 250 mV to drive a current density of 10 mA cm⁻² for the OER in 1 M KOH, and exhibits excellent stability at this current density for more than 10 hours. The double-layer capacitance (Cdl) values show that CeO₂@NiCo-NRGO/NF significantly improves the interfacial conductivity and electrochemically active surface area. The hybrid structure promotes the catalytic performance of the oxygen evolution reaction, offering a low onset potential, high electrical activity, and excellent long-term durability. The strategy for improving the catalytic activity of NiCo-LDH can be used to develop a variety of other electrocatalysts for water splitting.
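The double-layer capacitance cited above is typically extracted as the slope of the capacitive current density versus the CV scan rate, a common proxy for the electrochemically active surface area. A minimal sketch of that fit; the current and scan-rate values below are made up for illustration, not measurements from this work.

```python
# Estimate double-layer capacitance (Cdl) as the least-squares slope of
# capacitive current density vs. CV scan rate. All data are hypothetical.
scan_rates = [10, 20, 40, 60, 80, 100]            # mV/s
currents = [0.05, 0.10, 0.20, 0.30, 0.40, 0.50]   # mA/cm^2

n = len(scan_rates)
mean_x = sum(scan_rates) / n
mean_y = sum(currents) / n
# (mA/cm^2) / (mV/s) has units of F/cm^2
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(scan_rates, currents)) \
      / sum((x - mean_x) ** 2 for x in scan_rates)
cdl_mF_per_cm2 = slope * 1000.0
print(cdl_mF_per_cm2)  # 5.0 mF/cm^2 for this synthetic, perfectly linear data
```

Comparing such slopes between a modified and an unmodified electrode is how a Cdl increase is read as an ECSA increase.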

Keywords: CeO₂, reduced graphene oxide, NiCo-layered double hydroxide, oxygen evolution reaction

Procedia PDF Downloads 63
750 Determination of Vinpocetine in Tablets with the Vinpocetine-Selective Electrode and Possibilities of Application in Pharmaceutical Analysis

Authors: Faisal A. Salih

Abstract:

Vinpocetine (Vin) is an ethyl ester of apovincamic acid and a semisynthetic derivative of vincamine, an alkaloid from the periwinkle plant Vinca minor. This compound has been found to stimulate cerebral metabolism: it increases the uptake of glucose and oxygen, as well as the consumption of these substances by brain tissue. Vinpocetine enhances blood flow in the brain and has vasodilating, antihypertensive, and antiplatelet effects. Vinpocetine also seems to improve the human ability to acquire new memories and restore memories that have been lost. The drug has been used clinically for the treatment of cerebrovascular disorders such as stroke and dementia memory disorders, as well as in ophthalmology and otorhinolaryngology. It has no reported side effects, and no toxicity has been reported with long-term use. For the quantitative determination of Vin in dosage forms, HPLC methods are generally used. A promising alternative is potentiometry with a Vin-selective electrode, which does not require expensive equipment and materials. Another advantage of the potentiometric method is that tablets and injection solutions can be analyzed directly without separation from matrix components, which reduces both analysis time and cost. In this study, it was found that, with a suitable choice of plasticizer, an electrode with the membrane composition PVC (32.8 wt.%), ortho-nitrophenyl octyl ether (66.6 wt.%), and tetrakis(4-chlorophenyl)borate (0.6 wt.%) exhibits excellent analytical performance: a lower detection limit (LDL) of 1.2∙10⁻⁷ M, a linear response range (LRR) of 3.9∙10⁻⁶–1∙10⁻³ M, and an electrode function slope of 56.2±0.2 mV/decade. Vin masses per average tablet weight, determined by direct potentiometry (DP) and potentiometric titration (PT) for two different sets of 10 tablets from two blister packs, were 100.35±0.2 mg and 100.36±0.1 mg.
The mass of Vin in individual tablets, determined using DP, ranged from 9.87±0.02 to 10.16±0.02 mg, with an RSD of 0.13–0.35%. The procedure has very good reproducibility, and excellent agreement with the declared amounts was observed.
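In direct potentiometry, a measured EMF is converted to concentration by inverting the electrode calibration line E = E⁰ + S·log₁₀(C). A minimal sketch using the slope reported above (56.2 mV/decade); the intercept E⁰ and the reading are hypothetical values for illustration only.

```python
SLOPE_MV_PER_DECADE = 56.2  # reported electrode slope
E0_MV = 420.0               # hypothetical calibration intercept (E at C = 1 M)

def concentration_from_emf(emf_mv):
    """Invert the calibration line E = E0 + S * log10(C)."""
    return 10.0 ** ((emf_mv - E0_MV) / SLOPE_MV_PER_DECADE)

# A hypothetical reading exactly four decades below the 1 M intercept:
c = concentration_from_emf(E0_MV - 4 * SLOPE_MV_PER_DECADE)
print(c)  # 1e-4 M
```

In practice E⁰ and S come from fitting standards across the linear response range before the sample EMF is measured.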

Keywords: vinpocetine, potentiometry, ion selective electrode, pharmaceutical analysis

Procedia PDF Downloads 50
749 Nanofiltration Membranes with Deposited Polyelectrolytes: Characterisation and Antifouling Potential

Authors: Viktor Kochkodan

Abstract:

The main problem arising in water treatment and desalination using pressure-driven membrane processes such as microfiltration, ultrafiltration, nanofiltration, and reverse osmosis is membrane fouling, which seriously hampers the application of membrane technologies. One of the main approaches to mitigate membrane fouling is to minimize adhesion interactions between a foulant and a membrane, and surface coating of membranes with polyelectrolytes seems to be a simple and flexible technique to improve membrane fouling resistance. In this study, composite polyamide membranes NF-90, NF-270, and BW-30 were modified using electrostatic deposition of polyelectrolyte multilayers made from various polycationic and polyanionic polymers of different molecular weights. Anionic polyelectrolytes such as poly(sodium 4-styrene sulfonate), poly(vinyl sulfonic acid, sodium salt), poly(4-styrene sulfonic acid-co-maleic acid) sodium salt, and poly(acrylic acid) sodium salt (PA), and cationic polyelectrolytes such as poly(diallyldimethylammonium chloride), poly(ethylenimine), and poly(hexamethylene biguanide) were used for membrane modification. The effects of deposition time and the number of polyelectrolyte layers on membrane modification were evaluated. It was found that the degree of membrane modification depends on the chemical nature and molecular weight of the polyelectrolytes used. The surface morphology of the prepared composite membranes was studied using atomic force microscopy. It was shown that the membrane surface roughness decreases significantly as the number of polyelectrolyte layers on the membrane surface increases. This smoothening of the membrane surface might contribute to a reduction in membrane fouling, as lower roughness is most often associated with a decrease in surface fouling.
Zeta potentials and water contact angles on the membrane surface before and after modification were also evaluated to provide additional information regarding membrane fouling. It was shown that the surface charge of the membranes modified with polyelectrolytes could be switched between positive and negative after coating with a cationic or an anionic polyelectrolyte. On the other hand, the water contact angle was strongly affected when the outermost polyelectrolyte layer was changed. Finally, a distinct difference in performance between the uncoated membranes and the polyelectrolyte-modified membranes was found during treatment of seawater in the non-continuous regime. A possible mechanism for the higher fouling resistance of the modified membranes is discussed.

Keywords: contact angle, membrane fouling, polyelectrolytes, surface modification

Procedia PDF Downloads 235
748 Preliminary Studies of Antibiofouling Properties in Wrinkled Hydrogel Surfaces

Authors: Mauricio A. Sarabia-Vallejos, Carmen M. Gonzalez-Henriquez, Adolfo Del Campo-Garcia, Aitzibier L. Cortajarena, Juan Rodriguez-Hernandez

Abstract:

In this study, the formation of and morphological differences between wrinkled hydrogel patterns obtained via the generation of surface instabilities were explored. Slight variations in the polymerization conditions produce important changes in the material composition and pattern structuration. The compounds were synthesized using three main components: an amphiphilic monomer, hydroxyethyl methacrylate (HEMA); a hydrophobic monomer, trifluoroethyl methacrylate (TFMA); and a hydrophilic crosslinking agent, poly(ethylene glycol) diacrylate (PEGDA). The first part of this study concerns the formation of wrinkled surfaces using only HEMA and PEGDA while varying the amount of water added in the reaction. The second part involves the gradual insertion of TFMA into the hydrophilic reaction mixture. Interestingly, manipulation of the chemical composition of this hydrogel affects both the surface morphology and the physicochemical characteristics of the patterns, inducing transitions from one particular type of structure (wrinkles or ripples) to different ones (creases, folds, and crumples). Contact angle measurements show that the insertion of TFMA produces a slight decrease in the surface wettability of the samples, which nevertheless remain highly hydrophilic (contact angle below 45°). More interestingly, confocal Raman spectroscopy provides important information about the wrinkle formation mechanism. The procedure, involving two consecutive thermal and photopolymerization steps, leads to a “pseudo” two-layer system. Thus, upon photopolymerization, the surface is crosslinked to a higher extent than the bulk, and water evaporation drives the formation of wrinkled surfaces.
Finally, cell and bacterial proliferation studies were performed on the samples, showing that the amount of TFMA included in each sample slightly affects the proliferation of both bacteria and cells; in the case of bacteria, the morphology of the sample also plays an important role, markedly reducing bacterial proliferation.

Keywords: antibiofouling properties, hydrophobic/hydrophilic balance, morphologic characterization, wrinkled hydrogel patterns

Procedia PDF Downloads 143
747 Study of Variation of Winds Behavior on Micro Urban Environment with Use of Fuzzy Logic for Wind Power Generation: Case Study in the Cities of Arraial do Cabo and São Pedro da Aldeia, State of Rio de Janeiro, Brazil

Authors: Roberto Rosenhaim, Marcos Antonio Crus Moreira, Robson da Cunha, Gerson Gomes Cunha

Abstract:

This work provides details on wind speed behavior within the cities of Arraial do Cabo and São Pedro da Aldeia, located in the Lakes Region of the State of Rio de Janeiro, Brazil. This region has one of the best potentials for wind power generation. In the interurban layer, wind conditions are very complex and depend on physical geography, the size and orientation of surrounding buildings and constructions, population density, and land use. In the same context, the fundamental surface parameter that governs the production of flow turbulence in urban canyons is the surface roughness. Such factors can influence the potential for power generation from the wind within the cities. Moreover, small-scale wind is not fully utilized due to the complexity of measuring wind flow inside cities, and it is difficult to accurately predict this type of resource. This study demonstrates how fuzzy logic can facilitate the assessment of the complexity of the wind potential inside the cities. It presents a decision support tool and its ability to deal with inaccurate information using linguistic variables created by the heuristic method. It relies on previously published studies of the variables that influence wind speed in the urban environment. These variables were turned into the verbal expressions used in the computer system, which facilitated the establishment of rules for fuzzy inference and integration with a smartphone application used in the research. The first part of the study describes the challenges of sustainable development, followed by incentive policies for the use of renewable energy in Brazil. The next chapter presents the characteristics of the study area and the concepts of fuzzy logic. Data were collected in a field experiment using qualitative and quantitative assessment methods.
As a result, a map of various points within the studied cities is presented, with wind viability evaluated by a decision support system using multivariate classification based on fuzzy logic.
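The fuzzy inference described above maps linguistic variables through heuristic rules to a viability score. A toy Mamdani-style sketch of that idea follows; the membership functions, rule set, and output weights are illustrative assumptions, not the system used in the study.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def wind_viability(roughness, density):
    """Toy Mamdani-style inference: two normalized inputs -> score in [0, 1].
    Linguistic terms and rule outputs are illustrative only."""
    low_r = tri(roughness, -0.5, 0.0, 0.5)    # 'low surface roughness'
    high_r = tri(roughness, 0.5, 1.0, 1.5)    # 'high surface roughness'
    low_d = tri(density, -0.5, 0.0, 0.5)      # 'low building density'
    high_d = tri(density, 0.5, 1.0, 1.5)      # 'high building density'
    # Rule 1: low roughness AND low density -> good site (output 1.0)
    # Rule 2: high roughness OR high density -> poor site (output 0.1)
    w1 = min(low_r, low_d)
    w2 = max(high_r, high_d)
    if w1 + w2 == 0:
        return 0.5  # no rule fires: neutral score
    return (w1 * 1.0 + w2 * 0.1) / (w1 + w2)  # weighted-average defuzzification

print(wind_viability(0.0, 0.0))  # 1.0: open, smooth site
print(wind_viability(1.0, 1.0))  # 0.1: dense urban canyon
```

A real system would use many more linguistic variables (geography, land use, building orientation) and rules tuned from the published studies the paper cites.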

Keywords: behavior of winds, wind power, fuzzy logic, sustainable development

Procedia PDF Downloads 273
746 Semi-Automatic Segmentation of Mitochondria on Transmission Electron Microscopy Images Using Live-Wire and Surface Dragging Methods

Authors: Mahdieh Farzin Asanjan, Erkan Unal Mumcuoglu

Abstract:

Mitochondria are cytoplasmic organelles of the cell which have a significant role in a variety of cellular metabolic functions. Mitochondria act as the power plants of the cell and are surrounded by two membranes. Significant morphological alterations are often due to changes in mitochondrial functions. Electron microscope tomography is a powerful technique for studying the three-dimensional (3D) structure of mitochondria and its alterations in disease states. Detection of mitochondria in electron microscopy images is a challenging problem due to the presence of various subcellular structures and imaging artifacts. Another challenge is that each image typically contains more than one mitochondrion. Hand segmentation of mitochondria is tedious and time-consuming, and also requires special knowledge about mitochondria. Fully automatic segmentation methods lead to over-segmentation, and mitochondria are not segmented properly. Therefore, semi-automatic segmentation methods with minimal manual effort are required to edit the results of fully automatic segmentation methods. Here, two editing tools were implemented by applying spline surface dragging and interactive live-wire segmentation tools. These editing tools were applied separately to the results of fully automatic segmentation. 3D extensions of these tools were also studied and tested. The 2D and 3D Dice coefficients for surface dragging using splines were 0.93 and 0.92, while those for the live-wire method were 0.94 and 0.91, respectively. The 2D and 3D root mean square symmetric surface distance values for surface dragging were 0.69 and 0.93, while those for the live-wire tool were 0.60 and 2.11. Comparing these editing tools with the automatic segmentation method shows that they led to better results that were more similar to the ground truth image, but the required time was higher than the hand-segmentation time.
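The Dice coefficient reported above measures the overlap between a segmentation and the ground truth. A minimal sketch of the computation on flattened binary masks (the masks below are illustrative data only):

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A|+|B|) between two binary masks
    given as equal-length sequences of 0/1 values."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

a = [1, 1, 1, 0, 0, 1]
b = [1, 1, 0, 0, 1, 1]
print(dice(a, b))  # 2*3 / (4+4) = 0.75
```

A Dice of 1.0 means perfect overlap; the 0.91-0.94 values in the abstract therefore indicate segmentations very close to the manual ground truth.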

Keywords: medical image segmentation, semi-automatic methods, transmission electron microscopy, surface dragging using splines, live-wire

Procedia PDF Downloads 149
745 Remote Vital Signs Monitoring in Neonatal Intensive Care Unit Using a Digital Camera

Authors: Fatema-Tuz-Zohra Khanam, Ali Al-Naji, Asanka G. Perera, Kim Gibson, Javaan Chahl

Abstract:

Conventional contact-based vital signs monitoring sensors, such as pulse oximeters or electrocardiogram (ECG) electrodes, may cause discomfort, skin damage, and infections, particularly in neonates with fragile, sensitive skin. Therefore, remote monitoring of vital signs is desired in both clinical and non-clinical settings to overcome these issues. Camera-based vital signs monitoring is a recent technology for these applications with many positive attributes. However, there are still limited camera-based studies on neonates in clinical settings. In this study, the heart rate (HR) and respiratory rate (RR) of eight infants at the Neonatal Intensive Care Unit (NICU) in Flinders Medical Centre were remotely monitored using a digital camera, applying color- and motion-based computational methods. The region of interest (ROI) was efficiently selected by incorporating an image decomposition method. Furthermore, spatial averaging, spectral analysis, band-pass filtering, and peak detection were used to extract both HR and RR. The experimental results were validated against ground truth data obtained from an ECG monitor and showed strong correlations, with Pearson correlation coefficients (PCC) of 0.9794 and 0.9412 for HR and RR, respectively. The RMSE between camera-based data and ECG data was 2.84 beats/min for HR and 2.91 breaths/min for RR. A Bland-Altman analysis also showed close agreement between the two data sets, with mean biases of 0.60 beats/min and 1 breath/min, and limits of agreement of -4.9 to +6.1 beats/min and -4.4 to +6.4 breaths/min for HR and RR, respectively. Therefore, video camera imaging may replace conventional contact-based monitoring in the NICU and has potential applications in other contexts, such as home health monitoring.
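The Bland-Altman figures above (mean bias and limits of agreement) come from the paired differences between the two measurement methods. A minimal sketch of that computation on hypothetical paired heart-rate readings, not the study's data:

```python
import math

def bland_altman(ref, est):
    """Mean bias and 95% limits of agreement (bias ± 1.96 SD of the
    paired differences) between a reference and an estimate."""
    diffs = [e - r for r, e in zip(ref, est)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired HR readings (ECG vs. camera), beats/min:
ecg = [120, 132, 128, 140, 125, 136]
cam = [121, 131, 130, 139, 126, 135]
bias, lo, hi = bland_altman(ecg, cam)
print(round(bias, 3), round(lo, 2), round(hi, 2))
```

If roughly 95% of the differences fall inside the limits and the limits are clinically acceptable, the two methods are considered interchangeable.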

Keywords: neonates, NICU, digital camera, heart rate, respiratory rate, image decomposition

Procedia PDF Downloads 93
744 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment

Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments to realize simultaneous localization and environment map construction, which has a wide range of applications in autonomous driving, virtual reality, and other related fields. At present, existing VSLAM systems can maintain high accuracy in static environments. In dynamic environments, however, the movement of objects in the scene reduces the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even failure. In this paper, a robust VSLAM method is proposed to deal effectively with dynamic environments. We propose a dynamic region removal scheme based on semantic segmentation neural networks and geometric constraints. First, a semantic segmentation neural network is used to extract the prior active motion region, prior static region, and prior passive motion region in the environment. Then, a lightweight frame-tracking module initializes the transform pose between the previous frame and the current frame on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow divides the environment into static and dynamic regions, eliminating the dynamic object regions. Finally, only the static region is used by the tracking thread. Our research is based on ORB-SLAM3, one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORB-SLAM3 by 70%–98.5% in highly dynamic environments.
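The motion-consistency check based on multi-view geometry flags feature points whose distance to their epipolar line is too large to be explained by camera motion alone. A minimal sketch of that epipolar-error test; the fundamental matrix and point coordinates below are a hypothetical pure-horizontal-translation case, not output of the proposed system.

```python
import math

def epipolar_error(F, p1, p2):
    """Distance from homogeneous point p2 to the epipolar line F @ p1.
    Large distances violate the static-scene assumption (dynamic point)."""
    l = [sum(F[i][j] * p1[j] for j in range(3)) for i in range(3)]
    num = abs(sum(l[i] * p2[i] for i in range(3)))
    return num / math.hypot(l[0], l[1])

# Hypothetical fundamental matrix for a pure horizontal camera translation;
# its epipolar lines are horizontal, so a static point keeps its row (y).
F = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]

static_err = epipolar_error(F, [100, 50, 1], [90, 50, 1])   # same row
dynamic_err = epipolar_error(F, [100, 50, 1], [90, 58, 1])  # moved off the line
print(static_err, dynamic_err)  # 0.0 8.0
```

With a pixel threshold (e.g. a few pixels), points like the second one are labeled dynamic and excluded from tracking, which is the geometric half of the paper's semantic-plus-geometric scheme.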

Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM

Procedia PDF Downloads 92
743 Ion Beam Writing and Implantation in Graphene Oxide, Reduced Graphene Oxide and Polyimide Through Polymer Mask for Sensorics Applications

Authors: Jan Luxa, Vlastimil Mazanek, Petr Malinsky, Alexander Romanenko, Mariapompea Cutroneo, Vladimir Havranek, Josef Novak, Eva Stepanovska, Anna Mackova, Zdenek Sofer

Abstract:

Using accelerated energetic ions is an interesting method for introducing structural changes in various carbon-based materials. The properties can be altered in two ways: a) the ions lead to the formation of conductive pathways in graphene oxide structures due to the elimination of oxygen functionalities, and b) doping with selected ions forms metal nanoclusters, thus increasing the conductivity. In this work, energetic beams were employed in two ways to prepare capacitor structures in graphene oxide (GO), reduced graphene oxide (rGO), and polyimide (PI) on a micro-scale. The first method used ion beam writing with a focused ion beam; the second involved ion implantation through a polymeric mask. To prepare the polymeric mask, direct spin-coating of PMMA on top of the foils was used, followed by proton beam writing and development in isopropyl alcohol. Finally, the mask was removed with acetone. All three materials were exposed to ion beams with an energy of 2.5-5 MeV and an ion fluence of 3.75×10¹⁴ cm⁻² (1800 nC·mm⁻²). The prepared microstructures were thoroughly characterized by various analytical methods, including scanning electron microscopy (SEM) with energy-dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), micro-Raman spectroscopy, Rutherford backscattering spectrometry (RBS), and elastic recoil detection analysis (ERDA). Finally, these materials were employed and tested as humidity sensors using electrical conductivity measurements. The results clearly demonstrate that the type of ions, their energy, and their fluence all have a significant influence on the sensory properties of the prepared sensors.
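The quoted fluence and deposited-charge density are related by Φ = Q / (q·e) per unit area. The sketch below runs that unit conversion for 1800 nC·mm⁻² at a few assumed ion charge states; note that, as an arithmetic observation only, a charge state of 3 reproduces the quoted 3.75×10¹⁴ cm⁻².

```python
E_CHARGE = 1.602e-19  # C, elementary charge

def fluence_per_cm2(charge_nC_per_mm2, charge_state):
    """Ion fluence implied by a deposited charge density:
    Phi = Q / (q * e), converting nC -> C and mm^-2 -> cm^-2."""
    coulomb_per_cm2 = charge_nC_per_mm2 * 1e-9 * 100.0  # 1 cm^2 = 100 mm^2
    return coulomb_per_cm2 / (charge_state * E_CHARGE)

for q in (1, 2, 3):
    print(q, f"{fluence_per_cm2(1800, q):.3e}")
# q = 3 yields ~3.75e14 ions/cm^2, matching the quoted fluence
```

The actual charge state depends on the ion species and accelerator setup, which the abstract does not specify; the calculation is only a consistency check between the two quoted numbers.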

Keywords: graphene, graphene oxide, polyimide, ion implantation, sensors

Procedia PDF Downloads 67
742 The Microstructure and Corrosion Behavior of High Entropy Metallic Layers Electrodeposited by Low and High-Temperature Methods

Authors: Zbigniew Szklarz, Aldona Garbacz-Klempka, Magdalena Bisztyga-Szklarz

Abstract:

Typical metallic alloys are based on one major alloying component, where the addition of other elements is intended to improve or modify certain properties, most of all the mechanical properties. However, in 1995 a new concept of metallic alloys was described and defined. High Entropy Alloys (HEA) contain at least five alloying elements, each in an amount from 5 to 20 at.%. Common features of this type of alloy are the absence of intermetallic phases, high homogeneity of the microstructure, and a unique chemical composition, which leads to materials with very high strength indicators, stable structures (also at high temperatures), and excellent corrosion resistance. Hence, HEA can successfully be used as substitutes for typical metallic alloys in various applications where sufficiently high properties are desirable. For fabricating HEA, a few routes are applied: 1/ from the liquid phase, i.e., casting (usually arc melting); 2/ from the solid phase, i.e., powder metallurgy (sintering methods preceded by mechanical synthesis); 3/ from the gas phase, e.g., sputtering; or 4/ other deposition methods such as electrodeposition from liquids. Different production methods create different HEA microstructures, which can entail differences in their properties. The last two methods also allow coatings with HEA structures to be obtained, hereinafter referred to as High Entropy Films (HEF). With reference to the above, the crucial aim of this work was the optimization of the manufacturing process of multi-component metallic layers (HEF) by low- and high-temperature electrochemical deposition (ED). The low-temperature deposition process was carried out at ambient or elevated temperature (up to 100 ᵒC) in an organic electrolyte. The high-temperature electrodeposition (several hundred degrees Celsius), in turn, allowed the HEF layer to be formed by electrochemical reduction of metals from molten salts. The basic chemical composition of the coatings was CoCrFeMnNi (known as the Cantor alloy).
However, it was modified with other selected elements such as Al or Cu. The main result of the presented studies is the optimization of the parameters that yield an HEF composition that is as homogeneous and equimolar as possible. In order to analyse and compare the microstructure, SEM/EBSD, TEM, and XRD techniques were employed. Moreover, the determination of the corrosion resistance of the CoCrFeMnNi(Cu or Al) layers in selected electrolytes (i.e., organic and non-organic liquids) was no less important than the above-mentioned objectives.

Keywords: high entropy alloys, electrodeposition, corrosion behavior, microstructure

Procedia PDF Downloads 62
741 Improvements in Transient Testing in the Transient REActor Test (TREAT) with a Choice of Filter

Authors: Harish Aryal

Abstract:

The safe and reliable operation of nuclear reactors has always been one of the topmost priorities in the nuclear industry. Transient testing allows us to understand the time-dependent behavior of the neutron population in response to either a planned change in reactor conditions or unplanned circumstances. These unforeseen conditions might occur due to sudden reactivity insertions, feedback, power excursions, instabilities, and accidents. To study such behavior, we need transient testing, which is like car crash testing used to estimate the durability and strength of a car design. In nuclear designs, such transient testing can simulate a wide range of accidents due to sudden reactivity insertions and helps to study the feasibility and integrity of the fuel to be used in certain reactor types. This testing involves a high neutron flux environment and real-time imaging technology, with advanced instrumentation of appropriate accuracy and resolution to study fuel slumping behavior. With the aid of transient testing and adequate imaging tools, it is possible to test the safety basis for reactor and fuel designs, which serves as a gateway to licensing advanced reactors in the future. To that end, it is crucial to fully understand advanced imaging techniques, both analytically and via simulations. This paper presents an innovative method of supporting real-time imaging of fuel pins and other structures during transient testing. The major fuel-motion detection device studied in this work is the hodoscope, which requires collimators. This paper provides 1) an MCNP model and simulation of a Transient Reactor Test (TREAT) core with a central fuel element replaced by a slotted fuel element that provides an open path between test samples and a hodoscope detector, and 2) the choice of a good filter to improve image resolution.

Keywords: hodoscope, transient testing, collimators, MCNP, TREAT, hodogram, filters

Procedia PDF Downloads 58
740 Innovative Screening Tool Based on Physical Properties of Blood

Authors: Basant Singh Sikarwar, Mukesh Roy, Ayush Goyal, Priya Ranjan

Abstract:

This work combines two bodies of knowledge: the biomedical basis of blood stain formation and the fluid mechanics community's understanding that such stain formation depends heavily on physical properties. Moreover, biomedical research shows that different patterns in blood stains are robust indicators of the donor's health or lack thereof. Based on these valuable insights, an innovative screening tool is proposed which can act as an aide in the diagnosis of diseases such as anemia, hyperlipidaemia, tuberculosis, blood cancer, leukemia, and malaria, with enhanced confidence in the proposed analysis. To realize this powerful technique, simple, robust, and low-cost micro-fluidic devices, a micro-capillary viscometer and a pendant drop tensiometer, are designed and proposed to be fabricated to measure the viscosity, surface tension, and wettability of various blood samples. Once prognosis and diagnosis data have been generated, automated linear and nonlinear classifiers are applied for automated reasoning and presentation of results. A support vector machine (SVM) classifies data in a linear fashion. Discriminant analysis and nonlinear embeddings are coupled with nonlinear manifold detection in the data, and decisions are made accordingly. In this way, physical properties can be used, via linear and non-linear classification techniques, for screening of various diseases in humans and cattle. Experiments were carried out to validate the physical property measurement devices. This framework can be further developed towards a real-life portable disease screening cum diagnostics tool. Small-scale production of screening cum diagnostic devices is proposed to carry out independent tests.
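The linear classification step can be illustrated with a simple perceptron standing in for the SVM on two physical-property features. The synthetic viscosity and surface-tension values below, and their separation, are illustrative assumptions only, not clinical data.

```python
import random

random.seed(0)

# Synthetic two-feature samples (viscosity in mPa*s, surface tension in mN/m);
# class means and spreads are made up for illustration.
healthy = [(4.0 + random.gauss(0, 0.2), 55.0 + random.gauss(0, 1.0)) for _ in range(30)]
abnormal = [(6.0 + random.gauss(0, 0.2), 48.0 + random.gauss(0, 1.0)) for _ in range(30)]
samples = [(f, +1) for f in healthy] + [(f, -1) for f in abnormal]

# Center each feature so a hyperplane through the origin can separate them.
m1 = sum(f[0] for f, _ in samples) / len(samples)
m2 = sum(f[1] for f, _ in samples) / len(samples)
data = [((f[0] - m1, f[1] - m2), y) for f, y in samples]

# A plain perceptron as a stand-in for the linear (SVM-style) classifier.
w = [0.0, 0.0]
for _ in range(100):
    for (x1, x2), y in data:
        if y * (w[0] * x1 + w[1] * x2) <= 0:  # misclassified: update weights
            w[0] += y * x1
            w[1] += y * x2

accuracy = sum(1 for (x1, x2), y in data
               if y * (w[0] * x1 + w[1] * x2) > 0) / len(data)
print(accuracy)
```

An actual SVM additionally maximizes the margin between the classes rather than accepting any separating hyperplane, which matters when the property distributions of healthy and diseased samples overlap.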

Keywords: blood, physical properties, diagnostic, nonlinear, classifier, device, surface tension, viscosity, wettability

Procedia PDF Downloads 363
739 Bioinformatics Identification of Rare Codon Clusters in Protein Structures of HBV

Authors: Abdorrasoul Malekpour, Mohammad Ghorbani Mojtaba Mortazavi, Mohammadreza Fattahi, Mohammad Hassan Meshkibaf, Ali Fakhrzad, Saeid Salehi, Saeideh Zahedi, Amir Ahmadimoghaddam, Parviz Farzadnia Dr., Mohammadreza Hajyani Asl Bs

Abstract:

Hepatitis B, an infectious disease, has eight main genotypes (A–H). The aim of this study is to bioinformatically identify Rare Codon Clusters (RCC) in the protein structures of HBV. For detection of the protein family accession numbers (Pfam) of HBV proteins, the UniProt database and the Pfam search tool were used. The obtained Pfam IDs were analyzed in the Sherlocc program, and RCCs in HBV proteins were detected. Furthermore, the structures of TrEMBL entry proteins were studied in the PDB database, and the 3D structures of the HBV proteins and the locations of the RCCs were visualized and studied using Swiss PDB Viewer software. The Pfam search tool found nine significant hits and 0 insignificant hits in 3 frames. Results from the Sherlocc program show that it did not identify RCCs in the external core antigen (PF08290) or the truncated HBeAg protein (PF08290). By contrast, RCCs were identified in the hepatitis core antigen (PF00906), large envelope protein S (PF00695), X protein (PF00739), the DNA polymerase (viral) N-terminal domain (PF00242), and Protein P (PF00336). In the HBV genome, seven RCCs were identified, located in the hepatitis core antigen, large envelope protein S, and DNA polymerase proteins; the protein structures of the TrEMBL entry sequences reported in the Sherlocc program outputs are not complete. Based on the position of the RCCs in the structure of HBV proteins, it is suggested that these RCCs are important in the HBV life cycle. We hope that this study provides a new and deep perspective for protein research and drug design for the treatment of HBV.
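Rare codon cluster detection of the kind performed by Sherlocc can be pictured as a sliding-window scan over codon-usage frequencies, flagging windows whose mean usage falls below a cutoff. The usage table, window size, and cutoff below are illustrative assumptions, not Sherlocc's actual data or parameters.

```python
# Toy sliding-window scan for rare-codon clusters. Real tools use curated,
# organism-specific codon-usage tables; these frequencies are invented.
usage = {"AAA": 0.40, "GAA": 0.35, "CGA": 0.02, "CGG": 0.03, "AGG": 0.02}

def rare_codon_windows(codons, window=3, cutoff=0.10):
    """Return start indices of windows whose mean codon usage < cutoff."""
    hits = []
    for i in range(len(codons) - window + 1):
        mean = sum(usage[c] for c in codons[i:i + window]) / window
        if mean < cutoff:
            hits.append(i)
    return hits

seq = ["AAA", "GAA", "CGA", "CGG", "AGG", "AAA", "GAA"]
print(rare_codon_windows(seq))  # [2] -> the cluster CGA, CGG, AGG
```

Consecutive or overlapping hit windows would then be merged into one reported cluster and mapped onto the 3D structure, as the study does with Swiss PDB Viewer.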

Keywords: rare codon clusters, hepatitis B virus, bioinformatic study, infectious disease

Procedia PDF Downloads 465
738 Improving Search Engine Performance by Removing Indexes to Malicious URLs

Authors: Durga Toshniwal, Lokesh Agrawal

Abstract:

As the web continues to play an increasing role in information exchange and daily activities, computer users have become the target of miscreants who infect hosts with malware or adware for financial gain. Unfortunately, even a single visit to a compromised web site enables an attacker to detect vulnerabilities in the user's applications and force the download of a multitude of malware binaries. We provide an approach to effectively scan the Internet for so-called drive-by downloads. Drive-by downloads are the result of URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. To scan the web for malicious pages, the first step is to use a crawler to collect URLs that live on the Internet, and then to apply fast prefiltering techniques to reduce the number of pages that need to be examined by precise, but slower, analysis tools (such as honeyclients or antivirus programs). Although this technique is effective, it requires a substantial amount of resources, chiefly because the crawler encounters many legitimate pages that must be filtered out. In this paper, to characterize the nature of this rising threat, we present an implementation of a web crawler in Python and an approach to search the web more efficiently for pages that are likely to be malicious, filtering out benign pages and passing the remaining pages to an antivirus program for malware detection. Our approach starts from an initial seed of known malicious web pages. Using these seeds, our system generates search engine queries to identify other malicious pages that are similar to those in the initial seed. By doing so, it leverages the crawling infrastructure of search engines to retrieve URLs that are much more likely to be malicious than a random page on the web. The results show that this guided approach identifies malicious web pages more efficiently than random crawling-based approaches.
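The seed-driven query generation and fast prefiltering steps described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' crawler: the function names (`generate_queries`, `prefilter`) and the specific heuristics are our own assumptions.

```python
import re
from collections import Counter

def generate_queries(seed_pages, n_terms=3, n_queries=5):
    """Build search-engine queries from tokens shared across seed pages."""
    counts = Counter()
    for page in seed_pages:
        # count each token once per page so shared vocabulary rises to the top
        counts.update(set(re.findall(r"[a-z]{4,}", page.lower())))
    common = [t for t, c in counts.most_common() if c > 1]
    limit = min(len(common), n_terms * n_queries)
    return [" ".join(common[i:i + n_terms]) for i in range(0, limit, n_terms)]

def prefilter(url, html):
    """Cheap heuristic prefilter: flag pages worth sending to slow analysis."""
    suspicious = 0
    if re.search(r"<iframe[^>]*(hidden|width=.?0)", html, re.I):
        suspicious += 1  # hidden iframes are a common drive-by vector
    if "unescape(" in html.lower() or "eval(" in html.lower():
        suspicious += 1  # obfuscated JavaScript
    if re.search(r"\.(exe|scr|msi)\b", url, re.I):
        suspicious += 1  # URL points at an executable payload
    return suspicious >= 1
```

Only pages that survive this kind of cheap filter would be forwarded to the slower honeyclient or antivirus stage.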

Keywords: web crawler, malware, seeds, drive-by downloads, security

Procedia PDF Downloads 217
737 Human Factors Integration of Chemical, Biological, Radiological and Nuclear Response: Systems and Technologies

Authors: Graham Hancox, Saydia Razak, Sue Hignett, Jo Barnes, Jyri Silmari, Florian Kading

Abstract:

In the event of a Chemical, Biological, Radiological and Nuclear (CBRN) incident, rapidly gaining situational awareness is of paramount importance, and advanced technologies have an important role to play in improving detection, identification, monitoring (DIM) and patient tracking. Understanding how these advanced technologies fit into current response systems is essential to ensure they are optimally designed, usable and meet end-users' needs. For this reason, Human Factors (Ergonomics) methods have been used within an EU Horizon 2020 project (TOXI-Triage), firstly to describe (map) the hierarchical structure of a CBRN response with adapted Accident Map (AcciMap) methodology. Secondly, Hierarchical Task Analysis (HTA) has been used to describe and review the sequence of steps (sub-tasks) in a CBRN scenario response as a task system. HTA methodology was then used to map one advanced technology, 'Tag and Trace', which tags an element (people, samples and equipment) with a Near Field Communication (NFC) chip in the Hot Zone to allow tracing (monitoring) of, for example, casualty progress through the response. This HTA mapping of the Tag and Trace system showed how the provider envisaged the technology being used, allowing for review of its fit with current CBRN response systems. These methodologies have been found to be very effective in promoting and supporting dialogue between end-users and technology providers. The Human Factors methods have given clear diagrammatic (visual) representations of how providers see their technology being used and how end-users would actually use it in the field, allowing for a more user-centred approach to the design process. For CBRN events, usability is critical, as sub-optimal design of technology could add to a responder's workload in what is already a chaotic, ambiguous and safety-critical environment.

Keywords: AcciMap, CBRN, ergonomics, hierarchical task analysis, human factors

Procedia PDF Downloads 194
736 Evaluation of NoSQL in the Energy Marketplace with GraphQL Optimization

Authors: Michael Howard

Abstract:

The growing popularity of electric vehicles in the United States requires an ever-expanding infrastructure of commercial DC fast charging stations. The U.S. Department of Energy estimates 33,355 publicly available DC fast charging stations as of September 2023. By comparison, 115,370 gasoline stations were operating in the United States in 2017, making them far more ubiquitous than DC fast chargers. Range anxiety is an important impediment to the adoption of electric vehicles and is even more relevant in underserved regions of the country. The peer-to-peer energy marketplace helps fill the demand by allowing private home and small business owners to rent out their 240 Volt, level-2 charging facilities. The existing, publicly accessible outlets are wrapped with a Cloud-connected microcontroller managing security and charging sessions. These microcontrollers act as Edge devices communicating with a Cloud message broker, while both buyer and seller users interact with the framework via a web-based user interface. The database storage used by the marketplace framework is a key component in both the cost of development and the performance that contributes to the user experience. A traditional storage solution is the SQL database. The architecture and query language have been in existence since the 1970s and are well understood and documented. The Structured Query Language supported by the query engine provides fine granularity in user query conditions. However, difficulty in scaling across multiple nodes and the cost of its server-based compute have driven a trend over the last 20 years towards NoSQL, serverless approaches. In this study, we evaluate NoSQL vs. SQL solutions through a comparison of the Google Cloud Firestore and Cloud SQL MySQL offerings. The comparison pits Google's serverless, document-model, non-relational NoSQL service against the server-based, table-model, relational SQL service. The evaluation is based on query latency, flexibility/scalability, and cost criteria.
Through benchmarking and analysis of the architecture, we determine whether Firestore can support the energy marketplace storage needs and whether the introduction of a GraphQL middleware layer can overcome its deficiencies.
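The query-latency criterion can be measured with a small benchmarking harness of the following shape. This is a hedged sketch rather than the study's actual setup: `query_fn` is a placeholder for a Firestore or Cloud SQL client call, and the statistics reported are our choice.

```python
import statistics
import time

def benchmark(query_fn, runs=50):
    """Time repeated calls to a storage query function; report latency stats in ms."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        query_fn()  # placeholder for e.g. a Firestore document fetch
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * (runs - 1))],
        "mean_ms": statistics.fmean(latencies),
    }
```

Running the same harness against both backends with identical query workloads gives directly comparable median and tail latencies.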

Keywords: non-relational, relational, MySQL, mitigate, Firestore, SQL, NoSQL, serverless, database, GraphQL

Procedia PDF Downloads 35
735 Generating 3D Battery Cathode Microstructures using Gaussian Mixture Models and Pix2Pix

Authors: Wesley Teskey, Vedran Glavas, Julian Wegener

Abstract:

Generating battery cathode microstructures is an important area of research, given the proliferation of automotive batteries. Currently, finite element analysis (FEA) is often used to simulate battery cathode microstructures before physical batteries are manufactured and tested to verify the simulation results. Unfortunately, a key drawback of FEA is that it is very slow in terms of computational runtime. Generative AI offers a key speed advantage over FEA and is therefore capable of evaluating very large numbers of candidate microstructures. Given the AI-generated candidates, a subset of promising microstructures can be selected for further validation using FEA. Leveraging the speed advantage of AI allows for a better final microstructure selection, because high speed allows many more candidates to be evaluated. In the approach presented, 3D candidate cathode microstructures are generated using Gaussian Mixture Models (GMMs) and pix2pix. The approach first uses a GMM to generate a population of spheres (representing the 'active material' of the cathode). Once spheres have been sampled from the GMM, they are placed within a microstructure. Subsequently, pix2pix sweeps over the 3D microstructure iteratively, slice by slice, adding detail to determine which portions of the microstructure will become electrolyte and which will become binder. In this manner, each subsequent slice is evaluated using pix2pix, with the previously processed layers of the microstructure as inputs. By feeding pix2pix fully processed previous layers, candidate microstructures can be constrained to represent a realistic physical reality.
More specifically, for the microstructure to represent a realistic physical reality, the locations of electrolyte and binder in each layer must reasonably match their locations in previous layers to ensure geometric continuity. Using the approach outlined above, a 10x to 100x speed increase was achieved when generating candidate microstructures with AI compared to an FEA-only approach. A key metric for evaluating microstructures was the battery specific power that a microstructure would be able to produce. The best generative AI result obtained was a 12% increase in specific power for a candidate microstructure compared to what an FEA-only approach was capable of producing; this 12% increase was verified by FEA simulation.
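The GMM sphere-sampling stage can be sketched in pure Python. This is only an illustration of the sampling step: the study presumably used a GMM fitted to real cathode data, and the component format, seed handling and radius clamp below are our assumptions.

```python
import random

def sample_spheres(components, n, seed=0):
    """Sample n spheres (x, y, z, radius) from a Gaussian mixture.

    components: list of (weight, means, std_devs), where means/std_devs are
    4-dimensional vectors covering position (x, y, z) and radius.
    """
    rng = random.Random(seed)
    weights = [w for w, _, _ in components]
    spheres = []
    for _ in range(n):
        # pick a mixture component in proportion to its weight
        _, mu, sigma = rng.choices(components, weights=weights, k=1)[0]
        x, y, z, r = (rng.gauss(m, s) for m, s in zip(mu, sigma))
        spheres.append((x, y, z, max(r, 0.05)))  # clamp radius to stay physical
    return spheres
```

The sampled spheres would then be placed into a voxel grid before pix2pix assigns electrolyte and binder slice by slice.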

Keywords: finite element analysis, gaussian mixture models, generative design, Pix2Pix, structural design

Procedia PDF Downloads 91
734 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition

Authors: Habtamu Garoma Debela, Gemechis File Duressa

Abstract:

In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε, known as the singular perturbation parameter. This leads to the existence of boundary layers, which are narrow regions in the neighborhood of the boundary of the domain where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Because of this layer phenomenon, it is a challenging task to provide ε-uniform numerical methods. The term 'ε-uniform' identifies those numerical methods in which the approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on a nonstandard fitted finite difference method. The basic idea behind the fitted operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via a nonstandard fitted operator method combined with numerical integration to solve the problem; the non-local boundary condition is treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied to singularly perturbed convection-diffusion problems using the proposed numerical method.
Maximum absolute errors and rates of convergence for different values of the perturbation parameter and mesh sizes are tabulated for the numerical example considered. The method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted that support all of our theoretical findings. A concise conclusion is provided at the end of this work.
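The effect of Richardson extrapolation, lifting a first-order scheme to second-order accuracy by cancelling the leading error term, can be illustrated on a simpler model problem (one-sided numerical differentiation) rather than the full fitted-operator scheme; the sketch below is ours, not the paper's code.

```python
import math

def forward_diff(f, x, h):
    """First-order accurate one-sided difference: D(h) = f'(x) + c1*h + c2*h**2 + ..."""
    return (f(x + h) - f(x)) / h

def richardson(f, x, h):
    """Combine approximations at h and h/2 as 2*D(h/2) - D(h), which cancels
    the c1*h term and leaves an O(h**2) error."""
    return 2.0 * forward_diff(f, x, h / 2) - forward_diff(f, x, h)
```

For f = exp at x = 1 with h = 1e-3, the extrapolated value is several orders of magnitude more accurate than either first-order approximation, mirroring the order improvement reported for the fitted scheme.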

Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent

Procedia PDF Downloads 131
733 Monitoring Deforestation Using Remote Sensing and GIS

Authors: Tejaswi Agarwal, Amritansh Agarwal

Abstract:

The forest ecosystem plays a very important role in the global carbon cycle: it stores about 80% of all above-ground and 40% of all below-ground terrestrial organic carbon. There is much interest in the extent of tropical forests and their rates of deforestation for two reasons: their contribution of greenhouse gases and their profoundly negative impact on biodiversity. Deforestation has many ecological, social and economic consequences, one of which is the loss of biological diversity. The rapid deployment of remote sensing (RS) satellites and the development of RS analysis techniques over the past three decades have provided a reliable, effective, and practical way to characterize terrestrial ecosystem properties. Global estimates of tropical deforestation vary widely, ranging from 50,000 to 170,000 km²/yr; recent FAO estimates for 1990–1995 cite 116,756 km²/yr globally. Remote sensing can prove to be a very useful tool for monitoring forests and associated deforestation to a sufficient level of accuracy without the need to physically survey the forest areas, many of which are physically inaccessible. The methodology for the assessment of forest cover using digital image processing (ERDAS) has been followed. The satellite data for the study were procured in digital format from the Indian Institute of Remote Sensing (IIRS), Dehradun. While procuring the satellite data, care was taken to ensure that the data were cloud free and did not belong to the dry, leafless season. The Normalized Difference Vegetation Index (NDVI), defined as NDVI = (NIR − Red) / (NIR + Red), has been used as a numerical indicator of the reduction in ground biomass. After calculating the NDVI variations and the associated mean, we analysed the change in ground biomass. Through this paper, we indicate the rate of deforestation over a given period of time by comparing the forest cover at different time intervals. With the help of remote sensing and GIS techniques, it is clearly shown that the total forest cover is continuously degrading and being transformed into various land use/land cover categories.
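The NDVI computation described above reduces to a few lines. The study worked on full raster images in ERDAS, so the sketch below over flattened band arrays is only illustrative; the function names are ours.

```python
def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red); 0.0 where both bands are zero."""
    return [(n - r) / (n + r) if (n + r) != 0 else 0.0
            for n, r in zip(nir, red)]

def mean_ndvi_change(before, after):
    """Mean of the per-pixel NDVI difference; negative values suggest biomass loss."""
    diffs = [b - a for a, b in zip(before, after)]
    return sum(diffs) / len(diffs)
```

Comparing the mean NDVI of two acquisition dates in this way is the simplest form of the change analysis the paper performs.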

Keywords: remote sensing, deforestation, supervised classification, NDVI, change detection

Procedia PDF Downloads 1159
732 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data for extracting high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM) and Canopy Cover Model (CCM) via .bat scripts, followed by the generation of 17 composite bands used to extract forest classification covers with ENVI 4.8 and GIS software. The diameter at breast height (DBH), above-ground biomass (AGB) and carbon stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, much higher than the 10% canopy cover requirement. For the extracted canopy height, 80% of tree heights range from 12 m to 17 m. The CS of the three forest covers, based on AGB, was 20,819.59 kg per 20x20 m for closed broadleaf, 8,609.82 kg per 20x20 m for broadleaf plantation, and 15,545.57 kg per 20x20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and high CS.
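The CHM derivative mentioned above is conventionally computed as the difference between a digital surface model (DSM) and the DTM. A minimal sketch follows; the DSM input and the 5 m cover threshold are standard conventions assumed by us, not values from the study.

```python
def canopy_height_model(dsm, dtm):
    """CHM cell = surface elevation minus terrain elevation, clamped at zero."""
    return [[max(s - t, 0.0) for s, t in zip(s_row, t_row)]
            for s_row, t_row in zip(dsm, dtm)]

def percent_canopy_cover(chm, height_threshold=5.0):
    """Share of cells whose canopy height meets the threshold, as a percentage."""
    cells = [h for row in chm for h in row]
    return 100.0 * sum(h >= height_threshold for h in cells) / len(cells)
```

A figure like the study's 73% forest cover would come from applying such a threshold over the full raster extent.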

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 361
731 In-situ and Laboratory Characterization of Fiji Lateritic Soils

Authors: Faijal Ali, Darga Kumar N., Ravikant Singh, Rajnil Lal

Abstract:

Fiji has three major landforms: plains, low mountains, and hills. The lowland soils are formed on beach sand. Fiji soils contain high concentrations of iron (III) and aluminum oxides and hydroxides, giving the soil a reddish or yellowish colour. This study characterizes lateritic soils collected from different locations along the national highway in Viti Levu, Fiji Islands. The research has been carried out mainly to understand the physical and strength properties of the soils and to assess their suitability for highway and building construction. In this paper, field tests such as the dynamic cone penetrometer test, field vane shear and field density, and laboratory tests such as unconfined compression, compaction, grain size analysis and Atterberg limits are conducted, and the results are analyzed and presented. The results reveal that the soils contain more than 80% silt and clay, with 5 to 15% fine to medium sand. The dynamic cone penetrometer results show similar penetration resistance up to 3 m depth. For the first 1 m of depth, the rate of penetration is found to be 300 mm per 3 to 4 blows. At all sites, it is further noticed that the rate of penetration at depths beyond 1.5 m decreases for the same number of blows compared with the topsoil. From the penetration resistance measured by the dynamic cone penetrometer test, the California bearing ratio and allowable bearing capacity are 4 to 5% and 50 to 100 kPa, respectively, for the top 1 m layer, and these values increase below 1 m. The California bearing ratio of these soils below 1 m depth is on the order of 10% to 20%. The safe bearing capacity of these soils from 1 m to 3 m depth varies from 150 kPa to 250 kPa. The field vane shear strength was measured within 1 m of the surface, with broadly similar values varying from 60 kPa to 120 kPa. The liquid and plastic limits of these soils are in the range of 40 to 60% and 20 to 25%, respectively. Overall, it is found that the top 1 m of soil along the national highway in most places exhibits soft to medium-stiff behavior with low to medium bearing capacity and low California bearing ratio values. It is recommended that the geotechnical parameters of these soils be ascertained before any construction activity is undertaken.

Keywords: California bearing ratio, dynamic cone penetrometer test, field vane shear, unconfined compression strength

Procedia PDF Downloads 170
730 NDVI as a Measure of Change in Forest Biomass

Authors: Amritansh Agarwal, Tejaswi Agarwal

Abstract:

The forest ecosystem plays a very important role in the global carbon cycle: it stores about 80% of all above-ground and 40% of all below-ground terrestrial organic carbon. There is much interest in the extent of tropical forests and their rates of deforestation for two reasons: their contribution of greenhouse gases and their profoundly negative impact on biodiversity. Deforestation has many ecological, social and economic consequences, one of which is the loss of biological diversity. The rapid deployment of remote sensing (RS) satellites and the development of RS analysis techniques over the past three decades have provided a reliable, effective, and practical way to characterize terrestrial ecosystem properties. Global estimates of tropical deforestation vary widely, ranging from 50,000 to 170,000 km²/yr; recent FAO estimates for 1990–1995 cite 116,756 km²/yr globally. Remote sensing can prove to be a very useful tool for monitoring forests and associated deforestation to a sufficient level of accuracy without the need to physically survey the forest areas, many of which are physically inaccessible. The methodology for the assessment of forest cover using digital image processing (ERDAS) has been followed. The satellite data for the study were procured from the USGS website in digital format. While procuring the satellite data, care was taken to ensure that the data were free of clouds and aerosols by making use of the FLAASH atmospheric correction technique. The Normalized Difference Vegetation Index (NDVI), defined as NDVI = (NIR − Red) / (NIR + Red), has been used as a numerical indicator of the reduction in ground biomass. After calculating the NDVI variations and the associated mean, we analysed the change in ground biomass. Through this paper, we indicate the rate of deforestation over a given period of time by comparing the forest cover at different time intervals. With the help of remote sensing and GIS techniques, it is clearly shown that the total forest cover is continuously degrading and being transformed into various land use/land cover categories.

Keywords: remote sensing, deforestation, supervised classification, NDVI, change detection

Procedia PDF Downloads 378
729 Fast Prototyping of Precise, Flexible, Multiplexed, Printed Electrochemical Enzyme-Linked Immunosorbent Assay System for Point-of-Care Biomarker Quantification

Authors: Zahrasadat Hosseini, Jie Yuan

Abstract:

Point-of-care (POC) diagnostic devices based on lab-on-a-chip (LOC) technology have the potential to revolutionize medical diagnostics. However, the development of an ideal microfluidic system based on LOC technology for diagnostics purposes requires overcoming several obstacles, such as improving sensitivity, selectivity, portability, cost-effectiveness, and prototyping methods. While numerous studies have introduced technologies and systems that advance these criteria, existing systems still have limitations. Electrochemical enzyme-linked immunosorbent assay (e-ELISA) in a LOC device offers numerous advantages, including enhanced sensitivity, decreased turnaround time, minimized sample and analyte consumption, reduced cost, disposability, and suitability for miniaturization, integration, and multiplexing. In this study, we present a novel design and fabrication method for a microfluidic diagnostic platform that integrates screen-printed electrochemical carbon/silver chloride electrodes on flexible printed circuit boards with flexible, multilayer, polydimethylsiloxane (PDMS) microfluidic networks to accurately manipulate and pre-immobilize analytes for performing electrochemical enzyme-linked immunosorbent assay (e-ELISA) for multiplexed quantification of blood serum biomarkers. We further demonstrate fast, cost-effective prototyping, as well as accurate and reliable detection performance of this device for quantification of interleukin-6-spiked samples through electrochemical analytics methods. We anticipate that our invention represents a significant step towards the development of user-friendly, portable, medical-grade, POC diagnostic devices.

Keywords: lab-on-a-chip, point-of-care diagnostics, electrochemical ELISA, biomarker quantification, fast prototyping

Procedia PDF Downloads 65
728 Fast Prototyping of Precise, Flexible, Multiplexed, Printed Electrochemical Enzyme-Linked Immunosorbent Assay Platform for Point-of-Care Biomarker Quantification

Authors: Zahrasadat Hosseini, Jie Yuan

Abstract:

Point-of-care (POC) diagnostic devices based on lab-on-a-chip (LOC) technology have the potential to revolutionize medical diagnostics. However, the development of an ideal microfluidic system based on LOC technology for diagnostics purposes requires overcoming several obstacles, such as improving sensitivity, selectivity, portability, cost-effectiveness, and prototyping methods. While numerous studies have introduced technologies and systems that advance these criteria, existing systems still have limitations. Electrochemical enzyme-linked immunosorbent assay (e-ELISA) in a LOC device offers numerous advantages, including enhanced sensitivity, decreased turnaround time, minimized sample and analyte consumption, reduced cost, disposability, and suitability for miniaturization, integration, and multiplexing. In this study, we present a novel design and fabrication method for a microfluidic diagnostic platform that integrates screen-printed electrochemical carbon/silver chloride electrodes on flexible printed circuit boards with flexible, multilayer, polydimethylsiloxane (PDMS) microfluidic networks to accurately manipulate and pre-immobilize analytes for performing electrochemical enzyme-linked immunosorbent assay (e-ELISA) for multiplexed quantification of blood serum biomarkers. We further demonstrate fast, cost-effective prototyping, as well as accurate and reliable detection performance of this device for quantification of interleukin-6-spiked samples through electrochemical analytics methods. We anticipate that our invention represents a significant step towards the development of user-friendly, portable, medical-grade POC diagnostic devices.

Keywords: lab-on-a-chip, point-of-care diagnostics, electrochemical ELISA, biomarker quantification, fast prototyping

Procedia PDF Downloads 67
727 Controlled Doping of Graphene Monolayer

Authors: Vedanki Khandenwal, Pawan Srivastava, Kartick Tarafder, Subhasis Ghosh

Abstract:

We present here the experimental realization of controlled doping of graphene monolayers through charge transfer by trapping selected organic molecules between the graphene layer and underlying substrates. This charge transfer between graphene and trapped molecule leads to controlled n-type or p-type doping in monolayer graphene (MLG), depending on whether the trapped molecule acts as an electron donor or an electron acceptor. Doping controllability has been validated by a shift in corresponding Raman peak positions and a shift in Dirac points. In the transfer characteristics of field effect transistors, a significant shift of Dirac point towards positive or negative gate voltage region provides the signature of p-type or n-type doping of graphene, respectively, as a result of the charge transfer between graphene and the organic molecules trapped within it. In order to facilitate the charge transfer interaction, it is crucial for the trapped molecules to be situated in close proximity to the graphene surface, as demonstrated by findings in Raman and infrared spectroscopies. However, the mechanism responsible for this charge transfer interaction has remained unclear at the microscopic level. Generally, it is accepted that the dipole moment of adsorbed molecules plays a crucial role in determining the charge-transfer interaction between molecules and graphene. However, our findings clearly illustrate that the doping effect primarily depends on the reactivity of the constituent atoms in the adsorbed molecules rather than just their dipole moment. This has been illustrated by trapping various molecules at the graphene−substrate interface. Dopant molecules such as acetone (containing highly reactive oxygen atoms) promote adsorption across the entire graphene surface. In contrast, molecules with less reactive atoms, such as acetonitrile, tend to adsorb at the edges due to the presence of reactive dangling bonds. 
In the case of low-dipole-moment molecules like toluene, there is a lack of substantial adsorption anywhere on the graphene surface. The observation of (i) the emergence of the Raman D peak exclusively at the edges for trapped molecules without reactive atoms, and throughout the entire basal plane for those with reactive atoms, and (ii) variations in the density of molecules (with and without reactive atoms) attached to graphene with their respective dipole moments provides compelling evidence to support our claim. Additionally, these observations were supported by first-principles density functional calculations.

Keywords: graphene, doping, charge transfer, liquid phase exfoliation

Procedia PDF Downloads 48
726 Design of the Ice Rink of the Future

Authors: Carine Muster, Prina Howald Erika

Abstract:

Today's ice rinks are major energy consumers for the production and maintenance of ice. At the same time, users demand that the other rooms be tempered or heated. The building complex must therefore provide both cooled and heated zones, which does not readily translate into a carbon-zero ice rink. The study provides an analysis of how the civil engineering sector can significantly help minimize greenhouse gas emissions and optimize synergies across an entire ice rink complex. The analysis focused on three distinct aspects: the layout, including the volumetric arrangement of the premises present in an ice rink; the materials chosen, which can potentially make the structural approach more ecological; and construction methods based on innovative solutions to reduce the carbon footprint. The first aspect shows that the organization of the interior volumes and the shape of the rink play a significant role: the layout makes the use and operation of the premises as efficient as possible, thanks to the differentiation between heated and cooled volumes, while minimizing heat loss between the different rooms. The still little-known sprayed concrete method shows that, for the load-bearing and non-load-bearing walls of the ice rink, the strength of traditional concrete can be achieved using materials excavated from the construction site, providing a more ecological and sustainable solution. The installation of a crawl space underneath the ice floor, making it independent of the rest of the structure, provides a natural insulating layer, preventing the transfer of cold to the rest of the structure and reducing energy losses. The addition of active pipes as part of the foundation of the ice floor, coupled with a suitable system, provides warmth in the winter and storage in the summer, all made possible by the natural heat in the ground.
In conclusion, this study provides construction recommendations for future ice rinks with a significantly reduced energy demand, using some simple preliminary design concepts. By optimizing the layout, materials, and construction methods of ice rinks, the civil engineering sector can play a key role in reducing greenhouse gas emissions and promoting sustainability.

Keywords: climate change, energy optimization, green building, sustainability

Procedia PDF Downloads 46
725 Salt Tolerance of Potato: Genetically Engineered with Atriplex canescens BADH Gene Driven by 3 Copies of CaMV35S Promoter

Authors: Arfan Ali, Muhammad Shahzad Iqbal, Idrees Ahmad Nasir

Abstract:

Potato (Solanum tuberosum L.) ranks among the leading staple foods in the world. Salinity adversely affects potato crop yield and quality; therefore, an increased level of salt tolerance is a key factor in ensuring high yield. The present study focused on the Agrobacterium-mediated transformation of the Atriplex canescens betaine aldehyde dehydrogenase (BADH) gene, using single, double and triple CaMV35S promoters, to improve salt tolerance in potato. The study covered the detection of seven potato lines harboring the BADH gene, followed by identification of T-DNA insertions, determination of transgene copy number through Southern hybridization, and quantification of BADH protein through enzyme-linked immunosorbent assay. The results clearly show that the salt tolerance of potato was promoter-dependent: transgenic potato lines with the triple promoter produced 4.4 times more glycine betaine, which consequently led to high resistance to salt stress, compared with transgenic lines carrying single and double promoters, which produced the least glycine betaine. Moreover, the triple-promoter transgenic lines also showed lower levels of H2O2, malondialdehyde (MDA) and relative electrical conductivity, and higher proline and chlorophyll content, than the other two lines. In silico analysis also confirmed that Atriplex canescens BADH has the capacity to interact with sodium ions and water molecules. Taken together, these facts lead to the conclusion that over-expression of BADH under the triple CaMV35S promoter, with more glycine betaine, higher chlorophyll content and favorable levels of other metabolites, results in an enhanced level of salt tolerance in potato.

Keywords: Atriplex canescens, BADH, CaMV35S promoter, potato, Solanum tuberosum

Procedia PDF Downloads 255
724 Analyzing the Changing Pattern of Nigerian Vegetation Zones and Its Ecological and Socio-Economic Implications Using Spot-Vegetation Sensor

Authors: B. L. Gadiga

Abstract:

This study assesses the major ecological zones in Nigeria with a view to understanding the spatial pattern of vegetation zones and the implications for conservation over a period of sixteen (16) years. Satellite images used for this study were acquired from SPOT-VEGETATION between 1998 and 2013. The annual NDVI images selected for this study were derived from the SPOT-4 sensor and were acquired within the same season (November) in order to reduce differences in spectral reflectance due to seasonal variations. The images were sliced into five classes based on the literature and knowledge of the area (i.e. <0.16 non-vegetated areas; 0.16-0.22 Sahel Savannah; 0.22-0.40 Sudan Savannah; 0.40-0.47 Guinea Savannah; and >0.47 forest zone). Classification of the 1998 and 2013 images into forested and non-forested areas showed that forested area decreased from 511,691 km2 in 1998 to 478,360 km2 in 2013. A differencing change detection method was applied to the 1998 and 2013 NDVI images to identify areas of ecological concern. The result shows that areas undergoing vegetation degradation cover 73,062 km2, while areas witnessing some form of restoration cover 86,315 km2. The result also shows a weak correlation between rainfall and the vegetation zones: the non-vegetated areas have a correlation coefficient (r) of 0.0088, the Sahel Savannah belt 0.1988, the Sudan Savannah belt -0.3343, the Guinea Savannah belt 0.0328, and the forest belt 0.2635. The low correlation can be associated with the encroachment of the Sudan Savannah belt into the forest belt of the South-eastern part of the country, as revealed by the image analysis. The degradation of the forest vegetation is therefore responsible for the serious erosion problems witnessed in the South-east. The study recommends constant monitoring of vegetation and strict enforcement of environmental laws in the country.
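The abstract does not give the implementation of its slicing and differencing steps; a minimal sketch with NumPy, using the study's five NDVI thresholds and an assumed (hypothetical) ±0.05 change threshold for the differencing step, could look like this:

```python
import numpy as np

# Thresholds from the study: <0.16 non-vegetated; 0.16-0.22 Sahel Savannah;
# 0.22-0.40 Sudan Savannah; 0.40-0.47 Guinea Savannah; >0.47 forest.
THRESHOLDS = [0.16, 0.22, 0.40, 0.47]
CLASSES = ["Non-vegetated", "Sahel Savannah", "Sudan Savannah",
           "Guinea Savannah", "Forest"]

def classify_ndvi(ndvi):
    """Slice an NDVI array into the five ecological classes (indices 0..4)."""
    return np.digitize(ndvi, THRESHOLDS)

def change_map(ndvi_old, ndvi_new, threshold=0.05):
    """Differencing change detection: differences more negative than the
    threshold mark degradation (-1), more positive mark restoration (+1)."""
    diff = ndvi_new - ndvi_old
    return np.where(diff < -threshold, -1, np.where(diff > threshold, 1, 0))

# Toy pixels; real inputs would be the SPOT-VEGETATION NDVI rasters.
ndvi_1998 = np.array([0.10, 0.20, 0.35, 0.45, 0.60])
ndvi_2013 = np.array([0.12, 0.14, 0.50, 0.44, 0.40])
print(classify_ndvi(ndvi_1998))            # one class index per pixel
print(change_map(ndvi_1998, ndvi_2013))    # -1 degraded, 0 stable, +1 restored
```

Summing the pixel areas where the change map equals -1 or +1 would then yield the degradation and restoration extents reported in the abstract.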

Keywords: vegetation, NDVI, SPOT-vegetation, ecology, degradation

Procedia PDF Downloads 196
723 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks

Authors: Muneeb Ullah, Daishihan, Xiadong Young

Abstract:

Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnoses, it is important to have access to a sufficient number of training images of CXRs (chest X-rays) depicting the disease. However, there is currently a shortage of such images. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. Initially, a generator model is created that uses digressive channels to generate images of CXR scans for COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining the dense connectivity strategy and instance normalization; this approach exploits their feature extraction capabilities in the hazy regions of CXRs. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. From 4,062 COVID-19 CXR images, the COVID-19 GAN model successfully produces 8,124 synthetic CXR images. The COVID-19 GAN model produces COVID-19 CXR images that outperform those of DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the COVID-19 GAN-generated CXR images possess the characteristic haziness of the disease, offering a promising approach to address the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models outperformed other models, yielding higher accuracy rates than those obtained on the initial dataset and with other augmentation techniques. Among these models, ImagNet exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method is a solution to address overfitting issues in disease identification and can enhance identification accuracy effectively.
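The abstract does not detail how the Fréchet inception distance (FID) was computed; under its standard definition, the score is the Fréchet distance between two Gaussians fitted to feature vectors (e.g. Inception-v3 pool activations) of real and generated images. A minimal NumPy/SciPy sketch, with synthetic feature vectors standing in for real network activations, is:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1 @ C2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):   # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

def fid(features_real, features_fake):
    """FID from two sets of feature vectors (rows = samples), e.g.
    Inception-v3 activations for real and GAN-generated CXR images."""
    mu1, cov1 = features_real.mean(0), np.cov(features_real, rowvar=False)
    mu2, cov2 = features_fake.mean(0), np.cov(features_fake, rowvar=False)
    return frechet_distance(mu1, cov1, mu2, cov2)

# Identical distributions score near 0; a shifted one scores clearly higher,
# which is how a better GAN (lower FID) is ranked against DCGAN or WGAN.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (500, 8))
good_fake = rng.normal(0.0, 1.0, (500, 8))
poor_fake = rng.normal(2.0, 1.0, (500, 8))
print(fid(real, good_fake), fid(real, poor_fake))
```

Lower FID means the generated distribution is closer to the real one, which is the sense in which the paper reports COVID-19 GAN outperforming DCGAN and WGAN.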

Keywords: classification, deep learning, medical images, CXR, GAN

Procedia PDF Downloads 64