Search results for: synthetic dataset
380 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms
Authors: Selim M. Khan
Abstract:
Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with a sigmoid activation function, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose the artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America
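The core idea of a random-weight network is that the input-to-hidden weights stay fixed at random values and only the output layer is trained. The following is a minimal Python sketch of that idea with a sigmoid activation; it is illustrative only (the study itself uses MATLAB), and the feature matrix and target values shown are hypothetical placeholders.

```python
# Minimal sketch of a single-hidden-layer neural network with random, fixed
# hidden weights and a sigmoid activation; only the output weights are trained
# (solved here by least squares). Not the authors' MATLAB implementation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_random_weight_nn(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights (kept fixed)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = sigmoid(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights via least squares
    return W, b, beta

def predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta

# Hypothetical usage: rows are dwellings, columns are geospatial, behavioral and
# built-environment features, y is a scaled radon concentration or risk score.
X = np.random.rand(200, 6)
y = np.random.rand(200)
W, b, beta = fit_random_weight_nn(X, y)
radon_prediction = predict(X, W, b, beta)
```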
Procedia PDF Downloads 96
379 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations
Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa Kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan
Abstract:
Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection by integrating alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The computational pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. The quality check step ensured the integrity of the sequencing data, while the trimming process removed low-quality bases and adaptors. In the alignment step, the reads were mapped to the reference genome, and the subsequent variant calling step identified potential genetic variants. The annotation step provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. The computational pipeline presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. The modularity and flexibility of the pipeline enable customization and adaptation to various datasets and research settings. Further optimization and validation are necessary to enhance performance and applicability across diverse populations.
Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers
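The abstract names the tools but not the exact commands. The sketch below chains the listed steps with subprocess; file names, flags, the reference build, and the intermediate samtools sort (not mentioned in the abstract) are assumptions added for illustration, and real runs would need tool-specific tuning.

```python
# Hedged sketch of the described pipeline, assuming single-end reads, tools on PATH,
# and illustrative file names and parameters.
import subprocess

steps = [
    "fastqc sample.fastq",                                                      # quality check
    "trimmomatic SE sample.fastq trimmed.fastq SLIDINGWINDOW:4:20 MINLEN:36",   # quality/adaptor trimming
    "bwa index ref.fa",                                                         # index reference genome
    "bwa mem ref.fa trimmed.fastq > aln.sam",                                   # read alignment
    "samtools sort -o aln.sorted.bam aln.sam",                                  # assumed intermediate step
    "bcftools mpileup -f ref.fa aln.sorted.bam | bcftools call -mv -Ov -o variants.vcf",  # variant calling
    "table_annovar.pl variants.vcf humandb/ -buildver hg38 -out annotated "
    "-protocol refGene -operation g -nastring . -vcfinput",                     # ANNOVAR annotation
]

for cmd in steps:
    subprocess.run(cmd, shell=True, check=True)
```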
Procedia PDF Downloads 76
378 Development of a New Margarine Added Date Seed Oil: Characteristics and Chemical Composition of Date Seed Oil
Authors: Hamitri-Guerfi Fatiha, Madani Khodir, Hadjal Samir, Kati Djamel, Youyou Ahcene
Abstract:
Date palm (Phoenix dactylifera) is a principal fruit crop grown in many regions of the world, resulting in a surplus production of dates. Algeria is considered to be one of the date-producing countries. Date seeds (pits) have been a problem for the date industry as a waste stream. However, finding a way to profit from the pits would benefit date farmers substantially. This work concentrated on the valorization of date seed oils. A preliminary study was carried out on three varieties (soft, half-soft, and dry), and the dry variety was selected. This work concerns the valorization of the date seed oil of the dry variety 'Mech Degla' by its incorporation in a food formulation: table margarine. Lipid extraction was carried out by hot Soxhlet extraction; the extracts obtained are rich in fat, with yields of 13.21±0.21%. The antioxidant activity of the extracted oils was studied by the DPPH test, together with the polyphenol content and the antiradical activity. The fatty acid analysis was performed by gas chromatography (GC). Our results show that the recovered fat contents are considerable and of interest. A 'BIO' margarine formulation was elaborated at industrial scale by adding the 'Mech-Degla' date seed oil extracts in order to substitute a synthetic additive. The physicochemical characteristics of the elaborated margarines prove to be in conformity with the standards set by the Algerian companies. The elaborated margarine has an acceptable color and a brilliant, homogeneous appearance; it is plastic and easy to spread, has the required solid fat content (SFC) index, and melts easily in the mouth. Moreover, the oxidative stability was evaluated by the Rancimat test. The results obtained show that the margarine enriched with date seed oil proved more resistant to oxidation than the margarine without extract, and stability improved further when both extracts were incorporated simultaneously. In conclusion, considering the polyphenol content noted in the two extracts (aqueous and oily), we encourage the scientific community to become aware of the treasures of our country, especially the wonders of the south: dates and their by-products (pits).
Keywords: antioxidant activity, date seed oil, quality characteristics, margarine
Procedia PDF Downloads 416
377 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires high-coherence X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement of fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies were done using a particular X-ray source (liquid-metal jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, by introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, it needs a suitable algorithm for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with a high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require large amounts of training data to obtain high-quality reconstructions.
Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast
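Speckle tracking itself reduces to finding, for each small window of the reference speckle pattern, the displacement that best matches the image taken with the sample in place. A toy Python sketch of that window-matching step (with synthetic data and a hypothetical integer-pixel shift) is shown below; real implementations use subpixel, normalised correlation.

```python
# Toy sketch of window-based speckle tracking: find the integer shift of one
# window that maximises the zero-mean cross-correlation with the sample image.
import numpy as np

def track_window(ref, img, y, x, w=16, search=5):
    """Return the (dy, dx) shift that best matches a w x w window at (y, x)."""
    ref_win = ref[y:y + w, x:x + w]
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img[y + dy:y + dy + w, x + dx:x + dx + w]
            score = np.sum((ref_win - ref_win.mean()) * (cand - cand.mean()))
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(1)
reference = rng.random((128, 128))                       # sandpaper-only speckle image
sample = np.roll(reference, shift=(2, 3), axis=(0, 1))   # synthetic speckle displacement
print(track_window(reference, sample, 40, 40))           # should recover (2, 3)
```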
Procedia PDF Downloads 257
376 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site
Authors: Duncan Fraser
Abstract:
A 3-Dimensional (3D) conceptual site model was developed using the Leapfrog Works® platform, utilising a comprehensive historical dataset for a large former gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with varying hydraulic conductivities. A Newer Volcanic (basaltic) outcrop covered approximately half of the site and overlay a fractured Melbourne Formation (siltstone) bedrock outcropping over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface, originating from multiple sources, including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and to assess potential causal links between sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-storey underground basements. To assess the financial liabilities associated with the offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works. Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies, preventing the unnecessary treatment of material and reducing costs.
Keywords: 3D model, contaminated land, Leapfrog, remediation
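Leapfrog's interpolants are proprietary, but the underlying idea of estimating COI concentrations between sampled locations can be illustrated with a simple inverse-distance-weighting sketch; the coordinates, analyte, and values below are hypothetical and this is not Leapfrog's algorithm.

```python
# Illustrative inverse-distance-weighted (IDW) estimate of a contaminant
# concentration at an unsampled 3D location.
import numpy as np

def idw(sample_xyz, sample_conc, query_xyz, power=2.0):
    d = np.linalg.norm(sample_xyz - query_xyz, axis=1)
    if np.any(d == 0):                        # query coincides with a sample point
        return sample_conc[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * sample_conc) / np.sum(w)

boreholes = np.array([[0.0, 0.0, -2.0], [10.0, 5.0, -4.0], [3.0, 8.0, -6.0]])  # x, y, depth (m)
benzo_a_pyrene = np.array([12.0, 0.5, 3.2])                                     # mg/kg (hypothetical)
print(idw(boreholes, benzo_a_pyrene, np.array([5.0, 4.0, -3.0])))
```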
Procedia PDF Downloads 131
375 Comparing Quality of Care in Family Planning Services in Primary Public and Private Health Care Facilities in Ethiopia
Authors: Gizachew Assefa Tessema, Mohammad Afzal Mahmood, Judith Streak Gomersall, Caroline O. Laurence
Abstract:
Introduction: Improving access to quality family planning services is key to improving the health of women and children. However, there is currently little evidence on the quality and scope of family planning services provided by private facilities, and on how this compares to the services provided in public facilities in Ethiopia. This is important, particularly in determining whether the government should further expand the role of the private sector in the delivery of family planning services. Methods: This study used the 2014 Ethiopian Services Provision Assessment Plus (ESPA+) survey dataset to compare the structural aspects of quality of care in family planning services. The present analysis used a weighted sample of 1093 primary health care facilities (955 public and 138 private). This study employed logistic regression analysis to compare key structural variables between public and private facilities. While taking the structural variables as the outcomes for comparison, facility type (public vs private) was used as the key exposure of interest. Results: When comparing availability of basic amenities (infrastructure), public facilities were less likely to have functional cell phones (AOR=0.12; 95% CI: 0.07-0.21) and water supply (AOR=0.29; 95% CI: 0.15-0.58) than private facilities. However, public facilities were more likely to have staff available 24 hours in the facility (AOR=0.12; 95% CI: 0.07-0.21), providers with family planning related training in the past 24 months (AOR=4.4; 95% CI: 2.51, 7.64), and guidelines/protocols (AOR=3.1; 95% CI: 1.87, 5.24) than private facilities. Moreover, comparing the availability of equipment, public facilities had higher odds of having a pelvic model for IUD demonstration (AOR=2.60; 95% CI: 1.35, 5.01) and a penile model for condom demonstration (AOR=2.51; 95% CI: 1.32, 4.78) than private facilities. Conclusion: The present study suggests that the Ethiopian government needs to place emphasis on the private sector in terms of providing family planning guidelines and training on family planning services for their staff. It is also worthwhile for public health facilities to allocate funding for improving the availability of basic amenities. Implications for policy and/or practice: This study calls on policy makers to design appropriate strategies to provide training opportunities for health care providers working in private health facilities.
Keywords: quality of care, family planning, public-private, Ethiopia
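In practice, each adjusted odds ratio (AOR) of this kind comes from a logistic regression of a binary structural indicator on facility type plus covariates. The sketch below shows the general pattern with statsmodels on simulated data; the outcome, covariates, and values are hypothetical, not the ESPA+ variables.

```python
# Hedged sketch of an adjusted odds ratio (AOR) for public vs private facilities,
# using simulated data and hypothetical covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "water_supply": rng.integers(0, 2, 500),   # 1 = water supply available
    "public": rng.integers(0, 2, 500),         # 1 = public facility, 0 = private
    "urban": rng.integers(0, 2, 500),          # hypothetical confounder
    "region": rng.integers(0, 4, 500),         # hypothetical confounder
})

model = smf.logit("water_supply ~ public + urban + C(region)", data=df).fit(disp=False)
aor = np.exp(model.params["public"])           # adjusted odds ratio for public vs private
ci_low, ci_high = np.exp(model.conf_int().loc["public"])
print(f"AOR={aor:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```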
Procedia PDF Downloads 353
374 Novel AdoMet Analogs as Tools for Nucleic Acids Labeling
Authors: Milda Nainyte, Viktoras Masevicius
Abstract:
Biological methylation is a methyl group transfer from S-adenosyl-L-methionine (AdoMet) onto N-, C-, O- or S-nucleophiles in DNA, RNA, proteins or small biomolecules. The reaction is catalyzed by enzymes called AdoMet-dependent methyltransferases (MTases), which represent more than 3% of the proteins in the cell. As a general mechanism, the methyl group from AdoMet replaces a hydrogen atom of the nucleophilic center, producing methylated DNA and S-adenosyl-L-homocysteine (AdoHcy). Recently, DNA methyltransferases have been used for the sequence-specific, covalent labeling of biopolymers. Two types of MTase-catalyzed labeling of biopolymers are known, referred to as two-step and one-step. During two-step labeling, an alkylating fragment is transferred onto DNA in a sequence-specific manner and then the reporter group, such as biotin, is attached for selective visualization using suitable coupling chemistries. This approach to labeling is quite difficult and the chemical coupling does not always proceed at 100%, but in the second step a variety of reporter groups can be selected, which gives this labeling method its flexibility. In one-step labeling, the AdoMet analog is designed with the reporter group already attached to the functional group. Thus, the one-step labeling method would be a more convenient tool for labeling biopolymers, avoiding additional chemical reactions and the selection of reaction conditions, and also reducing time costs. However, an effective AdoMet analog appropriate for one-step labeling of biopolymers and containing a cleavable bond, required to reduce interference with PCR, is still not known. To expand the practical utility of this important enzymatic reaction, cofactors with activated sulfonium-bound side-chains have been produced and can serve as surrogate cofactors for a variety of wild-type and mutant DNA and RNA MTases, enabling covalent attachment of these chains to their target sites in DNA, RNA or proteins (the approach named methyltransferase-directed Transfer of Activated Groups, mTAG). Compounds containing a hex-2-yn-1-yl moiety have proved to be efficient alkylating agents for labeling of DNA. Herein we describe synthetic procedures for the preparation of N-biotinoyl-N’-(pent-4-ynoyl)cystamine, starting from the coupling of cystamine with pentynoic acid and finally attaching biotin as a reporter group. The synthesis of the first AdoMet-based cofactor containing a cleavable reporter group and appropriate for one-step labeling was thus developed.
Keywords: AdoMet analogs, DNA alkylation, cofactor, methyltransferases
Procedia PDF Downloads 195
373 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
Chain Ladder (CL) method, Expected Loss Ratio (ELR) method and Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement/disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, the approach generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their life-time development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods.
Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility
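One way to read the parametric step is as fitting a sigmoidal (logistic) curve to cumulative loss development by age and taking the fitted asymptote as the ultimate loss. A minimal sketch with scipy and a hypothetical cohort follows; the development ages, loss amounts, and parameterisation are illustrative assumptions, not MI data or the authors' exact model.

```python
# Minimal sketch: fit a logistic loss-development curve to one cohort and
# take the fitted asymptote as the ultimate loss estimate (hypothetical data).
import numpy as np
from scipy.optimize import curve_fit

def logistic_curve(age, ultimate, rate, midpoint):
    """Cumulative losses as a sigmoidal function of development age."""
    return ultimate / (1.0 + np.exp(-rate * (age - midpoint)))

age = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)                     # development years
cumulative_loss = np.array([5, 14, 30, 52, 68, 78, 83, 85], dtype=float)  # hypothetical cohort

(ultimate, rate, midpoint), _ = curve_fit(
    logistic_curve, age, cumulative_loss, p0=[cumulative_loss[-1], 1.0, age.mean()]
)
reserve = ultimate - cumulative_loss[-1]       # indicated reserve = ultimate minus paid to date
print(f"ultimate={ultimate:.1f}, reserve={reserve:.1f}")
```

Exogenous drivers (e.g., an economic index) could then enter as covariates on the curve parameters, which is the kind of indirect incorporation the abstract refers to.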
Procedia PDF Downloads 130
372 Antioxidative, Anticholinesterase and Anti-Neuroinflammatory Properties of Malaysian Brown and Green Seaweeds
Authors: Siti Aisya Gany, Swee Ching Tan, Sook Yee Gan
Abstract:
Diminished antioxidant defense or increased production of reactive oxygen species in the biological system can result in oxidative stress, which may lead to various neurodegenerative diseases including Alzheimer’s disease (AD). Microglial activation also contributes to the progression of AD by producing several pro-inflammatory cytokines, nitric oxide (NO), and prostaglandin E2 (PGE2). Oxidative stress and inflammation have been reported to be possible pathophysiological mechanisms underlying AD. In addition, the cholinergic hypothesis postulates that memory impairment in patients with AD is also associated with a deficit of cholinergic function in the brain. Although a number of drugs have been approved for the treatment of AD, most of these synthetic drugs have diverse side effects and yield relatively modest benefits. Marine algae have great potential in pharmaceutical and biomedical applications as they are valuable sources of bioactive properties such as anti-coagulant, anti-microbial, anti-oxidative, anti-cancer and anti-inflammatory activities. Hence, this study aimed to provide an overview of the properties of Malaysian seaweeds (Padina australis, Sargassum polycystum and Caulerpa racemosa) in inhibiting oxidative stress, neuroinflammation and cholinesterase enzymes. All tested samples significantly exhibited potent DPPH and moderate superoxide anion radical scavenging ability (P<0.05). Hexane and methanol extracts of S. polycystum exhibited the most potent radical scavenging ability, with IC50 values of 0.1572 ± 0.004 mg/ml and 0.8493 ± 0.02 for the DPPH and ABTS assays, respectively. The hexane extract of C. racemosa gave the strongest superoxide radical inhibitory effect (IC50 of 0.3862 ± 0.01 mg/ml). Most seaweed extracts significantly inhibited the production of cytokines (IL-6, IL-1β, TNF-α) and NO in a concentration-dependent manner without causing significant cytotoxicity to the lipopolysaccharide (LPS)-stimulated microglia cells (P<0.05). All extracts suppressed cytokine and NO levels by more than 80% at a concentration of 0.4 mg/ml. In addition, C. racemosa and S. polycystum also showed anti-acetylcholinesterase activities, with IC50 values ranging from 0.086-0.115 mg/ml. Moreover, C. racemosa and P. australis were also found to be active against butyrylcholinesterase, with IC50 values ranging from 0.118-0.287 mg/ml.
Keywords: anti-cholinesterase, anti-oxidative, neuroinflammation, seaweeds
Procedia PDF Downloads 663
371 The Role and Effects of Communication on Occupational Safety: A Review
Authors: Pieter A. Cornelissen, Joris J. Van Hoof
Abstract:
The interest in improving occupational safety started almost simultaneously with the beginning of the Industrial Revolution. Yet, it was not until the late 1970s that the role of communication was considered in scientific research regarding occupational safety. In recent years the importance of communication as a means to improve occupational safety has increased, not only because communication might have a direct effect on safety performance and safety outcomes, but also because it can be viewed as a major component of other important safety-related elements (e.g., training, safety meetings, leadership). And while safety communication is an increasingly important topic in research, its operationalization is often vague and differs among studies. This is problematic not only when comparing results, but also when applying these results to practice and the work floor. By means of an in-depth analysis building on an existing dataset, this review aims to overcome these problems. The initial database search yielded 25,527 articles, which was reduced to a research corpus of 176 articles. Focusing on the 37 articles of this corpus that addressed communication (related to safety outcomes and safety performance), the current study provides a comprehensive overview of the role and effects of safety communication and outlines the conditions under which communication contributes to a safer work environment. The study shows that in the literature a distinction is commonly made between safety communication (i.e., the exchange or dissemination of safety-related information) and feedback (i.e., a reactive form of communication). And although there is a consensus among researchers that both communication and feedback positively affect safety performance, there is a debate about the directness of this relationship. Whereas some researchers assume a direct relationship between safety communication and safety performance, others state that this relationship is mediated by safety climate. One of the key findings is that despite the prevalent view that safety communication is a formal and top-down safety management tool, researchers stress the importance of open communication that encourages and allows employees to express their worries, experiences, and views, and to share information. This raises questions with regard to other directions (e.g., bottom-up, horizontal) and forms of communication (e.g., informal). The current review proposes a framework to overcome the often vague and differing operationalizations of safety communication. The proposed framework can be used to characterize safety communication in terms of stakeholders, direction, and characteristics of communication (e.g., medium usage).
Keywords: communication, feedback, occupational safety, review
Procedia PDF Downloads 302
370 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya
Authors: Abdel Rahman Khider Hassan
Abstract:
This paper describes a multi-criteria geospatial model for the prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of location-intrinsic parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included: lithology, geometry of slope (slope inclination, aspect, elevation, and curvature) and land use/land cover. The landslide-triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas that are susceptible to landslides. No published study on landslides was found for this area. Thus, digital datasets of the above spatial parameters were conveniently acquired, stored, manipulated and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in the ArcGIS 10.2.2 environment). Deduction of the landslide hazard zonation is done by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid together to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total surface of 3200 km² of the lake catchment, most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) is subject to high hazards. Only 1.0% (32 km²) of the catchment displays very high landslide hazards, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation land cover and slope orientation (aspect) as the major determining factors of slope failures. The information provided by the produced map of landslide hazard zonation (LHZ) could lay the basis for decision making as well as mitigation and applications in avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha
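The weighted overlay at the heart of the method is simply a per-cell weighted sum of the rasterised factor grids, followed by reclassification into hazard classes. The following numpy sketch shows the mechanics; the factor scores, weights, and class breaks are hypothetical, since the paper derives its weights from the relative contribution of each parameter to slope instability.

```python
# Illustrative multi-criteria weighted overlay on hypothetical factor rasters.
import numpy as np

shape = (100, 100)
rng = np.random.default_rng(42)
layers = {                            # each factor rasterised to a common grid, scored 1-5
    "slope": rng.integers(1, 6, shape),
    "lithology": rng.integers(1, 6, shape),
    "land_cover": rng.integers(1, 6, shape),
    "rainfall": rng.integers(1, 6, shape),
    "road_drainage_proximity": rng.integers(1, 6, shape),
}
weights = {"slope": 0.30, "lithology": 0.25, "land_cover": 0.20,
           "rainfall": 0.15, "road_drainage_proximity": 0.10}   # hypothetical weights, sum to 1

lhz = sum(weights[k] * layers[k].astype(float) for k in layers)  # weighted sum per grid cell
classes = np.digitize(lhz, bins=[2.0, 3.0, 4.0])                 # low / moderate / high / very high
print(np.bincount(classes.ravel(), minlength=4) / classes.size)  # share of catchment per class
```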
Procedia PDF Downloads 206
369 Tunable Crystallinity of Zinc Gallogermanate Nanoparticles via Organic Ligand-Assisted Biphasic Hydrothermal Synthesis
Authors: Sarai Guerrero, Lijia Liu
Abstract:
Zinc gallogermanate (ZGGO) is a persistent phosphor that can emit in the near-infrared (NIR) range once doped with Cr³⁺, enabling its use for in-vivo deep-tissue bio-imaging. This property also allows for its application in cancer diagnosis and therapy. Given this, work on developing a synthetic procedure that can be carried out using common laboratory instruments and equipment, as well as on understanding ZGGO overall, is in demand. However, the ZGGO nanoparticles must have a size compatible with cellular uptake while still maintaining sufficient photoluminescence. The nanoparticle must also be made biocompatible by functionalizing the surface for hydrophilic solubility and for high particle uniformity in the final product. Additionally, most research has been completed on doped ZGGO, leaving a gap in understanding the base form of ZGGO; it also leaves a gap in understanding how doping affects the synthesis of ZGGO. In this work, the first step of optimizing the particle size via the crystalline size of ZGGO was done with undoped ZGGO using the organic acid oleic acid (OA) for organic ligand-assisted biphasic hydrothermal synthesis. The effects of this synthesis procedure on ZGGO’s crystallinity were evaluated using Powder X-Ray Diffraction (PXRD). OA was selected as the capping ligand as experiments have shown it to be beneficial in synthesizing sub-10 nm zinc gallate (ZGO) nanoparticles as well as palladium nanocrystals and magnetite (Fe₃O₄) nanoparticles. Later, it is possible to substitute OA with a different ligand, allowing for hydrophilic solubility. Attenuated Total Reflection Fourier-Transform Infrared (ATR-FTIR) spectroscopy was used to investigate the surface of the nanoparticles and verify that OA had capped them. PXRD results showed that using this procedure led to improved crystallinity of the ZGGO nanoparticles, comparable to that obtained with high-purity reagents. There was also a change in the crystalline size of the ZGGO nanoparticles. ATR-FTIR showed that, once capped, ZGGO cannot be annealed, as doing so will affect the OA. These results point to this new procedure positively affecting the crystallinity of ZGGO nanoparticles. The results are also repeatable, implying the procedure is a reliable source of highly crystalline ZGGO nanoparticles. With this completed, the next step will be working on substituting the OA with a hydrophilic ligand. As these ligands affect the solubility of the nanoparticle as well as the pH at which the nanoparticles can dissolve, further research is needed to verify which ligand is best suited for preparing ZGGO for bio-imaging.
Keywords: biphasic hydrothermal synthesis, crystallinity, oleic acid, zinc gallogermanate
Procedia PDF Downloads 133
368 Palladium/Platinum Complexes of Tridentate 4-Acylpyrazolone Thiosemicarbazone with Antioxidant Properties
Authors: Omoruyi G. Idemudia, Alexander P. Sadimenko
Abstract:
The need for the development of new sustainable bioactive compounds with unique properties that can become potential replacements for commonly used medicinal drugs has continued to attract tremendous research interest because of the problems of disease resistance to these medicinal drugs and their toxicity effects. NOS-donor heterocycles are of particular interest as they have shown good pharmacological activities alongside their interesting chelating properties towards metal ions, an important characteristic for transition metal-based drug design. Because of their versatile properties, these compounds have also gained application as dye sensitizers in solar cell panels for the generation of renewable solar energy, as greener water purification polymers for the supply and management of clean water, and as catalysts used to reduce the amount of pollutants from industrial reaction processes, amongst others. Di-ketone acylpyrazolones and their azomethine Schiff bases have been employed as pharmaceuticals as well as analytical reagents, and their application as transition metal complexes has been well established. In this research work, a new 4-propyl-3-methyl-1-phenyl-2-pyrazolin-5-one-thiosemicarbazone was synthesized from the reaction of 4-propyl-3-methyl-1-phenyl-2-pyrazolin-5-one and thiosemicarbazide in methanol. The pure isolate of the thiosemicarbazone was further reacted with aqueous solutions of palladium and platinum salts to obtain their metal complexes, in an effort towards the discovery of transition metal-based synthetic drugs. These compounds were characterized by means of analytical and spectroscopic methods, thermogravimetric analysis (TGA), as well as X-ray crystallography. 4-Propyl-3-methyl-1-phenyl-2-pyrazolin-5-one thiosemicarbazone crystallizes in a triclinic crystal system with a P-1 (No. 2) space group according to X-ray crystallography. The tridentate NOS ligand formed a tetrahedral geometry on coordinating with the metal ions. The reported compounds showed varying antioxidant free radical scavenging activities against the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical at 100, 200, 300, 400 and 500 µg/ml concentrations. The platinum complex showed very good antioxidant activity against DPPH, with an IC50 of 76.03 µg/ml compared with the standard ascorbic acid (IC50 of 74.66 µg/ml), and has as such been identified as a potential anticancer candidate.
Keywords: acylpyrazolone, free radical scavenging activities, tridentate ligand, X-ray crystallography
Procedia PDF Downloads 185
367 Private and Public Health Sector Difference on Client Satisfaction: Results from Secondary Data Analysis in Sindh, Pakistan
Authors: Wajiha Javed, Arsalan Jabbar, Nelofer Mehboob, Muhammad Tafseer, Zahid Memon
Abstract:
Introduction: Researchers globally have strived to explore diverse factors that augment the continuation and uptake of family planning methods. Client satisfaction is one of the core determinants facilitating continuation of family planning methods. There is a major debate yet scanty evidence contrasting the public and private sectors with respect to client satisfaction. The objective of this study is to compare the quality of care provided by the public and private sectors of Pakistan through a client satisfaction lens. Methods: We used the Pakistan Demographic and Health Survey 2012-13 dataset (Sindh province) on a total of 3133 Married Women of Reproductive Age (MWRA) aged 15-49 years. Source of family planning (public/private sector) was the main exposure variable. The outcome variable was client satisfaction, judged by ten different dimensions of client satisfaction. Means and standard deviations were calculated for continuous variables, while frequencies and percentages were computed for categorical variables. For univariate analysis, the Chi-square/Fisher exact test was used to find an association between client satisfaction in the public and private sectors. Ten different multivariate models were built. Variables were checked for multi-collinearity, confounding, and interaction, and then advanced logistic regression was used to explore the relationship between client satisfaction and the dependent outcome after adjusting for all known confounding factors; results are presented as OR and AOR (95% CI). Results: Multivariate analyses showed that clients were less satisfied with contraceptive provision from the private sector as compared to the public sector (AOR 0.92, 95% CI 0.63-1.68), even though the result was not statistically significant. Clients were more satisfied with the private sector as compared to the public sector with respect to other determinants of quality of care: follow-up care (AOR 3.29, 95% CI 1.95-5.55), infection prevention (AOR 2.41, 95% CI 1.60-3.62), counseling services (AOR 2.01, 95% CI 1.27-3.18), timely treatment (AOR 3.37, 95% CI 2.20-5.15), attitude of staff (AOR 2.23, 95% CI 1.50-3.33), punctuality of staff (AOR 2.28, 95% CI 1.92-4.13), timely referring (AOR 2.34, 95% CI 1.63-3.35), staff cooperation (AOR 1.75, 95% CI 1.22-2.51) and complications handling (AOR 2.27, 95% CI 1.56-3.29).
Keywords: client satisfaction, family planning, public private partnership, quality of care
Procedia PDF Downloads 419
366 Oxidovanadium(IV) and Dioxidovanadium(V) Complexes: Efficient Catalyst for Peroxidase Mimetic Activity and Oxidation
Authors: Mannar R. Maurya, Bithika Sarkar, Fernando Avecilla
Abstract:
Peroxidase activity is successfully used in different industrial processes in medicine, the chemical industry, food processing and agriculture. However, natural peroxidases bear some intrinsic drawbacks associated with denaturation by proteases, their special storage requirements and also their cost. Nowadays, artificial enzyme mimics are becoming a research interest because of their significant advantages over conventional organic enzymes: ease of preparation, low price and good stability of activity, allowing them to overcome the drawbacks of natural enzymes, e.g., serine proteases. At present, a large number of artificial enzymes have been synthesized by assimilating a catalytic center into a variety of Schiff base complexes, anchored ligands, supramolecular complexes, hematin, porphyrins and nanoparticles to mimic natural enzymes. In recent years, a number of vanadium complexes have been reported, reflecting a continuing increase in interest in bioinorganic chemistry. To the best of our knowledge, the investigation of vanadium complexes as artificial enzyme mimics remains little explored. Recently, our group has reported synthetic vanadium Schiff base complexes capable of mimicking peroxidases. Herein, we have synthesized oxidovanadium(IV) and dioxidovanadium(V) complexes of pyrazolone derivatives (extensively studied on account of their broad range of pharmacological applications). All these complexes are characterized by various spectroscopic techniques such as FT-IR, UV-visible and NMR (1H, 13C and 51V), together with elemental analysis, thermal studies and single-crystal analysis. The peroxidase mimetic activity has been studied towards the oxidation of pyrogallol to purpurogallin with hydrogen peroxide at pH 7, followed by measurement of the kinetic parameters. The Michaelis-Menten behavior shows excellent catalytic activity compared to the natural counterparts, e.g., V-HPO and HRP. The obtained kinetic parameters (Vmax, kcat) were also compared with those of peroxidase and haloperoxidase enzymes, making these complexes promising peroxidase-mimicking catalysts. Also, the catalytic activity has been studied towards the oxidation of 1-phenylethanol in the presence of H2O2 as an oxidant. Various parameters such as the amount of catalyst and oxidant, reaction time, reaction temperature and solvent have been taken into consideration to maximize the yield of the oxidation products of 1-phenylethanol.
Keywords: oxidovanadium(IV)/dioxidovanadium(V) complexes, NMR spectroscopy, crystal structure, peroxidase mimetic activity towards oxidation of pyrogallol, oxidation of 1-phenylethanol
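Estimating Vmax and Km (and hence kcat) from initial-rate data is a direct non-linear fit of the Michaelis-Menten equation. The sketch below illustrates this with scipy on hypothetical pyrogallol concentrations and rates; the numbers and the catalyst concentration are assumptions for illustration only.

```python
# Sketch: estimate Vmax, Km and kcat by fitting the Michaelis-Menten equation
# to hypothetical initial-rate data.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

substrate = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])   # pyrogallol, mM (hypothetical)
rate = np.array([0.9, 1.6, 3.0, 4.2, 5.3, 6.0, 6.5])          # initial rate, a.u. (hypothetical)

(vmax, km), _ = curve_fit(michaelis_menten, substrate, rate, p0=[rate.max(), 0.5])
catalyst_conc = 1e-3                                           # mM of complex (hypothetical)
print(f"Vmax={vmax:.2f}, Km={km:.2f} mM, kcat={vmax / catalyst_conc:.1f} (rate units per mM)")
```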
Procedia PDF Downloads 340
365 Development of a 3D Model of Real Estate Properties in Fort Bonifacio, Taguig City, Philippines Using Geographic Information Systems
Authors: Lyka Selene Magnayi, Marcos Vinas, Roseanne Ramos
Abstract:
As the real estate industry continually grows in the Philippines, Geographic Information Systems (GIS) provide advantages in generating spatial databases for efficient delivery of information and services. The real estate sector not only provides qualitative data about real estate properties but also utilizes various spatial aspects of these properties for different applications such as hazard mapping and assessment. In this study, a three-dimensional (3D) model and a spatial database of real estate properties in Fort Bonifacio, Taguig City are developed using GIS and SketchUp. Spatial datasets include political boundaries, buildings, road network, a digital terrain model (DTM) derived from an Interferometric Synthetic Aperture Radar (IFSAR) image, Google Earth satellite imagery, and hazard maps. Multiple model layers were created based on property listings by a partner real estate company, including existing and future property buildings. Actual building dimensions, building facades, and building floorplans are incorporated in these 3D models for geovisualization. Hazard model layers are determined through spatial overlays, and different hazard scenarios are also presented in the models. Animated maps and walkthrough videos were created for company presentation and evaluation. Model evaluation is conducted through client surveys requiring scores in terms of the appropriateness, information content, and design of the 3D models. Survey results show very satisfactory ratings, with the highest average evaluation score equivalent to 9.21 out of 10. The output maps and videos obtained passing rates based on the criteria and standards set by the intended users of the partner real estate company. The methodologies presented in this study were found useful and have remarkable advantages in the real estate industry. This work may be extended to automated mapping and the creation of online spatial databases for better storage of and access to real property listings, and to an interactive platform using web-based GIS.
Keywords: geovisualization, geographic information systems, GIS, real estate, spatial database, three-dimensional model
Procedia PDF Downloads 158
364 Development and Evaluation of a Calcium Rich Plant-Based Supplement on Bone Turnover of Peri and Post Menopausal Women
Authors: Gayathri.G, Hemamalini.A.J, Chandrasekaran.A
Abstract:
Problem statement: Nutritional deficiency, especially of calcium, may lead to poor bone formation and mineralization. Although there are plenty of synthetic supplements available, it is essential to make accessible a calcium-rich food supplement that could be readily prepared at the household level to combat calcium deficiency. Thus the current study aimed to formulate and standardize an indigenous low-cost calcium-rich food supplement and to study the impact of supplementation on bone resorption and formation markers. A randomized controlled trial was conducted with 60 subjects distributed equally between control and experimental groups, including perimenopausal and postmenopausal women. A plant-based calcium-rich product was developed and supplemented in the form of balls as a mid-morning and evening snack, prepared by the addition of optimized proportions of leaves of Sesbania grandiflora and seeds of Sesamum indicum, Eleusine coracana, Glycine max and Vigna mungo, for a period of 6 months. Postmenopausal and perimenopausal women received 1200 mg and 800 mg of calcium per day from the supplement, respectively. Outcome measures such as serum calcium, beta-CrossLaps (bone resorption marker) and total P1NP (bone formation marker) were assessed after 3 months and after 6 months. Results: There were no significant changes seen in the serum calcium and total P1NP levels (bone formation marker) among the subjects during the supplementation period. The bone resorption marker (beta-CrossLaps) was reduced in all the groups, and the reduction was found to be statistically highly significant (p < 0.01) in the experimental group of perimenopausal subjects (0.32 ± 0.130 ng/ml to 0.25 ± 0.130 ng/ml) and significant (p < 0.05) in the experimental group of postmenopausal subjects (1.11 ± 0.290 ng/ml to 0.42 ± 0.263 ng/ml). Conclusion: With the current severe calcium deficiency in the Indian population, integrating low-cost, calcium-rich native foods that could be readily prepared at the household level would be useful in raising the nutritional consumption of calcium, which would, in turn, decrease bone turnover.
Keywords: calcium, Sesbania grandiflora, Sesamum indicum, Eleusine coracana, Glycine max, Vigna mungo, postmenopause, perimenopause, bone resorption, bone formation, beta-CrossLaps, total P1NP
Procedia PDF Downloads 133
363 Management as a Proxy for Firm Quality
Authors: Petar Dobrev
Abstract:
There is no agreed-upon definition of firm quality. While profitability and stock performance often qualify as popular proxies of quality, in this project, we aim to identify quality without relying on a firm’s financial statements or stock returns as selection criteria. Instead, we use firm-level data on management practices across small to medium-sized U.S. manufacturing firms from the World Management Survey (WMS) to measure firm quality. Each firm in the WMS dataset is assigned a mean management score from 0 to 5, with higher scores identifying better-managed firms. This management score serves as our proxy for firm quality and is the sole criterion we use to separate firms into portfolios comprised of high-quality and low-quality firms. We define high-quality (low-quality) firms as those firms with a management score of one standard deviation above (below) the mean. To study whether this proxy for firm quality can identify better-performing firms, we link this data to Compustat and The Center for Research in Security Prices (CRSP) to obtain firm-level data on financial performance and monthly stock returns, respectively. We find that from 1999 to 2019 (our sample data period), firms in the high-quality portfolio are consistently more profitable, with higher operating profitability and return on equity compared to low-quality firms. In addition, high-quality firms also exhibit a lower risk of bankruptcy (a higher Altman Z-score). Next, we test whether the stocks of the firms in the high-quality portfolio earn superior risk-adjusted excess returns. We regress the monthly excess returns of each portfolio on the Fama-French 3-factor, 4-factor, and 5-factor models, the betting-against-beta factor, and the quality-minus-junk factor. We find no statistically significant differences in excess returns between the two portfolios, suggesting that stocks of high-quality (well-managed) firms do not earn superior risk-adjusted returns compared to low-quality (poorly managed) firms. In short, our proxy for firm quality, the WMS management score, can identify firms with superior financial performance (higher profitability and reduced risk of bankruptcy). However, our management proxy cannot identify stocks that earn superior risk-adjusted returns, suggesting no statistically significant relationship between managerial quality and stock performance.
Keywords: excess stock returns, management, profitability, quality
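The factor regressions described here estimate each portfolio's alpha after controlling for the standard pricing factors. A hedged sketch of one such regression on simulated monthly data follows; in the study the return and factor series come from CRSP, Compustat and the factor libraries, not from random draws.

```python
# Sketch of a Fama-French 3-factor regression of a portfolio's monthly excess
# returns; the data here are simulated purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_months = 240                                   # roughly the 1999-2019 sample length
df = pd.DataFrame({
    "mkt_rf": rng.normal(0.5, 4.0, n_months),    # market excess return (%)
    "smb": rng.normal(0.2, 3.0, n_months),       # size factor
    "hml": rng.normal(0.2, 3.0, n_months),       # value factor
})
df["excess_ret"] = 0.1 + 1.0 * df["mkt_rf"] + rng.normal(0, 2.0, n_months)

fit = smf.ols("excess_ret ~ mkt_rf + smb + hml", data=df).fit()
print(fit.params["Intercept"], fit.pvalues["Intercept"])   # alpha and its p-value
```

The same regression run on the high-minus-low quality spread portfolio is what supports the conclusion of no significant difference in risk-adjusted returns.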
Procedia PDF Downloads 93
362 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery
Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene
Abstract:
Data flow and the purpose of reporting the data differ and depend on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the dataset constructed for each time point, containing all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and the identification of the statistical significance of data reporting event cases. The Grubbs test is often used as it assesses one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by authors, except when the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select the form of genetic algorithm construction that has the greatest potential to extract the best solution. For freight delivery management, genetic algorithm schemes are used as a more effective technique; accordingly, an adaptive genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor. The authors suggest a methodology for the multi-objective analysis which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
Keywords: multi-objective, analysis, data flow, freight delivery, methodology
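As a concrete illustration of the validation step, the two-sided Grubbs test at the 99% confidence level flags a single suspect value against the rest of an approximately normal sample. The sketch below uses hypothetical fuel-consumption reports; the formula follows the standard Grubbs critical-value expression based on the Student t distribution.

```python
# Two-sided Grubbs outlier test at a 99% confidence level (alpha = 0.01),
# applied to hypothetical fuel-consumption reports.
import numpy as np
from scipy import stats

def grubbs_outlier(values, alpha=0.01):
    x = np.asarray(values, dtype=float)
    n = len(x)
    idx = np.argmax(np.abs(x - x.mean()))
    g = np.abs(x[idx] - x.mean()) / x.std(ddof=1)             # Grubbs statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)               # t critical value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return x[idx], g > g_crit                                 # suspect value, outlier flag

fuel_l_per_100km = [31.2, 30.8, 32.1, 31.5, 30.9, 45.3, 31.0]  # hypothetical reports
print(grubbs_outlier(fuel_l_per_100km))                        # flags 45.3 at the 99% level
```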
Procedia PDF Downloads 180
361 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium
Authors: Janne Engblom, Elias Oikarinen
Abstract:
The understanding of housing price dynamics is of importance to a great number of agents: to portfolio investors, banks, real estate brokers and construction companies as well as to policy makers and households. A panel dataset is one that follows a given sample of individuals over time, and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is endogeneity bias of estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of housing price as the dependent variable and the first differences of per capita income, interest rate, housing stock and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to provide estimates and compare them between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by the model containing interaction terms of the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach, indicating a good fit of the CCE estimator model. Estimates of the dynamic panel data model were in line with the theory of housing price dynamics. Results also suggest that housing market dynamics evolve over time.
Keywords: dynamic model, panel data, cross-sectional dependence, interaction model
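Read together, the variable list above corresponds to an error-correction style specification; a stylised rendering (not necessarily the authors' exact equation) is sketched below. Under the CCE approach, the regression is additionally augmented with cross-sectional averages of the dependent and explanatory variables to absorb common factors.

```latex
% Stylised dynamic panel specification implied by the variable list (illustrative):
\Delta p_{i,t} = \alpha_i
  + \lambda\,\Delta p_{i,t-1}
  + \beta_1\,\Delta y_{i,t}             % per capita income
  + \beta_2\,\Delta r_{t}               % interest rate
  + \beta_3\,\Delta h_{i,t}             % housing stock
  + \gamma\,(p_{i,t-1} - p^{*}_{i,t-1}) % deviation from long-run equilibrium
  + \varepsilon_{i,t}
```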
Procedia PDF Downloads 251
360 The Contribution of Boards to Company Performance via Strategic Management
Authors: Peter Crow
Abstract:
Boards and directors have been subjects of much scholarly research and public interest over several decades, more so since the succession of high-profile company failures of the early 2000s. An array of research outputs including information, correlations, descriptions, models, hypotheses and theories have been reported. While some of this research has shed light on aspects of the board-performance relationship and on board tasks and behaviours, the nature and characteristics of the supposed board-performance relationship remain undetermined. That satisfactory explanations of how boards influence company performance have yet to emerge is a significant blind spot. Yet the board is ultimately responsible for company performance, in accordance with the wishes of shareholders. The aim of this paper is to explore corporate governance and board practice through the lens of strategic management, and to take tentative steps towards a new conception of corporate governance. The findings of a recent longitudinal multiple-case study designed to explore the board's involvement in strategic management are reported. Qualitative and quantitative data were collected from two large quasi-public companies in New Zealand, including from first-hand observations of boards in session, semi-structured interviews with chief executives and chairmen, and the inspection of company and board documentation. A synthetic timeline framework was used to collate the financial, board structure, board activity and decision-making data, in order to provide a holistic perspective. Decision sequences were identified, and realist techniques of abduction and retroduction were iteratively applied to analyse the multi-year data set. Using several models previously proposed in the literature as a guide, conjectures were formed, tested and refined, the culmination of which was a provisional model of how boards can influence performance via strategic management. The model builds on both existing theoretical perspectives and theoretical models proposed in the corporate governance and strategic management literature. This paper seeks to add to the understanding of how boards can make meaningful contributions to value creation via strategic management, and to comment on the qualities of directors, social interactions in boardrooms and other circumstances within which influence might be possible, given the highly contingent relationship between board activity and business performance outcomes.
Keywords: board practice, case study, corporate governance, strategic management
Procedia PDF Downloads 226
359 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Opinion dynamics models use an agent-based modeling approach to model people’s opinions. A model’s properties are usually explored by testing the two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. This can be used to turn one model into another or to change the model’s output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) could be turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model’s dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into the other, in the same way as we convert meters to feet. Thus, in our work, we analyze how this scale transformation may affect opinion dynamics models. We perform our analysis both using mathematical modeling and validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model’s dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way data are pre-processed. Indeed, we quantify that this effect may alter the model’s output by 100%. Using two models from the standard literature, we show that a scale transformation can transform one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of using real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small change of scale may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
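The effect is easy to reproduce in a toy simulation: run the same bounded-confidence interaction rule on the same underlying opinions, once on the original scale and once after a monotone rescaling (here x to x², standing in for a different opinion-to-number mapping), and compare the resulting opinion clusters. The model choice, parameters, and the crude cluster count below are illustrative assumptions, not the paper's exact setup.

```python
# Toy Deffuant-style bounded-confidence model run on the same opinions expressed
# on two different scales; the number of emergent clusters can differ.
import numpy as np

def deffuant(opinions, eps=0.2, mu=0.5, steps=20000, seed=3):
    x = opinions.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i, j = rng.integers(len(x), size=2)
        if abs(x[i] - x[j]) < eps:                  # interact only within the confidence bound
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def n_clusters(x, tol=0.05):                        # crude cluster count by binning
    return len(np.unique(np.round(np.sort(x) / tol)))

rng = np.random.default_rng(0)
raw = rng.random(200)                               # opinions on the original scale
rescaled = raw ** 2                                 # the same opinions on a transformed scale
print(n_clusters(deffuant(raw)), n_clusters(deffuant(rescaled)))  # counts may differ
```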
Procedia PDF Downloads 155
358 Prednisone and Its Active Metabolite Prednisolone Attenuate Lipid Accumulation in Macrophages
Authors: H. Jeries, N. Volkova, C. G. Iglesias, M. Najjar, M. Rosenblat, M. Aviram, T. Hayek
Abstract:
Background: Synthetic forms of glucocorticoids (e.g., prednisone, prednisolone) are anti-inflammatory drugs which are widely used in clinical practice. The role of glucocorticoids (GCs) in cardiovascular diseases, including atherosclerosis, is highly controversial, and their impact on macrophage foam cell formation is still unknown. Our aim was to investigate the effects of prednisone or its active metabolite, prednisolone, on macrophage oxidative stress and lipid metabolism using in-vivo, ex-vivo and in-vitro systems. Methods: The in-vivo study included C57BL/6 mice which were intraperitoneally injected with prednisone or prednisolone (5 mg/kg) for 4 weeks, followed by lipid metabolism analyses in the mouse aorta and in peritoneal macrophages (MPM). In the ex-vivo study, we analyzed the effect of serum samples obtained from 9 healthy volunteers before or after treatment with oral prednisone (20 mg for 5 days) on J774A.1 macrophage atherogenicity. In-vitro studies were conducted using J774A.1 macrophages, human monocyte-derived macrophages (HMDM) and fibroblasts. Cells were incubated with increasing concentrations (0-200 ng/ml) of prednisone or prednisolone, followed by determination of cellular oxidative status and triglyceride and cholesterol metabolism. Results: Prednisone or prednisolone treatment resulted in a significant reduction in cellular triglyceride and, mainly, cholesterol accumulation in MPM and in J774A.1 macrophages incubated with human serum. Similar results were noted in HMDM and in J774A.1 macrophages which were directly incubated with the GCs. These effects were associated with the GCs' inhibitory effect on triglyceride and cholesterol biosynthesis rates, through downregulation of diacylglycerol acyltransferase 1 (DGAT1) expression and of sterol regulatory element-binding protein 2 (SREBP2) and HMGCR expression, respectively. In parallel with the prednisone- or prednisolone-induced reduction in macrophage triglyceride content, paraoxonase 2 (PON2) expression was significantly upregulated. The GC-induced reduction of cellular triglyceride and cholesterol mass was mediated by glucocorticoid receptors on macrophages, since the glucocorticoid receptor antagonist RU 486 abolished these effects. In fibroblasts, unlike macrophages, prednisone or prednisolone showed no anti-atherogenic effects. Conclusions: Prednisone and prednisolone are anti-atherogenic, as they protected macrophages from lipid accumulation and foam cell formation.Keywords: atherosclerosis, cholesterol, foam cell, macrophage, prednisone, prednisolone, triglycerides
Procedia PDF Downloads 145357 Characterizing Nasal Microbiota in COVID-19 Patients: Insights from Nanopore Technology and Comparative Analysis
Authors: David Pinzauti, Simon De Jaegher, Maria D'Aguano, Manuele Biazzo
Abstract:
The COVID-19 pandemic has left an indelible mark on global health, leading to a pressing need for understanding the intricate interactions between the virus and the human microbiome. This study focuses on characterizing the nasal microbiota of patients affected by COVID-19, with a specific emphasis on the comparison with unaffected individuals, to shed light on the crucial role of the microbiome in the development of this viral disease. To achieve this objective, Nanopore technology was employed to analyze the full-length bacterial 16S rRNA gene present in nasal swabs collected in Malta between January 2021 and August 2022. A comprehensive dataset consisting of 268 samples (126 SARS-negative samples and 142 SARS-positive samples) was subjected to a comparative analysis using an in-house, custom pipeline. The findings from this study revealed that individuals affected by COVID-19 possess a nasal microbiota that is significantly less diverse, as evidenced by lower α diversity, and is characterized by distinct microbial communities compared to unaffected individuals. The β diversity analyses were carried out at different taxonomic resolutions. At the phylum level, Bacteroidota was found to be more prevalent in SARS-negative samples, suggesting a potential decrease during the course of viral infection. At the species level, the identification of several specific biomarkers further underscores the critical role of the nasal microbiota in COVID-19 pathogenesis. Notably, species such as Finegoldia magna, Moraxella catarrhalis, and others exhibited distinctive relative abundances in SARS-positive samples, potentially serving as significant indicators of the disease. This study presents valuable insights into the relationship between COVID-19 and the nasal microbiota. The identification of distinct microbial communities and potential biomarkers associated with the disease offers promising avenues for further research and therapeutic interventions aimed at enhancing public health outcomes in the context of COVID-19.Keywords: COVID-19, nasal microbiota, nanopore technology, 16S rRNA gene, biomarkers
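As a minimal sketch of the kind of α-diversity comparison described above (not the study's actual pipeline), the following computes the Shannon index per sample from a taxon count table; the table values and sample names are hypothetical placeholders.

```python
import numpy as np
import pandas as pd

def shannon_index(counts):
    """Shannon alpha-diversity of one sample from raw taxon counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical samples-by-taxa count table (rows = samples, columns = species).
table = pd.DataFrame(
    {"F. magna": [120, 5], "M. catarrhalis": [80, 10], "Other": [300, 900]},
    index=["SARS_positive_1", "SARS_negative_1"],
)
alpha = table.apply(shannon_index, axis=1)
print(alpha)
```

Lower Shannon values for the SARS-positive samples would correspond to the "significantly less diverse" nasal microbiota reported in the abstract; β-diversity analyses would additionally compare community composition between groups.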
Procedia PDF Downloads 68356 Cytokine Profiling in Cultured Endometrial Cells after Hormonal Treatment
Authors: Mark Gavriel, Ariel J. Jaffa, Dan Grisaru, David Elad
Abstract:
The human endometrium-myometrium interface (EMI) is the uterine inner barrier without a separating layer. It is composed of endometrial epithelial cells (EEC) and endometrial stromal cells (ESC) in the endometrium and myometrial smooth muscle cells (MSMC) in the myometrium. The EMI undergoes structural remodeling during the menstrual cycle, which is essential for human reproduction. Recently, we co-cultured a layer-by-layer in vitro model of EEC, ESC and MSMC on a synthetic membrane for mechanobiology experiments. We also treated the model with progesterone and β-estradiol in order to mimic the in vivo receptive uterus. In the present study, we analyzed the cytokine profile in a single layer of EEC from the hormonally treated in vitro model of the EMI. The methodology of this research relies on simple tissue engineering. First, we cultured commercial EEC (RL95-2, ATCC® CRL-1671™) in a 24-well plate. Then, we applied a hormonal stimulation protocol with 17-β-estradiol and progesterone at time-dependent concentrations, in accordance with human physiology, to mimic the menstrual cycle. We collected cell supernatant samples at the control, pre-ovulation, ovulation and post-ovulation periods for analysis of the secreted proteins and cytokines. The cytokine profiling was performed using the Proteome Profiler Human XL Cytokine Array Kit (R&D Systems, Inc., USA), which can detect 105 human soluble cytokines. The relative quantification of all the cytokines will be analyzed using xMAP – LUMINEX. We conducted a fishing expedition with the four-membrane Proteome Profiler. We processed the images, quantified the spot intensities and normalized these values by the negative control and reference spots on the membrane. Analyses of the relative quantities that reflected a change higher than 5% relative to the kit's control points clearly showed significant changes in cytokine levels in the inflammation and angiogenesis pathways. Analysis of tissue-engineered models of the uterine wall will enable deeper investigation of molecular and biomechanical aspects of early reproductive stages (e.g., the window of implantation) or the development of pathologies.Keywords: tissue-engineering, hormonal stimuli, reproduction, multi-layer uterine model, progesterone, β-estradiol, receptive uterine model, fertility
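Purely as an illustrative sketch of the normalization step described above (background subtraction by negative-control spots and scaling by reference spots), the values and function below are hypothetical, not taken from the study.

```python
import numpy as np

def normalize_spots(intensities, reference, negative):
    """Normalize raw spot intensities to the membrane's reference spots
    after subtracting the negative-control background."""
    background = np.mean(negative)
    scale = np.mean(reference) - background
    return (np.asarray(intensities, dtype=float) - background) / scale

# Hypothetical pixel intensities from one membrane.
raw = [5200, 310, 4100]       # three cytokine spots
ref = [9800, 10050]           # reference spots
neg = [150, 180]              # negative-control spots
print(normalize_spots(raw, ref, neg))
```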
Procedia PDF Downloads 131355 Rhizospheric Oxygen Release of Hydroponically Grown Wetland Macrophytes as Passive Source for Cathodic Reduction in Microbial Fuel Cell
Authors: Chabungbam Niranjit Khuman, Makarand Madhao Ghangrekar, Arunabha Mitra
Abstract:
The cost of aeration is one of the limiting factors in the upscaling of microbial fuel cells (MFC) for field-scale applications. Wetland macrophytes have the ability to release oxygen into the water to maintain aerobic conditions in their root zone. In this experiment, the efficacy of rhizospheric oxygen release of wetland macrophytes as a source of oxygen in the cathodic chamber of an MFC was evaluated. The experiment was conducted in an MFC consisting of a three-liter anodic chamber made of a ceramic cylinder and a 27 L cathodic chamber. Untreated carbon felts were used as electrodes (i.e., anode and cathode) and connected to an external load of 100 Ω using stainless steel wire. Wetland macrophytes (Canna indica) were grown in the cathodic chamber of the MFC in a hydroponic fashion using a styrofoam sheet (termed macrophyte-assisted microbial fuel cell, M-MFC). The catholyte (i.e., water) in the M-MFC had negligible contact with atmospheric air due to the styrofoam sheet used for maintaining the hydroponic condition. There was no mixing of the catholyte in the M-MFC. Sucrose-based synthetic wastewater having a chemical oxygen demand (COD) of 3000 mg/L was fed into the anodic chamber of the MFC in fed-batch mode with a liquid retention time of four days. The C. indica thrived well throughout the duration of the experiment without much care. The average dissolved oxygen (DO) concentration and pH value in the catholyte of the M-MFC were 3.25 mg/L and 7.07, respectively. Since the catholyte was not in contact with air, the DO in the catholyte may be considered to originate solely from the rhizospheric oxygen release of C. indica. The maximum COD removal efficiency of the M-MFC observed during the experiment was 76.9%. The inadequacy of the terminal electron acceptor in the cathodic chamber of the M-MFC might have hampered electron transfer, which, in turn, led to slower specific microbial activity, thereby resulting in lower COD removal efficiency than the traditional MFC with an aerated catholyte. An average operating voltage (OV) and open-circuit voltage (OCV) of 294 mV and 594 mV, respectively, were observed in the M-MFC. The maximum power density observed during polarization was 381 mW/m³, and the maximum sustainable power density observed during the experiment was 397 mW/m³ in the M-MFC. A maximum normalized energy recovery and coulombic efficiency of 38.09 Wh/m³ and 1.27%, respectively, were observed. Therefore, it was evidenced that the rhizospheric oxygen release of wetland macrophytes (C. indica) is capable of sustaining the cathodic reaction in MFCs for field-scale applications.Keywords: hydroponic, microbial fuel cell, rhizospheric oxygen release, wetland macrophytes
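A quick back-of-the-envelope check of the reported electrical figures is shown below; it assumes (as is common but not stated explicitly in the abstract) that power density is normalized to the three-liter anodic chamber volume, and it uses the average operating voltage, so it recovers the order of magnitude rather than the reported maxima.

```python
# Ohm's-law estimate of power density from the average operating voltage.
voltage = 0.294          # average operating voltage, V
resistance = 100.0       # external load, ohm
anode_volume = 0.003     # anodic chamber volume, m^3 (assumed normalization basis)

current = voltage / resistance                  # A
power = voltage * current                       # W
power_density = power / anode_volume            # W/m^3

print(f"current: {current * 1000:.2f} mA")
print(f"power density: {power_density * 1000:.0f} mW/m^3")  # roughly 288 mW/m^3
```

The result (~288 mW/m³ at the average operating voltage) is consistent in magnitude with the maximum sustainable power density of 397 mW/m³ reported in the abstract.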
Procedia PDF Downloads 132354 High-Throughput Artificial Guide RNA Sequence Design for Type I, II and III CRISPR/Cas-Mediated Genome Editing
Authors: Farahnaz Sadat Golestan Hashemi, Mohd Razi Ismail, Mohd Y. Rafii
Abstract:
A huge revolution has emerged in genome engineering with the discovery of CRISPR (clustered regularly interspaced palindromic repeats) and CRISPR-associated system genes (Cas) in bacteria. The function of the type II Streptococcus pyogenes (Sp) CRISPR/Cas9 system has been confirmed in various species. Other S. thermophilus (St) CRISPR-Cas systems, CRISPR1-Cas and CRISPR3-Cas, have also been reported to prevent phage infection. The CRISPR1-Cas system interferes by cleaving foreign dsDNA entering the cell in a length-specific and orientation-dependent manner. The S. thermophilus CRISPR3-Cas system also acts by cleaving phage dsDNA genomes at the same specific position inside the targeted protospacer as observed in the CRISPR1-Cas system. It is worth mentioning that, for effective DNA cleavage activity, RNA-guided Cas9 orthologs require their own specific PAM (protospacer adjacent motif) sequences. Activity levels are based on the sequence of the protospacer and specific combinations of favorable PAM bases. Therefore, given the specific PAM length and sequence and a constant target-site length for each of the three Cas9 orthologs, a well-organized procedure is required for high-throughput and accurate mining of possible target sites in a large genomic dataset. Consequently, we created a reliable procedure to explore potential gRNA sequences for the type I (Streptococcus thermophilus), II (Streptococcus pyogenes), and III (Streptococcus thermophilus) CRISPR/Cas systems. To mine CRISPR target sites, four different searching modes of sgRNA binding to the target DNA strand were applied. These searching modes are as follows: i) coding-strand searching, ii) anti-coding-strand searching, iii) both-strand searching, and iv) paired-gRNA searching. The output of such a procedure highlights the power of comparative genome mining for different CRISPR/Cas systems. This could yield a repertoire of Cas9 variants with expanded gRNA design capabilities and will pave the way for further advances in genome and epigenome engineering.Keywords: CRISPR/Cas systems, gRNA mining, Streptococcus pyogenes, Streptococcus thermophilus
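To make the coding-strand and anti-coding-strand searching modes concrete, here is a minimal sketch (not the authors' pipeline) that scans both strands of a sequence for 20-nt protospacers followed by the SpCas9 NGG PAM; the demo sequence is an arbitrary placeholder, and the regular expression would be swapped for the PAM patterns specific to the St CRISPR1 and CRISPR3 systems.

```python
import re

def find_spcas9_targets(seq, guide_len=20):
    """Find candidate SpCas9 protospacers (20 nt followed by an NGG PAM)
    on the coding and anti-coding strands of a DNA sequence."""
    comp = str.maketrans("ACGT", "TGCA")
    hits = []
    for strand, s in (("+", seq), ("-", seq.translate(comp)[::-1])):
        # Lookahead allows overlapping candidate sites to be reported.
        for m in re.finditer(r"(?=([ACGT]{%d})([ACGT]GG))" % guide_len, s):
            hits.append({"strand": strand, "start": m.start(),
                         "protospacer": m.group(1), "pam": m.group(2)})
    return hits

demo = "ATGCTGACGGTACCGTTAGCATGCAAGGTTCACGGATCGGTACGATCAGG"
for hit in find_spcas9_targets(demo):
    print(hit)
```

A paired-gRNA mode would then filter this list for pairs of hits on opposite strands within a chosen spacing window.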
Procedia PDF Downloads 257353 Analysis of Extreme Rainfall Trends in Central Italy
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Marco Cifrodelli, Corrado Corradini
Abstract:
The trend in the magnitude and frequency of extreme rainfalls appears to differ depending on the investigated area of the world. In this work, the impact of climate change on extreme rainfalls in Umbria, an inland region of central Italy, is examined using data recorded during the period 1921-2015 by 10 representative rain gauge stations. The study area is characterized by a complex orography, with altitude ranging from 200 to more than 2000 m a.s.l. The climate is very different from zone to zone, with mean annual rainfall ranging from 650 to 1450 mm and mean annual air temperature from 3.3 to 14.2°C. Over the past 15 years, this region has been affected by four significant droughts as well as by six dangerous flood events, all with a very large impact in economic terms. A least-squares linear trend analysis of annual maxima over 60 time series, selected considering 6 different durations (1 h, 3 h, 6 h, 12 h, 24 h, 48 h), showed about 50% positive and 50% negative cases. For the same time series, the non-parametric Mann-Kendall test with a significance level of 0.05 evidenced only 3% of cases characterized by a negative trend and no positive case. Further investigations also demonstrated that the variance and covariance of each time series can be considered almost stationary. Therefore, the analysis of the magnitude of extreme rainfalls indicates that there is no evident trend in the Umbria region. However, the frequency of rainfall events with particularly high rainfall depths occurring during a fixed period also has to be considered. For all selected stations, the 2-day rainfall events exceeding 50 mm were counted for each year, from the first monitored year to the end of 2015. This analysis also did not show predominant trends. Specifically, for all selected rain gauge stations, the annual number of 2-day rainfall events exceeding the threshold value (50 mm) was slowly decreasing in time, while the annual cumulated rainfall depths corresponding to the same events showed trends that were not statistically significant. Overall, by using a wide available dataset and adopting simple methods, no influence of climate change on heavy rainfalls in the Umbria region is detected.Keywords: climate changes, rainfall extremes, rainfall magnitude and frequency, central Italy
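For readers unfamiliar with the test used above, the following is a minimal, self-contained sketch of the non-parametric Mann-Kendall trend test (normal approximation, no tie correction); the annual-maxima series shown is hypothetical and not from the Umbria dataset.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series, alpha=0.05):
    """Mann-Kendall trend test applied to a series of annual rainfall maxima."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S statistic: sum of signs over all ordered pairs.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    trend = "no trend" if p >= alpha else ("increasing" if z > 0 else "decreasing")
    return {"S": int(s), "z": float(z), "p": float(p), "trend": trend}

# Hypothetical annual maxima of 24-h rainfall depth (mm) at one station.
annual_max_24h = [62, 55, 71, 58, 49, 80, 66, 52, 75, 60, 57, 68]
print(mann_kendall(annual_max_24h))
```

Applied at the 0.05 significance level to each of the 60 duration-specific series, this is the procedure that yields the "3% negative, no positive" result reported in the abstract.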
Procedia PDF Downloads 236352 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Load forecasting has become crucial in recent years and has grown popular within the forecasting field. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and healthy, reliable grid systems. Effective power forecasting of renewable energy load leads decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can be used to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data. LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out on these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by varying the number of layers, cell counts and dropout rates. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results among the 432 tested models were 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models is successful compared to results reported in the literature.Keywords: deep learning, long short term memory, energy, renewable energy load forecasting
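As a minimal sketch of one configuration from the kind of LSTM sweep described above (not the authors' 432-model setup), the example below trains a single LSTM layer with dropout and the Adam optimizer on hourly windows; the synthetic sine series, window length, and hyperparameters are placeholder assumptions standing in for the 2016-2017 consumption data.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(series, lookback=24):
    """Turn an hourly series into (samples, timesteps, features) windows."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., np.newaxis], np.array(y)

# Placeholder hourly load series; the study would use the real measurements.
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)
X, y = make_windows(series)

model = keras.Sequential([
    keras.Input(shape=(X.shape[1], 1)),
    layers.LSTM(64),        # LSTM cells retain long-range temporal context
    layers.Dropout(0.2),    # dropout rate is one of the swept hyperparameters
    layers.Dense(1),
])
model.compile(optimizer=keras.optimizers.Adam(), loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [MSE, MAE]
```

Varying the layer count, LSTM cell count, and dropout rate over a grid of such configurations, and comparing MAE/MSE on held-out data, reproduces the model-selection procedure the abstract describes.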
Procedia PDF Downloads 266351 Statistical Correlation between Logging-While-Drilling Measurements and Wireline Caliper Logs
Authors: Rima T. Alfaraj, Murtadha J. Al Tammar, Khaqan Khan, Khalid M. Alruwaili
Abstract:
OBJECTIVE/SCOPE (25-75): Caliper logging data provides critical information about wellbore shape and deformations, such as stress-induced borehole breakouts or washouts. Multiarm mechanical caliper logs are often run using wireline, which can be time-consuming, costly, and/or challenging to run in certain formations. To minimize rig time and improve operational safety, it is valuable to develop analytical solutions that can estimate caliper logs using available Logging-While-Drilling (LWD) data without the need to run wireline caliper logs. As a first step, the objective of this paper is to perform statistical analysis using an extensive dataset to identify important physical parameters that should be considered in developing such analytical solutions. METHODS, PROCEDURES, PROCESS (75-100): Caliper logs and LWD data of eleven wells, with a total of more than 80,000 data points, were obtained and imported into data analytics software for analysis. Several parameters were selected to test their relationship with the measured maximum and minimum caliper logs. These parameters include gamma ray, porosity, shear and compressional sonic velocities, bulk density, and azimuthal density. The data of the eleven wells were first visualized and cleaned. Using the analytics software, several analyses were then performed, including the computation of Pearson's correlation coefficients to show the statistical relationship between the selected parameters and the caliper logs. RESULTS, OBSERVATIONS, CONCLUSIONS (100-200): The results of this statistical analysis showed that some parameters correlate well with the caliper log data. For instance, the bulk density and azimuthal directional densities showed Pearson's correlation coefficients in the range of 0.39 to 0.57, which were relatively high compared to the correlation coefficients of the caliper data with other parameters. Other parameters, such as porosity, exhibited extremely low correlation coefficients with the caliper data. Various crossplots and visualizations of the data were also examined to gain further insights from the field data. NOVEL/ADDITIVE INFORMATION (25-75): This study offers a unique and novel look into the relative importance and correlation between different LWD measurements and wireline caliper logs via an extensive dataset. The results pave the way for a more informed development of new analytical solutions for estimating the size and shape of the wellbore in real-time while drilling using LWD data.Keywords: LWD measurements, caliper log, correlations, analysis
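As an illustration of the Pearson-correlation screening step (not the study's software or data), the snippet below ranks LWD measurements by the absolute strength of their correlation with the maximum caliper reading; the column names and values are hypothetical placeholders, not the field mnemonics used in the paper.

```python
import pandas as pd

# Hypothetical depth-indexed table of caliper and LWD measurements.
df = pd.DataFrame({
    "caliper_max_in": [8.6, 8.9, 9.4, 8.7, 9.1],
    "bulk_density_gcc": [2.45, 2.40, 2.31, 2.44, 2.36],
    "gamma_ray_api": [55, 70, 95, 60, 88],
    "porosity_frac": [0.12, 0.15, 0.22, 0.13, 0.19],
})

# Pearson correlation of each LWD measurement with the maximum caliper log.
corr = df.corr(method="pearson")["caliper_max_in"].drop("caliper_max_in")
print(corr.sort_values(key=abs, ascending=False))
```

The same ranking, computed over the full 80,000-point multi-well dataset, is what identifies bulk and azimuthal densities as the more promising inputs for a caliper-estimation model.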
Procedia PDF Downloads 121