Search results for: specific methanogenic activity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13220

230 Identification of Genomic Mutations in Prostate Cancer and Cancer Stem Cells By Single Cell RNAseq Analysis

Authors: Wen-Yang Hu, Ranli Lu, Mark Maienschein-Cline, Danping Hu, Larisa Nonn, Toshi Shioda, Gail S. Prins

Abstract:

Background: Genetic mutations are highly associated with increased prostate cancer risk. In addition to whole genome sequencing, somatic mutations can be identified by aligning transcriptome sequences to the human genome. Here we analyzed bulk RNAseq and single cell RNAseq data of human prostate cancer cells and their matched non-cancer cells from benign regions in 4 individual patients. Methods: Raw sequencing reads were aligned to the reference genome hg38 using STAR. Variants were annotated using ANNOVAR with respect to overlapping gene annotations, effects on gene and protein sequence, and SIFT predictions of nonsynonymous variant effects. We determined cancer-specific novel alleles by comparing variant calls in cancer cells to matched benign cells from the same individual and selecting alleles detected only in the cancer samples. Results: In bulk RNAseq data from 3 patients, the most common variants were noncoding mutations in the UTR3/UTR5 regions, and the major variant types were single-nucleotide polymorphisms (SNPs), together with frameshift mutations. C>T transitions were the most frequently observed substitutions. A total of 222 genes carrying unique exonic or UTR variants were identified in cancer cells across the 3 patients but not in benign cells. Among them, transcript levels of 7 genes (CITED2, YOD1, MCM4, HNRNPA2B1, KIF20B, DPYSL2, NR4A1) were significantly up- or down-regulated in cancer stem cells. Of the 222 commonly mutated genes in cancer, 19 carried nonsynonymous variants and 11 were damaged genes with SIFT-predicted deleterious variants, frameshifts, stop gains/losses, or insertions/deletions (indels). Two damaged genes, activating transcription factor 6 (ATF6) and the histone demethylase KDM3A, are of particular interest; the former is a survival factor for certain cancer cells, while the latter positively activates androgen receptor target genes in prostate cancer. Further, single cell RNAseq data of cancer cells and their matched non-cancer benign cells from both primary 2D and 3D tumoroid cultures were analyzed. Similar to the bulk RNAseq data, single cell RNAseq in cancer demonstrated that exonic mutations are less common than noncoding variants, with SNPs and frameshift mutations the most frequent types in cancer. Compared to cancer stem cell-enriched 3D tumoroids, 2D cancer cells carried 3 times more variants, 8 times more coding mutations, and 10 times more nonsynonymous SNPs. Finally, in both 2D primary and 3D tumoroid cultures, cancer stem cells exhibited fewer coding mutations and fewer noncoding SNPs or insertions/deletions than non-stem cancer cells. Summary: Our study demonstrates the usefulness of bulk and single cell RNAseq data in identifying somatic mutations in prostate cancer, providing an alternative method for screening candidate genes for prostate cancer diagnosis and potential therapeutic targets. Cancer stem cells carry fewer somatic mutations than non-stem cancer cells, consistent with their retention of immortal strand DNA inherited from parental stem cells, which may explain their long-lived characteristics.
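
To make the variant-selection step concrete, the minimal Python sketch below compares variant calls from a cancer sample and its matched benign sample and keeps only the alleles unique to the cancer cells. It is an illustration rather than the authors' pipeline: the file names are hypothetical, and variants are assumed to be available as (chromosome, position, reference, alternate) records parsed from per-sample VCF files.

```python
# Minimal sketch: cancer-specific variant selection by set difference.
# Assumes variants were already called per sample (e.g., from STAR-aligned
# RNAseq reads) and are stored in VCF files; names below are hypothetical.
def load_variants(vcf_path):
    variants = set()
    with open(vcf_path) as handle:
        for line in handle:
            if line.startswith("#"):          # skip VCF header lines
                continue
            chrom, pos, _id, ref, alt = line.split("\t")[:5]
            variants.add((chrom, int(pos), ref, alt))
    return variants

cancer = load_variants("patient1_cancer.vcf")   # hypothetical file names
benign = load_variants("patient1_benign.vcf")

cancer_specific = cancer - benign               # alleles seen only in cancer
print(f"{len(cancer_specific)} cancer-specific alleles")
```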

Keywords: prostate cancer, stem cell, genomic mutation, RNAseq

Procedia PDF Downloads 16
229 Radioprotective Effects of Super-Paramagnetic Iron Oxide Nanoparticles Used as Magnetic Resonance Imaging Contrast Agent for Magnetic Resonance Imaging-Guided Radiotherapy

Authors: Michael R. Shurin, Galina Shurin, Vladimir A. Kirichenko

Abstract:

Background. Hepatic malignancies are poorly visible on non-contrast imaging used for daily target verification prior to radiation therapy on MRI-guided linear accelerators (MR-Linac). Ferumoxytol® (Feraheme, AMAG Pharmaceuticals, Waltham, MA) is a superparamagnetic iron oxide nanoparticle (SPION) agent that is increasingly utilized off-label as a hepatic MRI contrast agent. This agent has the advantage of providing a functional assessment of the liver based upon its uptake by hepatic Kupffer cells proportionate to vascular perfusion, resulting in strong T1, T2 and T2* relaxation effects and enhanced contrast of malignant tumors, which lack Kupffer cells. The latter characteristic has recently been utilized for MRI-guided radiotherapy planning with precision targeting of liver malignancies. However, the potential radiotoxicity of SPIONs has never been addressed for their safe use as an MRI contrast agent during liver radiotherapy on MR-Linac. This study defines the radiomodulating properties of SPIONs in vitro on human monocyte and macrophage cell lines exposed to 60Co gamma-rays within the clinical radiotherapy dose range. Methods. Human monocyte and macrophage cell lines in culture were loaded with a clinically relevant concentration of Ferumoxytol (30 µg/ml) for 2 and 24 h and irradiated with 3 Gy, 5 Gy and 10 Gy. Cells were washed and cultured for an additional 24 and 48 h prior to assessing their phenotypic activation by flow cytometry and their function, including viability (Annexin V/PI assay), proliferation (MTT assay) and cytokine expression (Luminex assay). Results. Our results revealed that SPIONs affected both human monocytes and macrophages in vitro. Specifically, iron oxide nanoparticles decreased radiation-induced apoptosis and prevented radiation-induced inhibition of human monocyte proliferative activity. Furthermore, Ferumoxytol protected monocytes from radiation-induced modulation of phenotype. For instance, while irradiation decreased polarization of monocytes to the CD11b+CD14+ and CD11bnegCD14neg phenotypes, Ferumoxytol prevented these effects. In macrophages, Ferumoxytol counteracted the ability of radiation to up-regulate cell polarization to the CD11b+CD14+ phenotype and prevented radiation-induced down-regulation of HLA-DR and CD86 expression. Finally, Ferumoxytol uptake by human monocytes down-regulated expression of the pro-inflammatory chemokines MIP-1α (macrophage inflammatory protein 1α), MIP-1β (CCL4) and RANTES (CCL5). In macrophages, Ferumoxytol reversed the expression of IL-1RA, IL-8, IP-10 (CXCL10) and TNF-α, and up-regulated expression of MCP-1 (CCL2) and MIP-1α in irradiated macrophages. Conclusion. The SPION agent Ferumoxytol increases resistance of human monocytes to radiation-induced cell death in vitro and supports an anti-inflammatory phenotype of human macrophages under radiation. The effect is radiation dose-dependent and depends on the duration of Feraheme uptake. This study also provides strong evidence that SPIONs reverse the effect of radiation on the expression of pro-inflammatory cytokines involved in the initiation and development of radiation-induced liver damage. Correlative translational work at our institution will directly assess the cytoprotective effects of Ferumoxytol on human Kupffer cells in vitro and via ex vivo analysis of explanted liver specimens in a subset of patients receiving Feraheme-enhanced MRI-guided radiotherapy to primary liver tumors as a bridge to liver transplant.

Keywords: superparamagnetic iron oxide nanoparticles, radioprotection, magnetic resonance imaging, liver

Procedia PDF Downloads 71
228 Solid Polymer Electrolyte Membranes Based on Siloxane Matrix

Authors: Natia Jalagonia, Tinatin Kuchukhidze

Abstract:

Polymer electrolytes (PE) play an important part in electrochemical devices such as batteries and fuel cells. To achieve optimal performance, the PE must maintain high ionic conductivity and mechanical stability at both high and low relative humidity. The polymer electrolyte also needs excellent chemical stability for long service life and robustness. According to the prevailing theory, ionic conduction in polymer electrolytes is facilitated by the large-scale segmental motion of the polymer backbone and primarily occurs in the amorphous regions of the polymer electrolyte. Crystallinity restricts polymer backbone segmental motion and significantly reduces conductivity. Consequently, polymer electrolytes with high conductivity at room temperature have been sought among polymers with highly flexible backbones and largely amorphous morphology. Interest in polymer electrolytes has also been increased by potential applications of solid polymer electrolytes in high energy density solid state batteries, gas sensors and electrochromic windows. A conductivity of 10⁻³ S/cm is commonly regarded as the necessary minimum value for practical applications in batteries. At present, polyethylene oxide (PEO)-based systems are the most thoroughly investigated, reaching room temperature conductivities of 10⁻⁷ S/cm in some cross-linked salt-in-polymer systems based on amorphous PEO-polypropylene oxide copolymers. It is widely accepted that amorphous polymers with low glass transition temperatures Tg and high segmental mobility are important prerequisites for high ionic conductivities. Another necessary condition for high ionic conductivity is high salt solubility in the polymer, which is most often achieved by donors such as ether oxygen or imide groups on the main chain or on the side groups of the PE. It is also well established that lithium ion coordination takes place predominantly in the amorphous domain, and that the segmental mobility of the polymer is an important factor in determining the ionic mobility. Great attention has been paid to PEO-based amorphous electrolytes obtained by synthesis of comb-like polymers, attaching short ethylene oxide unit sequences to an existing amorphous polymer backbone. The aim of the presented work is to obtain solid polymer electrolyte membranes using PMHS as a matrix. For this purpose, the hydrosilylation reactions of α,ω-bis(trimethylsiloxy)methylhydrosiloxane with allyl triethylene glycol monomethyl ether and vinyltriethoxysilane at a 1:28:7 ratio of the initial compounds were studied in the presence of Karstedt's catalyst, chloroplatinic acid (0.1 M solution in THF) and a platinum-on-carbon catalyst, in 50% solution in anhydrous toluene. The synthesized oligomers are vitreous liquid products that are readily soluble in organic solvents, with specific viscosity ηsp ≈ 0.05 - 0.06. The synthesized oligomers were analysed by FTIR and 1H, 13C, 29Si NMR spectroscopy. The synthesized polysiloxanes were investigated by wide-angle X-ray diffraction, gel-permeation chromatography, and DSC analyses. Solid polymer electrolyte membranes were obtained via sol-gel processing of the polymer systems doped with lithium trifluoromethanesulfonate (triflate) or lithium bis(trifluoromethylsulfonyl)imide. The dependence of ionic conductivity on temperature and salt concentration was investigated, and the activation energies of conductivity were calculated for all obtained compounds.
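
As an illustration of the final step, the sketch below extracts an activation energy from conductivity-temperature data by a linear fit of ln σ versus 1/T. It assumes Arrhenius-type behavior and uses made-up values, since the abstract reports neither the raw data nor the conductivity model (VTF fits are also common for amorphous polymer electrolytes).

```python
import numpy as np

# Hypothetical sigma(T) data for one membrane (S/cm); not measured values.
T = np.array([293.0, 313.0, 333.0, 353.0, 373.0])           # K
sigma = np.array([1.2e-6, 4.8e-6, 1.6e-5, 4.5e-5, 1.1e-4])  # S/cm

# Arrhenius form: ln(sigma) = ln(sigma0) - Ea / (k_B * T)
k_B = 8.617e-5  # Boltzmann constant in eV/K
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea_eV = -slope * k_B

print(f"Activation energy ~ {Ea_eV:.2f} eV")
```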

Keywords: synthesis, PMHS, membrane, electrolyte

Procedia PDF Downloads 256
227 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface

Authors: Renata Gerhardt, Detlev Belder

Abstract:

Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis because it is non-destructive and molecules can be identified via the fingerprint region of the spectra. In this work, possibilities are investigated for integrating Raman spectroscopy as a detection method for chip-based chromatography, making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules, and no structural information is provided. Another often applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio. Additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as an end-of-the-line detection because analytes are destroyed during measurements. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single molecule level. In SERS measurements, Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization and detection, on a single device. We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics which operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs to realize the coupling of chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus making use of the surface enhancement effect and improving the sensitivity of the detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface enhanced Raman spectroscopy as the means of detection.

Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS

Procedia PDF Downloads 243
226 Research Project of National Interest (PRIN-PNRR) DIVAS: Developing Methods to Assess Tree Vitality after a Wildfire through Analyses of Cambium Sugar Metabolism

Authors: Claudia Cocozza, Niccolò Frassinelli, Enrico Marchi, Cristiano Foderi, Alessandro Bizzarri, Margherita Paladini, Maria Laura Traversi, Eleftherious Touloupakis, Alessio Giovannelli

Abstract:

The development of tools to quickly identify the fate of injured trees after stress is highly relevant when biodiversity restoration of damaged sites is based on nature-based solutions. In this context, an approach to assess irreversible physiological damage within trees could help support planning and management decisions for perturbed sites, restore biodiversity, safeguard the environment, and improve understanding of functional adjustments of ecosystems. Tree vitality can be estimated by a series of physiological proxies such as cambium activity and the amounts of starch and soluble sugars in C-sinks, whilst the accumulation of ethanol within the cambial cells and phloem is considered an alert of cell death. However, their determination requires time-consuming laboratory protocols, which makes the approach unfeasible as a practical option in the field. The project aims to develop biosensors to assess the concentration of soluble sugars and ethanol in stem tissues. Soluble sugar and ethanol concentrations will be used to characterize injured trees and to discriminate compromised from recovering trees directly in the forest. To reach this goal, we selected study sites subjected to prescribed fires or recent wildfires as experimental set-ups. Indeed, in Mediterranean countries, forest fire is a recurrent event that must be considered a central component of regional and global strategies in forest management and biodiversity restoration programs. A biosensor will be developed through a multistep process involving target analyte characterization, bioreceptor selection, and, finally, calibration and testing of the sensor. To validate biosensor signals, soluble sugars and ethanol will be quantified by HPLC and GC using synthetic media (in the lab) and phloem sap (in the field), whilst cambium vitality will be assessed by anatomical observations. On burnt trees, stem growth will be monitored by dendrometers and/or estimated by tree ring analyses, whilst the tree response to past fire events will be assessed by isotopic discrimination. Moreover, fire characterization and the visual assessment procedure will be used to assign burnt trees to a vitality class. At the end of the project, a well-defined procedure combining the biosensor signal and visual assessment will be produced and applied to a study case. The project outcomes and the results obtained will be properly packaged to reach, engage and address the needs of the final users, and will be widely shared with relevant stakeholders involved in the optimal use of biosensors and in the management of post-fire areas. This project was funded by the National Recovery and Resilience Plan (NRRP), Mission 4, Component C2, Investment 1.1 - Call for tender No. 1409 of 14 September 2022 – ‘Progetti di Ricerca di Rilevante interesse Nazionale – PRIN’ of the Italian Ministry of University and Research, funded by the European Union – NextGenerationEU; Grant N° P2022Z5742, CUP B53D23023780001.

Keywords: phloem, scorched crown, conifers, prescribed burning, biosensors

Procedia PDF Downloads 15
225 Understanding Beginning Writers' Narrative Writing with a Multidimensional Assessment Approach

Authors: Huijing Wen, Daibao Guo

Abstract:

Writing is thought to be the most complex facet of language arts. Assessing writing is difficult and subjective, and few scientifically validated assessments exist. Research has proposed evaluating writing using a multidimensional approach, including both qualitative and quantitative measures of handwriting, spelling and prose. Given that narrative writing has historically been a staple of literacy instruction in primary grades and is one of the three major genres the Common Core State Standards require students to acquire starting in kindergarten, it is essential for teachers to understand how to measure beginning writers' writing development and sources of writing difficulty through narrative writing. Guided by theoretical models of early written expression and using empirical data, this study examines ways teachers can enact a comprehensive approach to understanding beginning writers' narrative writing through three writing rubrics developed for a Curriculum-Based Measurement (CBM). The goal is to help classroom teachers structure a framework for assessing early writing in primary classrooms. Participants in this study included 380 first-grade students from 50 classrooms in 13 schools in three school districts in a Mid-Atlantic state. Three writing tests were used to assess first graders' writing skills in relation to both transcription (i.e., handwriting fluency and spelling tests) and translational skills (i.e., a narrative prompt). First graders were asked to respond to a narrative prompt in 20 minutes. Grounded in theoretical models of early written expression and empirical evidence of key contributors to early writing, all written responses to the narrative prompt were coded in three ways for different dimensions of writing: length, quality, and genre elements. To measure the quality of the narrative writing, a traditional holistic rating rubric was developed by the researchers based on the CCSS and the general traits of good writing. Students' genre knowledge was measured using a separate analytic rubric for narrative writing. Findings showed that first-graders had emerging and limited transcriptional and translational skills, with nascent knowledge of genre conventions. The findings provide support for the Not-So-Simple View of Writing, in that fluent written expression, measured by length, and other important linguistic resources, measured by the overall quality and genre knowledge rubrics, are fundamental to early writing development. Our study echoed previous research findings on children's narrative development. The study has practical classroom application as it informs writing instruction and assessment. It offers practical guidelines for classroom instruction by providing teachers with a better understanding of first graders' narrative writing skills and knowledge of genre conventions. Understanding students' narrative writing provides teachers with more insight into the specific strategies students might use during writing and their understanding of good narrative writing. Additionally, it is important for teachers to differentiate writing instruction given the individual differences shown by our multiple writing measures. Overall, the study sheds light on beginning writers' narrative writing, indicating the complexity of early writing development.

Keywords: writing assessment, early writing, beginning writers, transcriptional skills, translational skills, primary grades, simple view of writing, writing rubrics, curriculum-based measurement

Procedia PDF Downloads 73
224 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times; these repeated voxels incur costly memory operations that carry no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, which quantifies the alignment of each triangle with its primary axis. This metric provides insight into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
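
The core idea of sweeping a triangle with parallel, equidistant scan-lines can be sketched as follows. This is a simplified CPU illustration in Python, not the paper's GLSL compute shader: it does not reproduce the exactly-once voxel guarantee or the Gap Detection step, and the spacing heuristic is an assumption.

```python
import numpy as np

def voxelize_triangle(v0, v1, v2, voxel_size):
    """Voxelize one triangle surface by sweeping parallel, equidistant
    scan-lines across its interior (naive sketch of the scan-line idea)."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    voxels = set()

    # Sweep scan-lines from edge v0->v1 toward the opposite vertex v2.
    # Spacing is kept below the voxel size so adjacent lines cannot skip
    # an entire voxel row (an assumed heuristic, not the paper's rule).
    step = voxel_size * 0.5
    n_lines = max(int(np.ceil(max(np.linalg.norm(v2 - v0),
                                  np.linalg.norm(v2 - v1)) / step)), 1)
    for i in range(n_lines + 1):
        t = i / n_lines
        a = v0 + t * (v2 - v0)            # start of the scan-line
        b = v1 + t * (v2 - v1)            # end of the scan-line
        n_samples = max(int(np.ceil(np.linalg.norm(b - a) / step)), 1)
        for j in range(n_samples + 1):
            p = a + (j / n_samples) * (b - a)
            voxels.add(tuple(np.floor(p / voxel_size).astype(int)))
    return voxels

# Example: voxelize a single triangle at resolution 0.1
tri_voxels = voxelize_triangle([0, 0, 0], [1, 0, 0], [0, 1, 0.3], 0.1)
print(len(tri_voxels), "surface voxels")
```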

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 70
223 Correlation of Clinical and Sonographic Findings with Cytohistology for Diagnosis of Ovarian Tumours

Authors: Meenakshi Barsaul Chauhan, Aastha Chauhan, Shilpa Hurmade, Rajeev Sen, Jyotsna Sen, Monika Dalal

Abstract:

Introduction: Ovarian masses are common forms of neoplasm in women and represent two-thirds of gynaecological malignancies. A pre-operative suggestion of malignancy can lead the gynecologist to refer women with a suspected pelvic mass to a gynecological oncologist for appropriate and optimized treatment, which can improve survival. In the younger age group, preoperative differentiation into benign or malignant pathology can decide between conservative and radical surgery. Imaging modalities have a definite role in establishing the diagnosis. By using the International Ovarian Tumor Analysis (IOTA) classification with sonography, reliance on costly radiological methods such as Magnetic Resonance Imaging (MRI) and computed tomography (CT) scans can be reduced, especially in developing countries like India. Thus, this study was undertaken to evaluate the role of clinical methods and sonography in diagnosing the nature of ovarian tumors. Material and Methods: This prospective observational study was conducted on 40 patients presenting with ovarian masses in the Department of Obstetrics and Gynaecology at a tertiary care center in northern India. Functional cysts were excluded. Ultrasonography and color Doppler were performed in all cases. IOTA simple rules were applied, which take into account locularity, size, presence of solid components, acoustic shadow, Doppler flow, etc. MRI/CT scans of the abdomen and pelvis were done in cases where sonography was inconclusive. In inoperable cases, fine needle aspiration cytology (FNAC) was done. The histopathology report after surgery, or the cytology report after FNAC, was correlated statistically with the pre-operative diagnosis made clinically and sonographically using the IOTA rules. Statistical Analysis: Descriptive measures were analyzed using means and standard deviations with the Student t-test, and proportions were analyzed with the chi-square test. Inferential measures were analyzed by sensitivity, specificity, negative predictive value, and positive predictive value. Results: A provisional diagnosis of benign tumor was made in 16 (42.5%) patients and of malignant tumor in 24 (57.5%) patients on the basis of clinical findings. With the IOTA simple rules on sonography, 15 (37.5%) masses were found to be benign and 23 (57.5%) malignant, while findings were inconclusive in 2 patients (5%). FNAC/histopathology, taken as the gold standard, reported 14 (35%) benign ovarian tumors and 26 (65%) malignant. Clinical findings alone had a sensitivity of 66.6% and a specificity of 90.9%. USG alone had a sensitivity of 86% and a specificity of 80%. When clinical findings and the IOTA simple rules on sonography were combined (excluding inconclusive masses), the sensitivity and specificity were 83.3% and 92.3%, respectively; including inconclusive masses, sensitivity was 91.6% and specificity 89.2%. Conclusion: The IOTA simple sonography rules are highly sensitive and specific in the prediction of ovarian malignancy and are also easy to use and reproducible. Combining clinical examination with USG will therefore help in the better management of patients in terms of time, cost, and prognosis, and will also reduce the need for costlier modalities such as CT and MRI.
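
For reference, the diagnostic indices reported above are computed from a 2×2 cross-tabulation of the test result against the cytology/histopathology gold standard, as in the sketch below; the counts shown are placeholders, since the abstract does not report the full tables.

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # malignant cases correctly flagged
        "specificity": tn / (tn + fp),  # benign cases correctly cleared
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Placeholder counts for illustration only (not the study's data).
print(diagnostic_indices(tp=22, fp=2, fn=4, tn=12))
```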

Keywords: benign, international ovarian tumor analysis classification, malignant, ovarian tumours, sonography

Procedia PDF Downloads 78
222 Fuzzy Time Series- Markov Chain Method for Corn and Soybean Price Forecasting in North Carolina Markets

Authors: Selin Guney, Andres Riquelme

Abstract:

Among the main purposes of optimal and efficient forecasts of agricultural commodity prices is to guide firms in their economic decision making, such as planning business operations and marketing decisions. Governments are also both beneficiaries and suppliers of agricultural price forecasts. They use this information to establish proper agricultural policy; hence the forecasts affect social welfare, and systematic errors in forecasts could lead to a misallocation of scarce resources. Various empirical approaches with different methodologies have been applied to forecast commodity prices. The most commonly used approaches to forecasting commodity sectors depend on classical time series models, which assume that the values of the response variables are precise, which is quite often not true in reality. Recently, this literature has evolved toward fuzzy time series models, which provide more flexibility with respect to classical time series assumptions such as stationarity and large sample size requirements. Moreover, the fuzzy modeling approach allows decision making with estimated values under incomplete information or uncertainty. A number of fuzzy time series models have been developed and implemented over the last decades; however, most of them are not appropriate for forecasting repeated and nonconsecutive transitions in the data. The modeling scheme used in this paper eliminates this problem by introducing a Markov modeling approach that takes into account both repeated and nonconsecutive transitions. Also, the determination of the interval length is crucial for forecast accuracy. The problem of determining the interval length arbitrarily is overcome, and a methodology is proposed to determine the proper interval length based on the distribution or mean of the first differences of the series in order to improve forecast accuracy. The specific purpose of this paper is to propose and investigate the potential of a new forecasting model that integrates this methodology for determining the proper interval length with a Fuzzy Time Series-Markov Chain model. Moreover, the forecasting accuracy of the proposed integrated model is compared to different univariate time series models, and the superiority of the proposed method over competing methods in terms of modelling and forecasting is demonstrated on the basis of forecast evaluation criteria. The application is to daily corn and soybean prices observed at three commercially important North Carolina markets: Candor, Cofield and Roaring River for corn, and Fayetteville, Cofield and Greenville City for soybeans, respectively. One main conclusion from this paper is that using fuzzy logic improves forecast performance and accuracy; the effectiveness and potential benefits of the proposed model are confirmed by small values of selection criteria such as MAPE. The paper concludes with a discussion of the implications of integrating fuzzy logic and nonarbitrary determination of the interval length for the reliability and accuracy of price forecasts. The empirical results represent a significant contribution to our understanding of the applicability of fuzzy modeling to commodity price forecasts.
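
A compact sketch of the core mechanics is given below: the universe of discourse is partitioned into intervals whose length is derived from the mean of the first differences, observations are fuzzified to interval states, a Markov transition matrix is estimated, and one-step forecasts are the probability-weighted interval midpoints. This illustrates the general scheme rather than the authors' exact specification, and the example prices are made up.

```python
import numpy as np

def fts_markov_forecast(series):
    """One-step-ahead forecasts with a simple fuzzy time series / Markov
    chain scheme (sketch): partition, fuzzify, estimate transitions,
    forecast with probability-weighted interval midpoints."""
    series = np.asarray(series, dtype=float)

    # Interval length from the mean absolute first difference (one
    # non-arbitrary choice; the paper derives it from the distribution
    # or mean of first differences).
    diffs = np.abs(np.diff(series))
    length = max(diffs.mean() / 2.0, 1e-9)

    lo, hi = series.min(), series.max() + 1e-9
    n_states = int(np.ceil((hi - lo) / length))
    edges = lo + length * np.arange(n_states + 1)
    mids = (edges[:-1] + edges[1:]) / 2.0

    # Fuzzify: assign each observation to its interval (state).
    states = np.clip(np.searchsorted(edges, series, side="right") - 1,
                     0, n_states - 1)

    # Markov transition matrix estimated from observed state transitions.
    trans = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        trans[s, t] += 1
    row_sums = trans.sum(axis=1, keepdims=True)
    trans = np.divide(trans, row_sums, out=np.zeros_like(trans),
                      where=row_sums > 0)

    # Forecast each step from the previous state's transition probabilities.
    forecasts = [trans[s] @ mids if trans[s].sum() > 0 else series[i]
                 for i, s in enumerate(states[:-1])]
    return np.array(forecasts)

# Illustrative usage with made-up daily prices (not the North Carolina data)
prices = [4.10, 4.15, 4.08, 4.20, 4.25, 4.22, 4.30, 4.28, 4.35, 4.31]
print(fts_markov_forecast(prices))
```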

Keywords: commodity, forecast, fuzzy, Markov

Procedia PDF Downloads 216
221 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) method was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is the most critical input variable, but it has a negative effect on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is the most important factor and has a positive effect on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on United States Patent and Trademark Office data for artificial intelligence patents; future research could consider more comprehensive, globally sourced AI patent data. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
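
The modeling-plus-explanation step can be illustrated with a short LightGBM/SHAP sketch. The data below are synthetic and the feature names are merely modeled on the indicators mentioned in the abstract; the snippet only shows how mean absolute SHAP values rank feature contributions for a fitted regressor.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for early patent indicators and an impact target.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "novelty": rng.random(1000),
    "n_owners": rng.integers(1, 6, 1000),
    "n_backward_citations": rng.integers(0, 50, 1000),
    "n_independent_claims": rng.integers(1, 20, 1000),
    "n_applicants": rng.integers(1, 8, 1000),
})
y = (2.0 * X["novelty"] + 0.1 * X["n_backward_citations"]
     + 0.2 * X["n_independent_claims"] + rng.normal(0, 0.5, 1000))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted regression model (LightGBM).
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_train, y_train)

# SHAP values quantify each feature's contribution to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {value:.3f}")
```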

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 47
220 Sustainable Crop Production: Greenhouse Gas Management in Farm Value Chain

Authors: Aswathaman Vijayan, Manish Jha, Ullas Theertha

Abstract:

Climate change and global warming have become an issue for both developed and developing countries and are perhaps the biggest threat to the environment. We at ITC Limited believe that a company’s performance must be measured by its Triple Bottom Line contribution to building economic, social and environmental capital. This Triple Bottom Line strategy focuses on embedding sustainability in business practices, investing in social development and adopting a low carbon growth path with a cleaner environment approach. The Agri Business Division - ILTD operates in the tobacco crop growing regions of the Andhra Pradesh and Karnataka provinces of India. The Agri value chain of the company comprises two distinct phases: the first phase is agricultural operations undertaken by ITC-trained farmers, and the second phase is industrial operations, which include marketing and processing of the agricultural produce. This research work covers the Greenhouse Gas (GHG) management strategy of ITC in the agricultural operations undertaken by the farmers. The agriculture sector adds considerably to global GHG emissions through the use of carbon-based energy, the use of fertilizers and other farming operations such as ploughing. In order to minimize the impact of farming operations on the environment, ITC has taken a big leap in implementing systems and processes for reducing the GHG impact in the farm value chain by partnering with the farming community. The company has undertaken a unique three-pronged approach to GHG management in the farm value chain: 1) GHG inventory of the farm value chain: different sources of GHG emission in the farm value chain were identified and quantified for the baseline year, as per the IPCC guidelines for greenhouse gas inventories. The major sources of emission identified are emissions due to nitrogenous fertilizer application during seedling production and in the main field, emissions due to diesel usage by farm machinery, emissions due to fuel consumption, and emissions due to burning of crop residues. 2) Identification and implementation of technologies to reduce GHG emission: various methodologies and technologies were identified for each GHG emission source and implemented at farm level. The identified methodologies are reducing chemical fertilizer consumption on the farm through site-specific nutrient recommendations; using a sharp shovel for land preparation to reduce diesel consumption; implementing energy conservation technologies to reduce fuel requirements; and avoiding burning of crop residues by incorporating them in the main field. These methodologies were implemented at farm level, and the GHG emission was quantified to understand the reduction achieved. 3) Social and farm forestry for CO₂ sequestration: in addition, the company encouraged social and farm forestry on wastelands to convert them into green cover. The plantations are carried out with fast-growing trees, viz. Eucalyptus, Casuarina, and Subabul, at the rate of 10,000 ha of land per year. The above approach minimized a considerable amount of GHG emissions in the farm value chain, benefiting farmers, the community, and the environment as a whole. In addition, the CO₂ stock created by the social and farm forestry program has made the farm value chain environment-friendly.
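
A simplified, Tier 1-style inventory entry of the kind described in step 1 can be computed as below; the emission factors are indicative textbook defaults, not ITC's inventory values, and the farm figures are placeholders.

```python
# Hedged sketch of a Tier 1-style GHG inventory entry; all numbers are
# illustrative defaults, not official ITC or site-specific values.
GWP_N2O = 265     # 100-year global warming potential of N2O (IPCC AR5)
EF_N2O_N = 0.01   # kg N2O-N emitted per kg N applied (IPCC Tier 1 default)
EF_DIESEL = 2.68  # kg CO2 per litre of diesel (typical combustion factor)

def fertilizer_n2o_co2e(n_applied_kg):
    """Direct N2O from applied nitrogen, converted to kg CO2-equivalent."""
    n2o = n_applied_kg * EF_N2O_N * (44.0 / 28.0)  # N2O-N -> N2O mass
    return n2o * GWP_N2O

def diesel_co2(litres):
    """CO2 from diesel combustion in farm machinery (kg)."""
    return litres * EF_DIESEL

farm = {"n_applied_kg": 120.0, "diesel_litres": 85.0}  # placeholder activity data
total = fertilizer_n2o_co2e(farm["n_applied_kg"]) + diesel_co2(farm["diesel_litres"])
print(f"Estimated farm emissions: {total:.1f} kg CO2e")
```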

Keywords: CO₂ sequestration, farm value chain, greenhouse gas, ITC limited

Procedia PDF Downloads 294
219 The Role of Creative Works Dissemination Model in EU Copyright Law Modernization

Authors: Tomas Linas Šepetys

Abstract:

In online content-sharing service platforms, the ability of creators to restrict illicit use of audiovisual creative works has effectively been abolished, largely due to a specific infrastructure in which a huge volume of copyrighted audiovisual content can be made available to the public. The European Union legislator has attempted to strengthen the position of creators in the realm of online content-sharing services. Article 17 of the new Digital Single Market Directive considers online content-sharing service providers to carry out acts of communication to the public of any creative content uploaded to their platforms by users and imposes requirements to obtain licensing agreements. While such regulation intends to assert authors‘ ability to effectively control the dissemination of their creative works, it also creates a threat of parody content overblocking through automated content monitoring. Such a potentially paradoxical outcome of the EU legislator's efforts to deliver economic safeguards for creators on online content-sharing service platforms suggests that the legislator was insufficiently informed about the economic exploitation opportunities that the online content-sharing infrastructure provides to creators. The analysis conducted in this research discloses that the aforementioned irregularities in the dissemination of parody and other creative content are caused by the EU legislator's failure to assess the value extraction conditions available to parody creators on online content-sharing service platforms. Application of historical and modeling research methods reveals the existence of two creative content dissemination models and their distinct mechanisms of commercial value creation. The obligations to obtain licenses, and the liability for creative content uploaded by users, set out in Article 17 of the Digital Single Market Directive represent a technological replication of the proprietary dissemination model, in which the creator is able to restrict access to creative content outside licensed retail channels. The online content-sharing service platforms represent an open dissemination model, in which the economic potential of creative content is based on an infrastructure of unrestricted access by users and partnership with the advertising services offered by the platform. Balanced modeling of proprietary dissemination models within such infrastructure requires not only automated content monitoring measures but also additional regulatory monitoring solutions to separate parody and other types of creative content. The example of the Digital Single Market Directive proves that regulation can dictate not only the technological establishment of a proprietary dissemination model but also a partial reduction of the open dissemination model, causing an imbalance between the economic interests of creators relying on these models. The results of this research establish the informative role of the creative works dissemination model in the EU copyright law modernization process. A thorough understanding of the commercial prospects of the open dissemination model intrinsic to the structure of online content-sharing service platforms requires and encourages EU legislators to regulate safeguards for parody content dissemination. Implementing such safeguards would result in a common application of proprietary and open dissemination models on online content-sharing service platforms and balanced protection of creators‘ economic interests explicitly based on those creative content dissemination models.

Keywords: copyright law, creative works dissemination model, digital single market directive, online content-sharing services

Procedia PDF Downloads 74
218 Smart Mobility Planning Applications in Meeting the Needs of the Urbanization Growth

Authors: Caroline Atef Shoukry Tadros

Abstract:

Massive urbanization growth threatens the sustainability of cities and the quality of city life. This has raised the need for an alternate model of sustainability, so future cities need to be planned in a smarter way, with smarter mobility. Smart Mobility planning applications are solutions that use digital technologies and infrastructure advances to improve the efficiency, sustainability, and inclusiveness of urban transportation systems. They can contribute to meeting the needs of urbanization growth by addressing the challenges of traffic congestion, pollution, accessibility, and safety in cities. Some examples of Smart Mobility planning applications follow. Mobility-as-a-Service: a service that integrates different transport modes, such as public transport, shared mobility, and active mobility, into a single platform that allows users to plan, book, and pay for their trips. This can reduce reliance on private cars, optimize the use of existing infrastructure, and provide more choice and convenience for travelers; MaaS Global is a company that offers mobility-as-a-service solutions in several cities around the world. Traffic flow optimization: a solution that uses data analytics, artificial intelligence, and sensors to monitor and manage traffic conditions in real time. This can reduce congestion, emissions, and travel time, as well as improve road safety and user satisfaction; Waycare is a platform that leverages data from various sources, such as connected vehicles, mobile applications, and road cameras, to provide traffic management agencies with insights and recommendations to optimize traffic flow. Logistics optimization: a solution that uses smart algorithms, blockchain, and IoT to improve the efficiency and transparency of the delivery of goods and services in urban areas. This can reduce the costs, emissions, and delays associated with logistics, as well as enhance customer experience and trust; ShipChain is a blockchain-based platform that connects shippers, carriers, and customers and provides end-to-end visibility and traceability of shipments. Autonomous vehicles: a solution that uses advanced sensors, software, and communication systems to enable vehicles to operate without human intervention. This can improve the safety, accessibility, and productivity of transportation, as well as reduce the need for parking space and infrastructure maintenance; Waymo is a company that develops and operates autonomous vehicles for various purposes, such as ride-hailing, delivery, and trucking. These are some of the ways that Smart Mobility planning applications can contribute to meeting the needs of urbanization growth. However, there are also various opportunities and challenges related to the implementation and adoption of these solutions, concerning regulatory, ethical, social, and technical aspects. Therefore, it is important to consider the specific context and needs of each city and its stakeholders when designing and deploying Smart Mobility planning applications.

Keywords: smart mobility planning, smart mobility applications, smart mobility techniques, smart mobility tools, smart transportation, smart cities, urbanization growth, future smart cities, intelligent cities, ICT information and communications technologies, IoT internet of things, sensors, lidar, digital twin, ai artificial intelligence, AR augmented reality, VR virtual reality, robotics, cps cyber physical systems, citizens design science

Procedia PDF Downloads 73
217 Assessment of Rooftop Rainwater Harvesting in Gomti Nagar, Lucknow

Authors: Rajkumar Ghosh

Abstract:

Water scarcity is a pressing issue in urban areas, even in smart cities where efficient resource management is a priority. This scarcity is mainly caused by factors such as lifestyle changes, excessive groundwater extraction, over-usage of water, rapid urbanization, and uncontrolled population growth. In the specific case of Gomti Nagar, Lucknow, Uttar Pradesh, India, the depletion of groundwater resources is particularly severe, leading to a water imbalance and posing a significant challenge to the region's sustainable development. The aim of this study is to address the water shortage in the Gomti Nagar region by focusing on the implementation of artificial groundwater recharge methods. Specifically, the research investigates the effectiveness of rainwater collection through rooftop rainwater harvesting systems (RTRWHs) as a sustainable approach to reduce aquifer depletion and bridge the gap between groundwater recharge and extraction. The research methodology involves the utilization of RTRWHs as the main method for collecting rainwater. This approach is considered effective in managing and conserving water resources in a sustainable manner. The focus is on implementing RTRWHs in residential and commercial buildings to maximize the collection of rainwater and its subsequent utilization for various purposes in the Gomti Nagar region. The study reveals that the installation of RTRWHs in the Gomti Nagar region has a positive impact on addressing the water scarcity issue. Currently, RTRWHs capture only a small percentage (0.04%) of the total rainfall in the region. However, if RTRWHs were installed in all buildings, their influence on increasing water availability and reducing aquifer depletion would be significantly greater. The study also highlights the significant water imbalance of 24,519 ML/yr in the region, emphasizing the urgent need for sustainable water management practices. This research contributes to the theoretical understanding of sustainable water management systems in smart cities. By highlighting the effectiveness of RTRWHs in reducing aquifer depletion, it emphasizes the importance of implementing such systems in urban areas. The findings of this study can serve as a basis for policymakers, urban planners, and developers to prioritize and incentivize the installation of RTRWHs as a potential solution to the water shortage crisis. The data for this study were collected from various sources such as government reports, surveys, and existing groundwater abstraction patterns. The collected data were then analysed to assess the current water situation, the groundwater depletion rate, and the potential impact of implementing RTRWHs. Statistical analysis and modelling techniques were employed to quantify the water imbalance and evaluate the effectiveness of RTRWHs. The findings demonstrate that the implementation of RTRWHs can effectively mitigate the water scarcity crisis in Gomti Nagar. By reducing aquifer depletion and bridging the gap between groundwater recharge and extraction, RTRWHs offer a sustainable solution to the region's water scarcity challenges. The study highlights the need for widespread adoption of RTRWHs in all buildings and emphasizes the importance of integrating such systems into the urban planning and development process. By doing so, smart cities like Gomti Nagar can achieve efficient water management, ensuring a better future with improved water availability for residents.
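
The harvestable volume behind such an assessment is commonly estimated as catchment area × rainfall depth × runoff coefficient, as in the sketch below; the roof area, rainfall, and runoff coefficient shown are illustrative, not Gomti Nagar survey values.

```python
# Illustrative rooftop rainwater harvesting potential (placeholder values,
# not Gomti Nagar survey data).
def rtrwh_potential_litres(roof_area_m2, annual_rainfall_mm, runoff_coeff=0.8):
    """Harvestable volume = catchment area x rainfall depth x runoff coefficient.
    1 mm of rain on 1 m^2 yields 1 litre before losses."""
    return roof_area_m2 * annual_rainfall_mm * runoff_coeff

volume_l = rtrwh_potential_litres(roof_area_m2=120, annual_rainfall_mm=900)
print(f"~{volume_l / 1000:.1f} kL per year from one roof")  # ~86.4 kL
```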

Keywords: rooftop rainwater harvesting, rainwater, water management, aquifer

Procedia PDF Downloads 94
216 Sustainability Communications Across Multi-Stakeholder Groups: A Critical Review of the Findings from the Hospitality and Tourism Sectors

Authors: Frederica Pettit

Abstract:

Contribution: Stakeholder involvement in CSR is essential to ensuring pro-environmental attitudes and behaviours across multi-stakeholder groups. Despite increased awareness of the benefits of a collaborative approach to sustainability communications, its success is limited by difficulties engaging in active online conversations with stakeholder groups. While previous research defines the effectiveness of sustainability communications, this paper contributes to knowledge through the development of a theoretical framework that explores the processes for achieving pro-environmental attitudes and behaviours in stakeholder groups. The research also considers social media as an opportunity to communicate CSR information to all stakeholder groups. Approach: A systematic review was chosen to investigate the effectiveness of the types of sustainability communications used in the hospitality and tourism industries. The systematic review was completed using Web of Science and Scopus with the search terms “sustainab* communicat*,” “effective or effectiveness,” and “hospitality or tourism,” limiting the results to peer-reviewed research. 133 abstracts were initially read, with articles excluded for irrelevance, duplication, non-empirical design, and language. A total of 45 papers were included in the systematic review. 5 propositions were created based on the results of the systematic review, helping to develop a theoretical framework of the processes needed for companies to encourage pro-environmental behaviours across multi-stakeholder groups. Results: The theoretical framework developed in the paper sets out the processes necessary for companies to achieve pro-environmental behaviours in stakeholders. The processes for achieving pro-environmental attitudes and behaviours are stakeholder-focused, identifying the need for communications to be specific to their targeted audience. Collaborative communications that enable stakeholders to engage with CSR information and provide feedback lead to higher awareness of shared CSR visions and to pro-environmental attitudes and behaviours. These processes should also aim to improve relationships with stakeholders through transparency of CSR and through CSR strategies that match stakeholder values and ethics while prioritizing sustainability as part of their job roles. Alternatively, companies can prioritize pro-environmental behaviours through choice editing, by mainstreaming sustainability as the only option. In recent years, there has been extensive research on social media as a viable source of sustainability communications, with benefits including direct interaction with stakeholders, the ability to reinforce the authenticity of CSR activities, and the encouragement of pro-environmental behaviours. Despite this, there are challenges to implementation, including difficulties controlling stakeholder criticism, negative stakeholder influences, and comments left on social media platforms. Conclusion: A lack of engagement with CSR information is a recurring reason for the absence of pro-environmental attitudes and behaviours across stakeholder groups. Traditional CSR strategies contribute to this through their inability to engage their intended audience. Hospitality and tourism companies are improving stakeholder relationships through collaborative processes that reduce single-use plastic consumption. A collaborative approach to communications can lead to stakeholder satisfaction, leading to changes in attitudes and behaviours. Different sources of communication are accessed by different stakeholder groups, identifying the need for targeted sustainability messaging that creates benefits such as direct interaction with stakeholders, reinforcement of the authenticity of CSR activities, and engagement with sustainability information.

Keywords: hospitality, pro-environmental attitudes and behaviours, sustainability communication, social media

Procedia PDF Downloads 137
215 Effect of the Incorporation of Modified Starch on the Physicochemical Properties and Consumer Acceptance of Puff Pastry

Authors: Alejandra Castillo-Arias, Santiago Amézquita-Murcia, Golber Carvajal-Lavi, Carlos M. Zuluaga-Domínguez

Abstract:

The intricate relationship between health and nutrition has driven the food industry to seek healthier and more sustainable alternatives. A key strategy currently employed is the reduction of saturated fats and the incorporation of ingredients that align with new consumer trends. Modified starch, a polysaccharide widely used in baking, also serves as a functional ingredient to boost dietary fiber content. However, its use in puff pastry remains challenging due to the technological difficulties in achieving a buttery pastry with the strength needed to create thin, flaky layers. This study explored the potential of incorporating modified starch into puff pastry formulations. To evaluate the physicochemical properties of wheat flour mixed with modified starch, five different flour samples were prepared: T1, T2, T3, and T4, containing 10 g, 20 g, 30 g, and 40 g of modified starch per 100 g of mixture, respectively, alongside a control sample (C) with no added starch. The analysis focused on several physicochemical indices, including the Water Absorption Index (WAI), Water Solubility Index (WSI), Swelling Power (SP), and Water Retention Capacity (WRC). The puff pastry was further characterized by color measurement and sensory analysis. For the preparation of the puff pastry dough, the flour, modified starch, and salt were mixed, followed by the addition of water until a homogeneous dough was achieved. Margarine was then incorporated into the dough, which was folded and rolled multiple times to create the characteristic layers of puff pastry. The dough was then cut into equal pieces, baked at 170°C, and allowed to cool. The results indicated that the addition of modified starch did not significantly alter the specific volume or texture of the puff pastries, as reflected by the stable WAI and SP values across the samples. However, the WRC increased with higher starch content, highlighting the hydrophilic nature of the modified starch, which necessitated additional water during dough preparation. Color analysis revealed significant variations in the L* (lightness) and a* (red-green) parameters, with no consistent relationship between the modified starch treatments and the control. However, the b* (yellow-blue) parameter showed a strong correlation across most samples, except for treatment T3. Thus, modified starch affected the a* component of the CIELAB color space, influencing the reddish hue of the puff pastries. Variations in baking time due to the increased water content of the dough likely contributed to differences in lightness among the samples. Sensory analysis revealed that consumers preferred the sample with 20% starch substitution (T2), which was rated similarly to the control in terms of texture. However, treatment T3 exhibited unusual behavior in the texture analysis, and the color analysis showed that treatment T1 most closely resembled the control, indicating that starch addition is most noticeable to consumers in the visual aspect of the product. In conclusion, while the modified starch successfully maintained the desired texture and internal structure of the puff pastry, its impact on water retention and color requires careful consideration in product formulation. This study underscores the importance of balancing product quality with consumer expectations when incorporating modified starches into baked goods.
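
The indices above are usually derived from simple gravimetric ratios; one frequently used convention is sketched below. The exact protocol of this study is not specified in the abstract, so the formulas and sample weights here are illustrative assumptions.

```python
def flour_indices(dry_sample_g, sediment_g, dissolved_solids_g, water_retained_g):
    """One common gravimetric convention for WAI, WSI, SP and WRC
    (illustrative; the study's exact protocol is not stated)."""
    wai = sediment_g / dry_sample_g                      # g gel per g dry sample
    wsi = 100.0 * dissolved_solids_g / dry_sample_g      # % solids leached into supernatant
    sp = sediment_g / (dry_sample_g * (100.0 - wsi) / 100.0)
    wrc = water_retained_g / dry_sample_g                # g water retained per g sample
    return {"WAI": wai, "WSI": wsi, "SP": sp, "WRC": wrc}

# Placeholder weights for illustration only (not measured values).
print(flour_indices(dry_sample_g=2.5, sediment_g=5.1,
                    dissolved_solids_g=0.15, water_retained_g=2.2))
```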

Keywords: consumer preferences, modified starch, physicochemical properties, puff pastry

Procedia PDF Downloads 25
214 Carbon Nanotube-Based Catalyst Modification to Improve Proton Exchange Membrane Fuel Cell Interlayer Interactions

Authors: Ling Ai, Ziyu Zhao, Zeyu Zhou, Xiaochen Yang, Heng Zhai, Stuart Holmes

Abstract:

Optimizing the catalyst layer structure is crucial for enhancing the performance of proton exchange membrane fuel cells (PEMFCs) with low platinum (Pt) loading. Recent work has focused on the utilization, durability, and site activity of Pt particles on supports, and performance enhancement has been achieved by loading Pt onto porous supports of different morphologies, such as graphene, carbon fiber, and carbon black. Some schemes have also incorporated cost considerations to achieve lower Pt loading. However, the design of the catalyst layer (CL) structure in the membrane electrode assembly (MEA) must consider the interactions between the layers. Addressing the crucial aspects of water management, low contact resistance, and the establishment of an effective three-phase boundary for the MEA, multi-walled carbon nanotubes (MWCNTs) are a promising CL support due to their intrinsically high hydrophobicity, high axial electrical conductivity, and potential for ordered alignment. However, the drawbacks of MWCNTs, such as strong agglomeration, chemical inertness of the wall surface, and unopened ends, are unfavorable for Pt nanoparticle loading, which is detrimental to MEA processing and leads to inhomogeneous CL surfaces. This further deteriorates the utilization of Pt and increases the contact resistance. Robust chemical oxidation or nitrogen doping can introduce polar functional groups onto the surface of MWCNTs, facilitating the creation of open tube ends and inducing defects in tube walls. This improves dispersibility and loading capacity but reduces length and conductivity. Consequently, a trade-off exists between maintaining the intrinsic properties and the degree of functionalization of MWCNTs. In this work, MWCNTs were modified based on the operational requirements of the MEA from the viewpoint of interlayer interactions, including the search for the optimal degree of oxidation, N-doping, and micro-arrangement. MWCNTs were functionalized by oxidation, N-doping, and micro-alignment to achieve lower contact resistance between the CL and the proton exchange membrane (PEM), better hydrophobicity, and enhanced performance. Furthermore, this work expects to construct a more continuously distributed three-phase boundary by aligning MWCNTs to form a locally ordered structure, which is essential for the efficient utilization of Pt active sites. Unlike other chemical oxidation schemes that use a HNO3:H2SO4 (1:3) acid mixture to strongly oxidize MWCNTs, this scheme adopted pure HNO3 to partially oxidize MWCNTs at a lower reflux temperature (80 °C) and shorter treatment times (0 to 10 h) to preserve the morphology and intrinsic conductivity of the MWCNTs. A maximum power density of 979.81 mW cm⁻² was achieved with Pt loaded on MWCNTs oxidized for 6 h (Pt-MWCNT6h). This represents a 59.53% improvement over the commercial Pt/C catalyst (614.17 mW cm⁻²). In addition, owing to the stronger electrical conductivity, the charge transfer resistance of Pt-MWCNT6h in the electrochemical impedance spectroscopy (EIS) test was 0.09 Ω cm², which was 48.86% lower than that of Pt/C. This study will discuss the developed catalysts and their efficacy in a working fuel cell system. This research validates the impact of low-degree functionalization of MWCNTs on PEMFC performance, which simplifies the preparation of the CL and contributes to the widespread commercial application of PEMFCs.
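
As a quick consistency check, the relative gains quoted above follow directly from the stated raw values; the short sketch below reproduces the arithmetic (the power densities and the resistance are the values given in the abstract, while the implied Pt/C charge transfer resistance is derived, not reported).

```python
# Sanity check of the reported performance gains (values taken from the abstract).
p_mwcnt = 979.81   # mW cm^-2, Pt-MWCNT6h peak power density
p_ptc = 614.17     # mW cm^-2, commercial Pt/C baseline
improvement = 100.0 * (p_mwcnt - p_ptc) / p_ptc
print(f"Peak power improvement: {improvement:.2f} %")   # ~59.53 %

r_mwcnt = 0.09     # Ohm cm^2, charge transfer resistance of Pt-MWCNT6h
reduction = 48.86  # % lower than Pt/C, as reported
r_ptc = r_mwcnt / (1 - reduction / 100.0)               # implied baseline, not stated in the abstract
print(f"Implied Pt/C charge transfer resistance: {r_ptc:.3f} Ohm cm^2")  # ~0.176
```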

Keywords: carbon nanotubes, electrocatalyst, membrane electrode assembly, proton exchange membrane fuel cell

Procedia PDF Downloads 68
213 Amyloid Angiopathy and Golf: Two Opposite but Close Worlds

Authors: Andrea Bertocchi, Alessio Barnaba Di Fonzo, Davide Talarico, Simone Rivaroli, Jeff Konin

Abstract:

The patient is an 89-year-old male (180 cm/85 kg), a retired notary and former golfer with no past medical history. He describes a progressive ideomotor slowdown over 14 months. The disorder is characterized by short-term memory deficits and, for some months, also by broad-based, unstable walking with skidding and a risk of falling at directional changes, as well as urinary urgency. There were also episodes of aggression towards his wife and staff. At the time, the patient was taking no prescribed medications. He has difficulty eating, dressing, and some problems with personal hygiene. At the initial visit, the patient was alert, cooperative, and performed simple tasks; however, he had a hearing impairment, slowed spontaneous speech, and an amnestic deficit on short-story recall. Ideomotor apraxia was not present. He scored 20 points on the MMSE. Regarding motor function, he had deficits graded 3-/5 on the Medical Research Council (MRC) scale in both lower limbs and required maximum assistance to move from sitting to standing, with premature fatigue. He had been unable to walk for about 1 month. Tremors and hypertonia were absent. The BERG could not be administered, and the BARTHEL score was 45/100. Amyloid angiopathy was suspected and then confirmed at the neurological examination. The rehabilitation objectives were the recovery of mobility and strengthening of the upper and lower extremities (UE/LE), especially the legs, for the recovery of standing and walking. The cognitive aspect was also an essential factor for the patient's recovery. The literature does not include specific studies on motor and cognitive rehabilitation for this pathology. Because the patient struggled to keep his attention on exercise, tended to be uninterested, and constantly fell asleep, we used golf-specific gestures to stimulate his mind and obtain results, since he retained memory recall of golf-related movement. We worked for 4 months at a frequency of 3 sessions per week; every session lasted 45 minutes. After 4 months of work, the patient walked independently with a stick for about 120 meters without stopping, scored MRC 4/5 in both lower limbs, and performed postural steps independently under supervision. BERG 36/56; BARTHEL 65/100. The 6-Minute Walking Test (6MWT) was not measurable at the beginning; he now covers 151.5 m, with a Numeric Rating Scale of 4 at the start of the test and 7 at the end. Cognitively, he no longer has episodes of aggression, although the short-term memory and concentration deficits remain. Amyloid angiopathy is a mix of motor and cognitive disorders. It is worth considering that cerebral amyloid angiopathy manifests with functional deficits due to strokes and bleeds and, as such, has an important rehabilitation indication, since classical stroke is not associated with amyloidosis. Exploring motor patterns learned at a young age, which remained in the patient's implicit and explicit memory, allowed us to set up effective work and obtain significant results in the short-to-middle term. Many studies remain to be done on this pathology and its rehabilitation, but the importance of the cognitive sphere applied to the motor sphere could represent an important starting point.

Keywords: amyloid angiopathy, cognitive rehabilitation, golf, motor disorder

Procedia PDF Downloads 136
212 International Solar Alliance: A Case for Indian Solar Diplomacy

Authors: Swadha Singh

Abstract:

The International Solar Alliance (ISA) is the foremost treaty-based global organization concerned with tapping the potential of sun-abundant nations between the Tropics of Cancer and Capricorn and enabling cooperation among them. As a founding member of the International Solar Alliance, India exhibits its positioning as an upcoming leader in clean energy. India has set ambitious goals and targets to expand the share of solar in its energy mix and is playing a proactive role both at the regional and global levels. ISA aims to serve multiple goals: bringing about large-scale commercialization of solar power, boosting domestic manufacturing, and leveraging solar diplomacy in African countries, among others. Against this backdrop, this paper attempts to examine the ways in which ISA, as an intergovernmental organization under Indian leadership, can leverage the cause of clean energy (solar) diplomacy and effectively shape partnerships and collaborations with other developing countries in terms of sharing solar technology, capacity building, risk mitigation, mobilizing financial investment, and providing an aggregate market. A more specific focus of ISA is on developing countries, which, in the absence of a collective, are constrained by technology and capital scarcity despite being naturally endowed with solar resources. Solar-rich but finance-constrained economies face political risk, foreign exchange risk, and off-taker risk. Scholars argue that aligning India’s climate change discourse and growth prospects in its engagements, collaborations, and partnerships at the bilateral, multilateral, and regional levels can help promote trade, attract investment, and promote a resilient energy transition both in India and in partner countries. For developing countries, coming together in an action-oriented way on issues of climate and clean energy is particularly important, since it is developing and underdeveloped countries that face multiple, coalescing challenges such as the adverse impact of climate change, uneven and low access to reliable energy, and pressing employment needs. Investing in green recovery is widely agreed to be an assured way to create resilient value chains, create sustainable livelihoods, and help mitigate climate threats. If India is able to ‘green its growth’ process, it holds the potential to emerge as a climate leader internationally. It can use its experience in the renewable sector to guide other developing countries in balancing the similar, multiple objectives of development, energy security, and sustainability. The challenges underlying solar expansion in India have lessons to offer other developing countries, giving India an opportunity to assume a leadership role in solar diplomacy and expand its geopolitical influence through intergovernmental organizations such as ISA. It is noted that India has limited capacity to directly provide financial support and is not a leading manufacturer of cheap solar equipment, as China is; however, India can nonetheless leverage its large domestic market to scale up the commercialization of solar power and offer insights and lessons to similarly placed solar-abundant countries. The paper examines the potential of and limits placed on India’s solar diplomacy.

Keywords: climate diplomacy, energy security, solar diplomacy, renewable energy

Procedia PDF Downloads 118
211 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
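
As a purely illustrative sketch of this kind of learning-based augmentation (not the authors' actual architecture or loss), the following PyTorch snippet wires a small fully convolutional network that maps hybrid-polarimetric input channels to full-polarimetric covariance channels and trains it with a composite loss; the channel counts, layer sizes, and loss weights are assumptions.

```python
# Illustrative PyTorch sketch of CNN-based polarimetric augmentation: a small fully
# convolutional network maps 2-channel hybrid-polarimetric patches to real-valued
# full-polarimetric covariance channels, trained with a loss combining several terms.
# Architecture, channel counts and loss weights are assumptions, not the paper's design.
import torch
import torch.nn as nn

class PolAugmentNet(nn.Module):
    def __init__(self, in_ch=2, out_ch=9):          # e.g. 9 real-valued covariance components
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, alpha=1.0, beta=0.1):
    """Pixel-wise fidelity plus a term on total backscattered power (span proxy)."""
    fidelity = nn.functional.l1_loss(pred, target)
    span_pred = pred.sum(dim=1)          # crude proxy for span from the predicted channels
    span_true = target.sum(dim=1)
    span_term = nn.functional.l1_loss(span_pred, span_true)
    return alpha * fidelity + beta * span_term

# Toy training step on random tensors, just to show the wiring
model = PolAugmentNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(4, 2, 64, 64)            # hybrid-pol input patches
y = torch.randn(4, 9, 64, 64)            # full-pol target patches
loss = composite_loss(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```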

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 66
210 Foregrounding Events in Modern Sundanese: The Pragmatics of Particle-to-Active Voice Marking Shift

Authors: Rama Munajat

Abstract:

Discourse information levels may be viewed from either a background-foreground distinction or a multi-level perspective, and cross-linguistic studies in this area suggest that each information level is marked by a specific linguistic device. In this sense, Sundanese, spoken in Indonesia’s West Java Province, further differentiates background and foreground information into ordinary and significant types. This paper will report an ongoing shift from particle to active-voice marking in the way Sundanese signals foregrounding events. The shift relates to decades of contact with Bahasa Indonesia (Indonesia’s official language) and to the linguistic compatibility between the two surface marking strategies. The representative data analyzed include three groups of short stories in both Sundanese and Bahasa Indonesia (Indonesian) published in three periods: before 1945, 1965-2006, and 2016-2019. In the first group of Sundanese data, forward-moving events dominantly appear in particle KA (Kecap Anteuran, word-accompanying) constructions, where KA represents different particles that co-occur with a special group of verbs. The second group, however, shows that the foregrounded events are more frequently described in active-voice forms with a subject-predicate (SP) order. Subsequently, the third offers stronger evidence for the use of the SP structure. As for the Indonesian data, the foregrounding events in the first group occur in verb-initial and passive-voice constructions, while in the second and third, the events more frequently appear in active-voice structures (subject-predicate sequence). The marking shift above suggests a structural influence from Indonesian, stemming from generational differences among the authors of the Sundanese short stories, particularly related to their education and language backgrounds. The first group of short stories – published before 1945, before Indonesia's independence from the Dutch – were written by native speakers of Sundanese who spoke Indonesian as a foreign language and went through the Dutch education system. The second group of authors, on the other hand, represents a generation of Sundanese native speakers who spoke Indonesian as a second language. Finally, the third group consists of authors who are bilingual speakers of both Sundanese and Indonesian. The data suggest that the last two groups of authors completed the Indonesian education system. Accordingly, the use of subject-predicate sequences to denote foregrounding events began to appear more frequently in the second group and then became more dominant in the third. The coded data also indicate that the cohesion, coherence, and pragmatic purposes of Particle KA constructions are intact in their respective active-voice counterparts. For instance, the foregrounding events in Particle KA constructions occur in Sentence-initial KA and Pre-verbal KA forms, whereas those in the active voice are described in Subject-Predicate (SP) and Zero-Subject active-voice patterns. Cross-language data further demonstrate that the Sentence-initial KA and the SP active-voice structures each contain an overt noun phrase (NP) co-referential with one of the entities introduced in a preceding context. Similarly, the Pre-verbal KA and Zero-Subject active-voice patterns have a deleted noun phrase unambiguously referable to the single entity previously mentioned. The presence and absence of an NP reflect a pragmatic strategy of placing prominence on topic/given and comment/new information, respectively.

Keywords: discourse analysis, foregrounding marking, pragmatics, language contact

Procedia PDF Downloads 137
209 Ecological Planning Method of Reclamation Area Based on Ecological Management of Spartina Alterniflora: A Case Study of Xihu Harbor in Xiangshan County

Authors: Dong Yue, Hua Chen

Abstract:

The study region, Xihu Harbor in Xiangshan County, Ningbo City, is located on the central coast of Zhejiang Province. To address wave dissipation, the Ningbo government first introduced Spartina alterniflora in the 1980s. In the 1990s, S. alterniflora spread so rapidly that a ‘grassland’ in the sea has formed, and it has become the most important invasive plant of China’s coastal tidal flats. Although S. alterniflora has some ecological and economic functions, it has also brought a series of hazards. Ecologically, it affects many aspects, including biomass and biodiversity, hydrodynamic forces and sedimentation processes, nutrient cycling of the tidal flat, and the succession sequence of soils and plants. In engineering terms, it causes problems of poor drainage and channel blocking. Economically, the hazard is mainly reflected in the threat to the aquaculture industry. The purpose of this study is to explore an ecological, feasible, and economical way to manage Spartina alterniflora and use the land formed by it, taking Xihu Harbor in Xiangshan County as a case. Comparison, mathematical modeling, and qualitative and quantitative analysis are used to conduct the study. The main outcomes are as follows. A series of S. alterniflora management methods, including the combination of mechanical cutting and hydraulic reclamation, waterlogging, herbicide, and biological substitution, were compared from three standpoints – ecology, engineering, and economy. It is inferred that the combination of mechanical cutting and hydraulic reclamation ranks among the best S. alterniflora management methods. This combination means using large-scale mechanical equipment, such as a large screw seagoing dredger, to excavate the S. alterniflora together with its roots and mud. The mixture of mud and grass is then transported by pipelines and blown onto the nearby coastal tidal zone, where it cushions the silt of the tidal zone to form land. However, as man-made land by the coast, the reclamation area has high ecological sensitivity and faces a high possibility of flooding. Therefore, the reclamation area has many siting and design requirements, including ones on location, specific scope, water surface ratio, direction of the main watercourse, site of the water gate, and the ratio of ecological land to urban construction land. These requirements all became an important basis when the planning was being made. The water system planning, green space system planning, road structure, and land use all need to accommodate the ecological requirements. Besides, the profit from the formed land is the management project’s source of funding, so how to utilize the land efficiently is another consideration in the planning. It is concluded that, for managing a large area of S. alterniflora, the combination of mechanical cutting and hydraulic reclamation is an ecological, feasible, and economical method. The planning of the reclamation area should fully respect the natural environment and possible disasters. Planning that makes land use efficient, reasonable, and ecological will then promote the development of the area’s urban construction.

Keywords: ecological management, ecological planning method, reclamation area, Spartina alterniflora, Xihu Harbor

Procedia PDF Downloads 308
208 Transcriptional Differences in B cell Subpopulations over the Course of Preclinical Autoimmunity Development

Authors: Aleksandra Bylinska, Samantha Slight-Webb, Kevin Thomas, Miles Smith, Susan Macwana, Nicolas Dominguez, Eliza Chakravarty, Joan T. Merrill, Judith A. James, Joel M. Guthridge

Abstract:

Background: Systemic Lupus Erythematosus (SLE) is an interferon-related autoimmune disease characterized by B cell dysfunction. One of its main hallmarks is a loss of tolerance to self-antigens, leading to increased levels of autoantibodies against nuclear components (ANAs). However, up to 20% of healthy ANA+ individuals will not develop clinical illness. SLE is more prevalent among women and minority populations (African American, Asian American, and Hispanic). Moreover, African Americans have a stronger interferon (IFN) signature and develop more severe symptoms. The exact mechanisms involved in ethnicity-dependent B cell dysregulation and the progression of autoimmune disease from ANA+ healthy individuals to clinical disease remain unclear. Methods: Peripheral blood mononuclear cells (PBMCs) from African American (AA) and European American (EA) ANA- (n=12), ANA+ (n=12) and SLE (n=12) individuals were assessed by multimodal scRNA-Seq/CITE-Seq methods to examine differential gene signatures in specific B cell subsets. Library preparation was done with a 10X Genomics Chromium according to established protocols, and libraries were sequenced on an Illumina NextSeq. The data were further analyzed for distinct cluster identification and differential gene signatures with the Seurat package in R, and pathway analysis was performed using Ingenuity Pathway Analysis (IPA). Results: Comparing all subjects, 14 distinct B cell clusters were identified using a community detection algorithm and visualized with Uniform Manifold Approximation and Projection (UMAP). The proportion of each of those clusters varied by disease status and ethnicity. Transitional B cells trended higher in ANA+ healthy individuals, especially in AA. A ribonucleoprotein-high population (RNP-Hi; elevated HNRNPH1, heterogeneous nuclear ribonucleoprotein) of proliferating naïve B cells was more prevalent in SLE patients, specifically in EA. An interferon-induced protein-high population (IFIT-Hi) of naïve B cells was increased in EA ANA- individuals. The proportions of memory B cell and plasma cell clusters tend to be expanded in SLE patients. As anticipated, we observed a higher signature of cytokine-related pathways, especially interferon, in SLE individuals. Pathway analysis among AA individuals revealed an NRF2-mediated oxidative stress response signature in the transitional B cell cluster, not seen in EA individuals. TNFR1/2 and Sirtuin signaling pathway genes were higher in AA IFIT-Hi naïve B cells, whereas they were not detected in EA individuals. Interferon signaling was observed in B cells of both ethnicities. Oxidative phosphorylation was found in age-related B cells (ABCs) for both ethnicities, whereas Death Receptor signaling was found only in EA patients in these cells. Interferon-related transcription factors were elevated in ABCs and IFIT-Hi naïve B cells in SLE subjects of both ethnicities. Conclusions: ANA+ healthy individuals have altered gene expression pathways in B cells that might drive apoptosis and subsequent clinical autoimmune pathogenesis. Increases in certain regulatory pathways may delay progression to SLE. Further, AA individuals have more elevated activation pathways that may make them more susceptible to SLE.
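
The clustering and visualization workflow described above was carried out with Seurat in R; purely as an illustration of the analogous steps in Python, the sketch below uses Scanpy for normalization, community detection, UMAP, and per-group cluster proportions. The input file, resolution, and grouping column are placeholders, not the study's actual parameters.

```python
# Illustrative Scanpy (Python) analogue of the Seurat workflow described above:
# normalization, neighborhood graph, Leiden community detection, UMAP, and
# per-group cluster proportions. File name, resolution and the "group" column
# (e.g. ANA-/ANA+/SLE by ancestry) are placeholders.
import scanpy as sc
import pandas as pd

adata = sc.read_h5ad("pbmc_bcells.h5ad")          # hypothetical pre-filtered B cell object
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)               # community detection -> cluster labels
sc.tl.umap(adata)                                 # 2-D embedding for visualization

# Cluster proportions per disease/ancestry group
props = (pd.crosstab(adata.obs["group"], adata.obs["leiden"], normalize="index") * 100).round(1)
print(props)

# Marker genes per cluster (e.g. to identify transitional, RNP-Hi, IFIT-Hi subsets)
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
```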

Keywords:

Procedia PDF Downloads 174
207 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection

Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten

Abstract:

Purpose: Applying solder paste to printed circuit boards (PCB) with stencils has been the method of choice over the past years. A new method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge. This can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in and real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process by immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips were employed to capture images of the printed circuit board under four different illuminations (white, red, green, and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher-quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds which divide the main components are obtained from the multimodal histogram using three probability density functions. Determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using the different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds by analyzing the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. Analyzing the overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.
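
A minimal sketch of this processing chain is shown below (background estimation by morphological opening, shading correction, histogram-based thresholding, cleanup, and Dice scoring). It substitutes a multi-Otsu threshold for the three-PDF histogram fit described above, and the file names and structuring-element sizes are illustrative assumptions.

```python
# Minimal sketch of the segmentation pipeline: background estimation by morphological
# opening, shading correction, histogram-based thresholding, a cleanup opening, and a
# Dice score against a manual mask. Multi-Otsu stands in for the three-PDF histogram
# fit described in the abstract; file names and element sizes are assumptions.
import numpy as np
from skimage import io, img_as_float
from skimage.morphology import opening, binary_opening, disk
from skimage.filters import threshold_multiotsu, unsharp_mask

img = img_as_float(io.imread("pcb_green_channel.png", as_gray=True))  # hypothetical input

background = opening(img, disk(25))          # estimate slowly varying illumination
corrected = np.clip(img - background, 0, 1)  # remove the shading gradient
sharp = unsharp_mask(corrected, radius=2, amount=1.0)

# Split the corrected image into background / reflections / solder joints.
thresholds = threshold_multiotsu(sharp, classes=3)
solder_mask = sharp > thresholds[1]                  # brightest class assumed to be solder paste
solder_mask = binary_opening(solder_mask, disk(2))   # remove small error regions at edges

def dice(a, b):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

manual = io.imread("manual_mask.png", as_gray=True) > 0  # hypothetical reference mask
print(f"Dice coefficient: {dice(solder_mask, manual):.2f}")
```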

Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection

Procedia PDF Downloads 334
206 Mineralized Nanoparticles as a Contrast Agent for Ultrasound and Magnetic Resonance Imaging

Authors: Jae Won Lee, Kyung Hyun Min, Hong Jae Lee, Sang Cheon Lee

Abstract:

To date, imaging techniques have attracted much attention in medicine because the detection of diseases at an early stage provides greater opportunities for successful treatment. Consequently, over the past few decades, diverse imaging modalities including magnetic resonance (MR), positron emission tomography, computed tomography, and ultrasound (US) have been developed and applied widely in the field of clinical diagnosis. However, each of the above-mentioned imaging modalities possesses unique strengths and intrinsic weaknesses, which limit their abilities to provide accurate information. Therefore, multimodal imaging systems may be a solution that can provide improved diagnostic performance. Among the current medical imaging modalities, US is a widely available real-time imaging modality. It has many advantages, including safety, low cost, and easy access for patients. However, its low spatial resolution precludes accurate discrimination of diseased regions such as cancer sites. In contrast, MR has no tissue-penetrating limit and can provide images possessing exquisite soft tissue contrast and high spatial resolution. However, it cannot offer real-time images and needs a comparatively long imaging time. The characteristics of these imaging modalities may be considered complementary, and the modalities have been frequently combined for the clinical diagnostic process. Biominerals such as calcium carbonate (CaCO3) and calcium phosphate (CaP) exhibit pH-dependent dissolution behavior. They demonstrate pH-controlled drug release due to the dissolution of minerals in acidic pH conditions. In particular, the application of this mineralization technique to a US contrast agent has been reported recently. The CaCO3 mineral reacts with acids and decomposes to generate carbon dioxide (CO2) gas in an acidic environment. These gas-generating mineralized nanoparticles generated CO2 bubbles in the acidic environment of the tumor, thereby allowing for strong echogenic US imaging of tumor tissues. On the basis of this previous work, it was hypothesized that the loading of MR contrast agents into the CaCO3 mineralized nanoparticles may be a novel strategy for designing a contrast agent for dual imaging. Herein, CaCO3 mineralized nanoparticles capable of generating CO2 bubbles to trigger the release of entrapped MR contrast agents in response to tumoral acidic pH were developed for the purposes of US and MR dual-modality imaging of tumors. Gd2O3 nanoparticles were selected as the MR contrast agent. A key strategy employed in this study was to prepare Gd2O3 nanoparticle-loaded mineralized nanoparticles (Gd2O3-MNPs) using block copolymer-templated CaCO3 mineralization in the presence of calcium cations (Ca2+), carbonate anions (CO32-), and positively charged Gd2O3 nanoparticles. The CaCO3 core was considered suitable because it may effectively shield Gd2O3 nanoparticles from water molecules in the blood (pH 7.4) before decomposing to generate CO2 gas, triggering the release of Gd2O3 nanoparticles in tumor tissues (pH 6.4~7.4). The kinetics of CaCO3 dissolution and CO2 generation from the Gd2O3-MNPs were examined as a function of pH; in addition, pH-dependent in vitro magnetic relaxation and echogenic properties were estimated to demonstrate the potential of the particles for tumor-specific US and MR imaging.

Keywords: calcium carbonate, mineralization, ultrasound imaging, magnetic resonance imaging

Procedia PDF Downloads 235
205 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also attracted great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Ångström Exponent (AAE). In this approach, the absorption coefficient is deduced from transmission measurements on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach toward the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, they also open up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed for ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064 nm)/AOC(266 nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood burning aerosols. The method offers the possibility of replacing laborious chemical analysis with the simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous sources of emission.
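
For illustration, the sketch below performs the standard two-wavelength, two-component split of measured absorption into traffic and wood burning contributions under an assumed power-law spectral dependence. The wavelengths mirror those mentioned above, but the absorption values and the AAEff/AAEwb exponents are hypothetical; in the study these exponents are derived from the AOC and size distribution ratio correlations.

```python
# Minimal sketch of a two-component (traffic vs. wood burning) split of measured
# absorption at two wavelengths, assuming power-law spectral dependence b ~ λ^(-AAE).
# The AAE_ff / AAE_wb values and absorption coefficients below are illustrative only.
import numpy as np

def two_component_split(b_abs_short, b_abs_long, lam_short, lam_long, aae_ff, aae_wb):
    """Solve b(λ) = b_ff(λ) + b_wb(λ) for the two components at the long wavelength."""
    r_ff = (lam_short / lam_long) ** (-aae_ff)   # spectral scaling of the traffic component
    r_wb = (lam_short / lam_long) ** (-aae_wb)   # spectral scaling of the wood-burning component
    # b_short = r_ff*b_ff_long + r_wb*b_wb_long ;  b_long = b_ff_long + b_wb_long
    A = np.array([[r_ff, r_wb], [1.0, 1.0]])
    b_ff_long, b_wb_long = np.linalg.solve(A, np.array([b_abs_short, b_abs_long]))
    return b_ff_long, b_wb_long

# Hypothetical absorption coefficients (Mm^-1) at 266 nm and 1064 nm
b_ff, b_wb = two_component_split(b_abs_short=120.0, b_abs_long=18.0,
                                 lam_short=266.0, lam_long=1064.0,
                                 aae_ff=1.0, aae_wb=2.0)
print(f"Traffic fraction of absorption at 1064 nm: {b_ff / (b_ff + b_wb):.2f}")
```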

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 226
204 Resveratrol Ameliorates Benzo(a)Pyrene Induced Testicular Dysfunction and Apoptosis: Involvement of p38 MAPK/ATF2/iNOS Signaling

Authors: Kuladip Jana, Bhaswati Banerjee, Parimal C. Sen

Abstract:

Benzo(a)pyrene [B(a)P], an environmental toxicant present mostly in cigarette smoke and car exhaust, is an aryl hydrocarbon receptor (AhR) ligand that exerts its toxic effects on both male and female reproductive systems and is carcinogenic in skin, prostate, ovary, lung, and mammary glands. Our study was focused on elucidating the molecular mechanism of B(a)P-induced male reproductive toxicity and its prevention with phytochemicals like resveratrol. In this study, the effect of B(a)P at different doses (0.1, 0.25, 0.5, 1 and 5 mg/kg body weight) on the male reproductive system of the Wistar rat was studied. A significant decrease in cauda epididymal sperm count and motility, along with the presence of sperm head abnormalities and altered epididymal and testicular histology, was documented following B(a)P treatment. B(a)P treatment resulted in apoptotic sperm cells, as observed by TUNEL and Annexin V-PI assays, with increased reactive oxygen species (ROS) and altered sperm mitochondrial membrane potential (ΔΨm), together with a simultaneous decrease in the activity of antioxidant enzymes and GSH status. TUNEL-positive apoptotic cells were also observed in the testis as well as in isolated germ and Leydig cells following B(a)P exposure. Western blot analysis revealed activation of p38 mitogen-activated protein kinase (p38 MAPK), cytosolic translocation of cytochrome c, upregulation of Bax and inducible nitric oxide synthase (iNOS) with cleavage of poly(ADP-ribose) polymerase (PARP), and downregulation of Bcl-2 in the testis upon B(a)P treatment. The protein and mRNA levels of key testicular steroidogenesis regulatory proteins, such as steroidogenic acute regulatory protein (StAR), cytochrome P450 IIA1 (CYPIIA1), 3β hydroxysteroid dehydrogenase (3β HSD), and 17β hydroxysteroid dehydrogenase (17β HSD), showed a significant dose-dependent decrease, while the expression of cytochrome P450 1A1 (CYP1A1), aryl hydrocarbon receptor (AhR), active caspase-9 and caspase-3 increased following B(a)P exposure. We conclude that exposure to benzo(a)pyrene caused testicular gametogenic and steroidogenic disorders by induction of oxidative stress, inhibition of StAR and other steroidogenic enzymes, and activation of p38 MAPK, and initiated caspase-3-mediated germ and Leydig cell apoptosis. Next, we investigated the role of resveratrol in B(a)P-induced male reproductive toxicity. Our study highlighted that resveratrol co-treatment with B(a)P maintained testicular redox potential, increased serum testosterone level, and prevented steroidogenic dysfunction with enhanced expression of major testicular steroidogenic proteins (CYPIIA1, StAR, 3β HSD, 17β HSD) relative to treatment with B(a)P only. Resveratrol suppressed B(a)P-induced testicular activation of p38 MAPK, ATF2, and iNOS and ROS production, as well as cytosolic translocation of cytochrome c and caspase-3 activation, thereby preventing oxidative stress in the testis and inhibiting apoptosis. Resveratrol co-treatment also decreased the B(a)P-induced AhR protein level, its nuclear translocation, and subsequent CYP1A1 promoter activation, thereby decreasing protein and mRNA levels of testicular cytochrome P450 1A1 (CYP1A1) and preventing BPDE-DNA adduct formation. Our findings cumulatively suggest that resveratrol prevents activation of B(a)P by modulating the transcriptional regulation of CYP1A1 and, acting as an antioxidant, prevents B(a)P-induced oxidative stress and testicular apoptosis.

Keywords: benzo(a)pyrene, resveratrol, testis, apoptosis, cytochrome P450 1A1 (CYP1A1), aryl hydrocarbon receptor (AhR), p38 MAPK/ATF2/iNOS

Procedia PDF Downloads 230
203 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring . SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 89
202 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure

Authors: Sara Saboonian, Pierre Filion

Abstract:

The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which provides environmental health and human well-being by preserving and restoring ecosystem services. This research studies incompatibilities and the possibility of amalgamating the two alternatives in an attempt to develop a comprehensive alternative to the suburban model, one that advocates density, multi-functionality, and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centers, where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. However, these three centers are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents’ preferences for alternative models, and interviewing officials who deal with the local planning for the centers. Moreover, the research draws on the debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centers that accommodate green infrastructure while adhering to intensification principles. First, the dominant status of intensification and the obstacles confronting intensification have monopolized the planners’ concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted social-economic benefits of green infrastructure reduces residents’ participation. Moreover, the results from the research provide insight into the predominating urbanization theories, New Urbanism and Landscape/Ecological Urbanism. In order to understand the political, planning, and ecological dynamics of such blending, dexterous, context-specific planning is required. Findings suggest the influence of the following factors on amalgamating intensification and green infrastructure. Initially, producing ecosystem-services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for implementation programs. This knowledge base should be communicated effectively to the different urban stakeholders. Moreover, due to the limited greenfields in intensified areas, the spatial distribution and development of multi-level corridors, such as pedestrian-hospitable settings and transportation networks, alongside green infrastructure measures are required. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.

Keywords: ecosystem services, green infrastructure, intensification, planning

Procedia PDF Downloads 355
201 Mobi-DiQ: A Pervasive Sensing System for Delirium Risk Assessment in Intensive Care Unit

Authors: Subhash Nerella, Ziyuan Guan, Azra Bihorac, Parisa Rashidi

Abstract:

Intensive care units (ICUs) provide care to critically ill patients in severe and life-threatening conditions. However, patient monitoring in the ICU is limited by the time and resource constraints imposed on healthcare providers. Many critical care indices, such as mobility, are still manually assessed, which can be subjective, prone to human error, and lacking in granularity. Other important aspects, such as environmental factors, are not monitored at all. For example, critically ill patients often experience circadian disruptions due to the absence of effective environmental “timekeepers” such as the light/dark cycle and the systemic effect of acute illness on chronobiologic markers. Although the occurrence of delirium is associated with circadian disruption risk factors, these factors are not routinely monitored in the ICU. Hence, there is a critical unmet need to develop systems for precise and real-time assessment through novel enabling technologies. We have developed the mobility and circadian disruption quantification system (Mobi-DiQ) by augmenting biomarker and clinical data with pervasive sensing data to generate cues related to mobility, nightly disruptions, and light and noise exposure. We hypothesize that Mobi-DiQ can provide accurate mobility and circadian cues that correlate with bedside clinical mobility assessments and circadian biomarkers, which is ultimately important for delirium risk assessment and prevention. The collected multimodal dataset consists of depth images, electromyography (EMG) data, patient extremity movement captured by accelerometers, ambient light levels, sound pressure level (SPL), and indoor air quality measured by volatile organic compounds and equivalent CO₂ concentration. For delirium risk assessment, the system recognizes mobility cues (axial body movement features and body key points) and circadian cues, including nightly disruptions, ambient SPL, and light intensity, as well as other environmental factors such as indoor air quality. The Mobi-DiQ system consists of three major components: the pervasive sensing system, a data storage and analysis server, and a data annotation system. For data collection, six local pervasive sensing systems were deployed, each including a local computer and sensors. A video recording tool with a graphical user interface (GUI), developed in Python, was used to capture depth image frames for analyzing patient mobility. All sensor data are encrypted and then automatically uploaded to the Mobi-DiQ server through a secured VPN connection. Several data pipelines were developed to automate data transfer, curation, and preparation for annotation and model training. The data curation and post-processing are performed on the server. A custom secure annotation tool with a GUI was developed to annotate depth activity data. The annotation tool is linked to a MongoDB database to record the annotations and to provide summarization. Docker containers are also utilized to manage services and pipelines running on the server in an isolated manner. The processed clinical data and annotations are used to train and develop real-time pervasive sensing systems to augment clinical decision-making and promote targeted interventions. In the future, we intend to evaluate our system in a clinical implementation trial, as well as to refine and validate it using other data sources, including neurological data obtained through continuous electroencephalography (EEG).
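
As an illustration of how simple circadian cues can be derived from the pervasive sensing streams described above, the sketch below computes nightly light and noise disruption summaries from time-stamped lux and SPL readings. The thresholds, column names, and night window are hypothetical choices, not the Mobi-DiQ system's actual parameters.

```python
# Illustrative derivation of nightly-disruption cues from time-stamped ambient light
# and sound-pressure-level readings. Thresholds, column names and the night window
# are hypothetical and chosen only for demonstration.
import pandas as pd

def nightly_disruption_cues(df, lux_thresh=50.0, spl_thresh=60.0,
                            night_start="22:00", night_end="06:00"):
    """df: DataFrame indexed by timestamp with 'lux' and 'spl_db' columns."""
    night = df.between_time(night_start, night_end)
    light_events = night["lux"] > lux_thresh
    noise_events = night["spl_db"] > spl_thresh
    return {
        "minutes_light_on": int(light_events.resample("1min").max().sum()),
        "minutes_loud": int(noise_events.resample("1min").max().sum()),
        "n_light_episodes": int((light_events.astype(int).diff() == 1).sum()),
        "mean_night_lux": float(night["lux"].mean()),
    }

# Hypothetical usage on one night of room-sensor data sampled at 1 Hz
ts = pd.date_range("2023-01-01 20:00", periods=12 * 3600, freq="s")
room = pd.DataFrame({"lux": 5.0, "spl_db": 45.0}, index=ts)
room.loc["2023-01-02 02:00":"2023-01-02 02:15", "lux"] = 300.0   # simulated care interruption
print(nightly_disruption_cues(room))
```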

Keywords: deep learning, delirium, healthcare, pervasive sensing

Procedia PDF Downloads 91