Search results for: stable platform
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3769

79 Fostering Non-Traditional Student Success in an Online Music Appreciation Course

Authors: Linda Fellag, Arlene Caney

Abstract:

E-learning has earned an essential place in academia because it promotes learner autonomy, student engagement, and technological aptitude, and allows for flexible learning. However, despite these advantages, educators have been slower to embrace e-learning for ESL and other non-traditional students for fear that such students will not succeed without the direct faculty contact and academic support of face-to-face classrooms. This study aims to determine whether a non-traditional-student-friendly online course can produce student retention and performance rates that compare favorably with those of students in standard online sections of the same course aimed at traditional college-level students. One Music faculty member is currently collaborating with an English instructor to redesign an online college-level Music Appreciation course for non-traditional college students. At Community College of Philadelphia, Introduction to Music Appreciation was recently designated as one of the few college-level courses that advanced ESL and developmental English students can take while completing their language studies. Beginning in Fall 2017, the course will be critical for international students who must maintain full-time student status under visa requirements. In its current online format, however, Music Appreciation is designed for traditional college students, and faculty who teach these sections have been reluctant to revise the course to address the needs of non-traditional students. Interestingly, the presenters maintain that the online platform is the ideal place to develop language and college readiness skills in at-risk students while maintaining the course's curricular integrity. The two faculty presenters describe how curriculum rather than technology drives the redesign of the digitized music course, and how self-study materials, guided assignments, and periodic assessments promote independent learning and comprehension of the material.
The 'scaffolded' modules allow ESL and developmental English students to build on prior knowledge, preview key vocabulary, discuss content, and complete graded tasks that demonstrate comprehension. Activities and assignments, in turn, enhance college success by allowing students to practice academic reading strategies, writing, speaking, and student-faculty and peer-peer communication and collaboration. The course components facilitate a comparison of student performance and retention between the redesigned and existing online sections of Music Appreciation, as well as with previous sections with at-risk students. Indirect, qualitative measures include student attitudinal surveys and evaluations. Direct, quantitative measures include withdrawal rates, tests of disciplinary knowledge, and final grades. The study will compare the outcomes of three cohorts in the two versions of the online course: ESL students, at-risk developmental students, and college-level students. These data will also be compared with retention and student outcomes data for the three cohorts in f2f Music Appreciation, which permitted non-traditional student enrollment from 1998-2005. During this eight-year period, the presenter addressed the problems of at-risk students by adding language and college success support, which resulted in strong retention and outcomes. The presenters contend that the redesigned course will produce favorable outcomes among all three cohorts because it contains components that proved successful with at-risk learners in f2f sections of the course. Results of the study will be published in 2019, after the redesigned online course has met for two semesters.

Keywords: college readiness, e-learning, music appreciation, online courses

Procedia PDF Downloads 177
78 Hybrid Materials on the Basis of Magnetite and Magnetite-Gold Nanoparticles for Biomedical Application

Authors: Mariia V. Efremova, Iana O. Tcareva, Anastasia D. Blokhina, Ivan S. Grebennikov, Anastasia S. Garanina, Maxim A. Abakumov, Yury I. Golovin, Alexander G. Savchenko, Alexander G. Majouga, Natalya L. Klyachko

Abstract:

During the last decades, magnetite nanoparticles (NPs) have attracted deep interest from scientists due to their potential application in therapy and diagnostics. However, magnetite nanoparticles are toxic and unstable under physiological conditions. To solve these problems, we decided to create two types of hybrid systems based on magnetite and gold, which is inert and biocompatible: gold as a shell material (first type) and gold as separate NPs interfacially bound to magnetite NPs (second type). The synthesis of the first type of hybrid nanoparticles was carried out as follows: magnetite nanoparticles with an average diameter of 9±2 nm were obtained by co-precipitation of iron (II, III) chlorides; they were then covered with a gold shell by iterative reduction of hydrogen tetrachloroaurate with hydroxylamine hydrochloride. According to the TEM, ICP-MS, and EDX data, the final nanoparticles had an average diameter of 31±4 nm and contained iron even after hydrochloric acid treatment. However, the iron signals (K-line, 7.1 keV) were not localized, so we cannot speak of a single magnetic core. These nanoparticles, coated with mercapto-PEG acid, were non-toxic to human prostate cancer PC-3/LNCaP cell lines (more than 90% of cells survived as compared to control) and had high R2-relaxivity rates (>190 mM-1s-1) that exceed the transverse relaxation rates of commercial MRI contrast agents. These nanoparticles were also used for immobilization of the enzyme chymotrypsin. An effect of an alternating magnetic field on the catalytic properties of chymotrypsin immobilized on magnetite nanoparticles was found, notably a 35-40% slowdown of the catalyzed reaction. The synthesis of the second type of hybrid nanoparticles also involved two steps.
Firstly, spherical gold nanoparticles with an average diameter of 9±2 nm were synthesized by the reduction of hydrogen tetrachloroaurate with oleylamine; secondly, they were used as seeds during magnetite synthesis by thermal decomposition of iron pentacarbonyl in octadecene. As a result, so-called dumbbell-like structures were obtained, in which magnetite (cubes with a 25±6 nm diagonal) and gold nanoparticles were connected pairwise. By HRTEM (for the first time for this type of structure), epitaxial growth of magnetite nanoparticles on the gold surface with co-orientation of the (111) planes was observed. These nanoparticles were transferred into water by means of the block copolymer Pluronic F127, then loaded with the anti-cancer drug doxorubicin and a PSMA vector specific to the LNCaP cell line. The obtained nanoparticles were found to have moderate toxicity for human prostate cancer cells and entered the intracellular space after 45 minutes of incubation (according to fluorescence microscopy data). These materials are also promising from the MRI point of view (R2-relaxivity rates >70 mM-1s-1). Thereby, in this work, magnetite-gold hybrid nanoparticles, which have a strong potential for biomedical application, particularly in targeted drug delivery and magnetic resonance imaging, were synthesized and characterized. This paves the way to the development of a special type of medicine: theranostics. The authors acknowledge financial support from the Ministry of Education and Science of the Russian Federation (14.607.21.0132, RFMEFI60715X0132). This work was also supported by Grant of the Ministry of Education and Science of the Russian Federation К1-2014-022, Grant of the Russian Scientific Foundation 14-13-00731, and MSU development program 5.13.

Keywords: drug delivery, magnetite-gold, MRI contrast agents, nanoparticles, toxicity

Procedia PDF Downloads 382
77 Absorptive Capabilities in the Development of Biopharmaceutical Industry: The Case of Bioprocess Development and Research Unit, National Polytechnic Institute

Authors: Ana L. Sánchez Regla, Igor A. Rivera González, María del Pilar Monserrat Pérez Hernández

Abstract:

The ability of an organization to identify and obtain useful information from external sources, assimilate it, transform it, and apply it to generate products or services with added value is called absorptive capacity. Absorptive capabilities give firms market opportunities and a leading position with respect to their competitors. The Bioprocess Development and Research Unit (UDIBI) is a research and development (R&D) laboratory that belongs to the National Polytechnic Institute (IPN), a higher education institute in Mexico. The UDIBI was created with the purpose of carrying out R&D activities for Transferon®, a biopharmaceutical product developed and patented by IPN. The evolution of its competences and of its scientific and technological platform led UDIBI to expand its scope by providing technological services (preclinical studies and bio-compatibility evaluation) to the national pharmaceutical and biopharmaceutical industries. The relevance of this study is that those industries are classified as being of high scientific and technological intensity, and yet, after a review of the state of the art, there is only one study of absorptive capabilities in the biopharmaceutical industry with a scope similar to this research; in the case of Mexico, there is none. In addition, UDIBI belongs to a public university, and its operation does not depend on the federal budget but on the income generated by its external technological services. This fact represents a highly remarkable case in the context of Mexico's public higher education. This doctoral research (2015-2019) is contextualized within a case study; its main objective is to identify and analyze the absorptive capabilities that characterize the UDIBI and that have allowed it to become one of the few laboratories authorized by the sanitary authority in Mexico to carry out bio-comparability studies of biopharmaceutical products. The fieldwork for this study is divided into two phases.
In the first phase, 15 interviews were conducted with UDIBI personnel, covering management levels, heads of services, project leaders, and laboratory personnel. These interviews were structured by a questionnaire designed to combine open questions with, to a lesser extent, questions answered on a Likert-type rating scale. From the information obtained in this phase, a scientific article was written (now under review), and presentation proposals were submitted to different academic forums. A second phase will consist of an ethnographic study within the organization, lasting about three months. In addition, it is intended to carry out interviews with external actors around the UDIBI (suppliers, advisors, and IPN officials), including contact with an academic specialized in absorptive capacities, to gather their comments on this thesis. The initial findings point in two directions: i) there are institutional, technological, and organizational management elements that encourage and/or limit the creation of absorptive capacities in this scientific and technological laboratory, and ii) UDIBI has created a set of knowledge and technology transfer mechanisms that have allowed it to build a large base of prior knowledge.

Keywords: absorptive capabilities, biopharmaceutical industry, high research and development intensity industries, knowledge management, transfer of knowledge

Procedia PDF Downloads 226
76 Differential Expression Analysis of Busseola fusca Larval Transcriptome in Response to Cry1Ab Toxin Challenge

Authors: Bianca Peterson, Tomasz J. Sańko, Carlos C. Bezuidenhout, Johnnie Van Den Berg

Abstract:

Busseola fusca (Fuller) (Lepidoptera: Noctuidae), the maize stem borer, is a major pest in sub-Saharan Africa. It causes economic damage to maize and sorghum crops and has evolved non-recessive resistance to genetically modified (GM) maize expressing the Cry1Ab insecticidal toxin. Since B. fusca is a non-model organism, very little genomic information is publicly available, and it is limited to some cytochrome c oxidase I, cytochrome b, and microsatellite data. The biology of B. fusca is well-described, but still poorly understood. This, in combination with its larval-specific behavior, may pose problems for limiting the spread of current resistant B. fusca populations or preventing resistance evolution in other susceptible populations. As part of on-going research into resistance evolution, B. fusca larvae were collected from Bt and non-Bt maize in South Africa, followed by RNA isolation (15 specimens) and sequencing on the Illumina HiSeq 2500 platform. The quality of reads was assessed with FastQC, after which Trimmomatic was used to trim adapters and remove low-quality, short reads. Trinity was used for the de novo assembly, whereas TransRate was used for assembly quality assessment. Transcript identification employed BLAST (BLASTn, BLASTp, and tBLASTx comparisons), for which two libraries (nucleotide and protein) were created from 3.27 million lepidopteran sequences. Several transcripts that have previously been implicated in Cry toxin resistance were identified for B. fusca. These included aminopeptidase N, cadherin, alkaline phosphatase, ATP-binding cassette transporter proteins, and mitogen-activated protein kinase. MEGA7 was used to align these transcripts to reference sequences from Lepidoptera to detect mutations that might potentially contribute to Cry toxin resistance in this pest. RSEM and Bioconductor were used to perform differential gene expression analysis on groups of B. fusca larvae challenged and unchallenged with the Cry1Ab toxin.
Pairwise expression comparisons of transcripts that were at least 16-fold differentially expressed at a false-discovery-corrected statistical significance of p ≤ 0.001 were extracted and visualized in a hierarchically clustered heatmap using R. A total of 329,194 transcripts with an N50 of 1,019 bp were generated from the over 167.5 million high-quality paired-end reads. Furthermore, 110 transcripts were over 10 kbp long, of which the largest was 29,395 bp. BLAST comparisons resulted in the identification of 157,099 (47.72%) transcripts, among which only 3,718 (2.37%) were identified as Cry toxin receptors from lepidopteran insects. Based on their expression profiles, transcripts were grouped into three subclusters with similar expression patterns. Several immune-related transcripts (pathogen recognition receptors, antimicrobial peptides, and inhibitors) were up-regulated in the larvae feeding on Bt maize, indicating an enhanced immune status in response to toxin exposure. Above all, extremely up-regulated arylphorin genes suggest that enhanced epithelial healing is one of the resistance mechanisms employed by B. fusca larvae against the Cry1Ab toxin. This study is the first to provide a resource base and some insights into a potential mechanism of Cry1Ab toxin resistance in B. fusca. The transcriptomic data generated in this study allow identification of genes that can be targeted by biotechnological improvements of GM crops.
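The fold-change and significance thresholds described above can be sketched in a few lines of Python; the transcript names, fold changes, and adjusted p-values below are illustrative placeholders, not data from the study:

```python
# Hypothetical sketch of the filtering step: keep transcripts that are at
# least 16-fold differentially expressed (up or down) at an FDR-corrected
# significance of p <= 0.001.

def filter_de_transcripts(results, min_fold=16.0, max_fdr=0.001):
    """results: list of (transcript_id, fold_change, fdr_adjusted_p)."""
    kept = []
    for tid, fold, fdr in results:
        # fold changes below 1 indicate down-regulation; use the larger ratio
        ratio = fold if fold >= 1.0 else 1.0 / fold
        if ratio >= min_fold and fdr <= max_fdr:
            kept.append(tid)
    return kept

toy = [
    ("arylphorin_1", 120.0, 1e-6),   # strongly up-regulated
    ("apn_3", 2.5, 1e-6),            # significant but below fold cutoff
    ("cadherin_2", 0.05, 5e-4),      # 20-fold down-regulated
    ("abc_c2", 18.0, 0.01),          # large fold change, fails FDR cutoff
]
print(filter_de_transcripts(toy))  # -> ['arylphorin_1', 'cadherin_2']
```

In a real RSEM/Bioconductor workflow this filtering happens on the full results table, but the two-cutoff logic is the same.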

Keywords: epithelial healing, Lepidoptera, resistance, transcriptome

Procedia PDF Downloads 204
75 Remote BioMonitoring of Mothers and Newborns for Temperature Surveillance Using a Smart Wearable Sensor: Techno-Feasibility Study and Clinical Trial in Southern India

Authors: Prem K. Mony, Bharadwaj Amrutur, Prashanth Thankachan, Swarnarekha Bhat, Suman Rao, Maryann Washington, Annamma Thomas, N. Sheela, Hiteshwar Rao, Sumi Antony

Abstract:

The disease burden among mothers and newborns is caused mostly by a handful of avoidable conditions occurring around the time of childbirth and within the first month following delivery. Real-time monitoring of vital parameters of mothers and neonates offers a potential opportunity to improve access to, as well as the quality of, care in vulnerable populations. We describe the design, development, and testing of an innovative wearable device for remote biomonitoring (RBM) of body temperatures in mothers and neonates in a hospital in southern India. The architecture consists of: [1] a low-cost, wearable sensor tag; [2] a gateway device for a 'real-time' communication link; [3] piggy-backing on a commercial GSM communication network; and [4] an algorithm-based data analytics system. Requirements for the device were: long battery life of up to 28 days (with a sampling frequency of 5/hr); robustness; IP 68 hermetic sealing; and human-centric design. We undertook pre-clinical laboratory testing followed by clinical trial phases I & IIa for evaluation of safety and efficacy in the following sequence: seven healthy adult volunteers; 18 healthy mothers; and three sets of babies: 3 healthy babies, 10 stable babies in the Neonatal Intensive Care Unit (NICU), and 1 baby with hypoxic ischaemic encephalopathy (HIE). The pebble-shaped sensor, about the thickness of three coins and weighing about 8 g, was secured onto the abdomen for the babies and over the upper arm for the adults. In the laboratory setting, the response time of the sensor device to attain thermal equilibrium with the surroundings was 4 minutes, vis-a-vis 3 minutes observed with a precision-grade digital thermometer used as a reference standard. The accuracy was ±0.1°C of the reference standard within the temperature range of 25-40°C. The adult volunteers, aged 20 to 45 years, contributed a total of 345 hours of readings over a 7-day period, and the postnatal mothers provided a total of 403 paired readings.
The mean skin temperatures measured in the adults by the sensor were about 2°C lower than the axillary temperature readings (sensor = 34.1 vs digital = 36.1); this difference was statistically significant (t-test = 13.8; p < 0.001). The healthy neonates provided a total of 39 paired readings; the mean difference in temperature was 0.13°C (sensor = 36.9 vs digital = 36.7; p = 0.2). The neonates in the NICU provided a total of 130 paired readings. Their mean skin temperature measured by the sensor was 0.6°C lower than that measured by the radiant warmer probe (sensor = 35.9 vs warmer probe = 36.5; p < 0.001). The neonate with HIE provided a total of 25 paired readings, with the mean sensor reading not different from the radiant warmer probe reading (sensor = 33.5 vs warmer probe = 33.5; p = 0.8). No major adverse events were noted in either the adults or the neonates; four adult volunteers reported mild sweating under the device/arm band, and one volunteer developed mild skin allergy. This proof-of-concept study shows that real-time monitoring of temperatures is technically feasible and that this innovation appears to be promising in terms of both safety and accuracy (with appropriate calibration) for improved maternal and neonatal health.
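The paired sensor-versus-reference comparisons reported above rest on a mean bias and a paired t-statistic; a minimal Python sketch follows, using made-up readings rather than the trial data:

```python
# Illustrative paired comparison: mean bias between sensor and reference
# readings and the paired t-statistic (n-1 degrees of freedom). The readings
# below are invented for demonstration only.
import math

def paired_t(sensor, reference):
    diffs = [s - r for s, r in zip(sensor, reference)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)  # compare against t distribution, df = n-1
    return mean_d, t

sensor = [34.0, 34.2, 33.9, 34.1, 34.3, 34.0]
axillary = [36.1, 36.0, 36.2, 36.1, 36.3, 36.0]
bias, t = paired_t(sensor, axillary)
print(bias, t)
```

With these invented numbers the bias is about -2°C, mirroring the skin-versus-axillary offset the study reports; a p-value would then come from the t distribution with n-1 degrees of freedom.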

Keywords: public health, remote biomonitoring, temperature surveillance, wearable sensors, mothers and newborns

Procedia PDF Downloads 210
74 Insights on Nitric Oxide Interaction with Phytohormones in Rice Root System Response to Metal Stress

Authors: Piacentini Diego, Della Rovere Federica, Fattorini Laura, Lanni Francesca, Cittadini Martina, Altamura Maria Maddalena, Falasca Giuseppina

Abstract:

Plants have evolved sophisticated mechanisms to cope with environmental cues. Changes in the intracellular content and distribution of phytohormones, such as the auxin indole-3-acetic acid (IAA), are involved in morphogenic adaptation to environmental stresses. In addition to phytohormones, plants can rely on a plethora of small signal molecules able to promptly sense and transduce stress signals, resulting in morpho/physiological responses, thanks also to their capacity to modulate the levels, distribution, and reception of most hormones. Among these signaling molecules, nitrogen monoxide (nitric oxide, NO) is a critical component in several plant acclimation strategies to both biotic and abiotic stresses. Depending on its levels, NO increases plant adaptation by enhancing the enzymatic or non-enzymatic antioxidant systems or by acting as a direct scavenger of the reactive oxygen/nitrogen species (ROS/RNS) produced during the stress. In addition, exogenous applications of NO-specific donor compounds have shown the involvement of this signal molecule in auxin metabolism, transport, and signaling, under both physiological and stress conditions. However, the complex mechanisms underlying NO action in interacting with phytohormones, such as auxins, during metal stress responses are still poorly understood and need to be better investigated. Emphasis must be placed on the response of the root system since it is the first plant organ system to be exposed to metal soil pollution. The monocot Oryza sativa L. (rice) was chosen given its importance as a staple food for some 4 billion people worldwide. In addition, increasing evidence has shown that rice is often grown in contaminated paddy soils with high levels of the heavy metal cadmium (Cd) and the metalloid arsenic (As). The ease with which these metals are taken up by rice roots and transported to the aerial organs, up to the edible caryopses, makes rice one of the most relevant sources of these pollutants for humans.
This study aimed to evaluate whether NO has a mitigatory activity in the roots of rice seedlings against Cd or As toxicity and to understand whether this activity requires interactions with auxin. Our results show that exogenous treatments with the NO donor SNP alleviate the stress induced by Cd, but not by As, in in-vitro-grown rice seedlings through increased intracellular root NO levels. The damage induced by the pollutants includes root growth inhibition, root histological alterations, and the production of ROS (H2O2, O2●ˉ) and RNS (ONOOˉ). SNP treatments also mitigate both the increase in root IAA levels and the alteration in IAA distribution, monitored by the OsDR5::GUS system, caused by exposure to the toxic metals. Notably, the SNP-induced mitigation of the IAA homeostasis altered by the pollutants does not involve changes in the expression of the IAA-biosynthetic genes OsYUCCA1 and ASA2. Taken together, the results highlight a mitigating role of NO in the rice root system, which is pollutant-specific and involves the interaction of this signal molecule with both IAA and brassinosteroids at multiple levels (i.e., transport, content, and distribution; transcriptional and post-translational regulation). The research is supported by Progetti Ateneo Sapienza University of Rome, grant number RG120172B773D1FF.

Keywords: arsenic, auxin, cadmium, nitric oxide, rice, root system

Procedia PDF Downloads 80
73 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies

Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König

Abstract:

Online-based research has recently gained increasing attention from various fields of research in the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework), and online analysis (IPython notebook) offer rich possibilities to improve, validate, and speed up research. However, to date there is no cross-platform integration of these subsystems. Furthermore, online studies still suffer from complex implementation (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without the need to write a single line of code. In particular, our framework offers the possibility to manipulate and combine the experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimulus properties, or store participants’ responses. Besides traditional recordings such as reaction times and mouse and keyboard presses, the tool offers webcam-based eye and face tracking. On top of these features, our framework also takes care of participant recruitment via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, built-in Google Translate functionality ensures automatic text translation of the experimental content. Thereby, thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online and then exported into the desired formats (csv, xls, sav, mat) for statistical analysis. Alternatively, the data can also be analyzed online within our framework using the integrated IPython notebook.
The framework was designed such that studies can be used interchangeably between researchers. This will not only support the idea of open data repositories but also make it possible to share and reuse experimental designs and analyses, thereby improving the validity of the paradigms. In particular, sharing and integrating experimental designs and analyses will lead to increased consistency of experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted using the framework. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences with respect to the factors culture, gender, and age. Overall, our results demonstrate a strong influence of cultural factors on spatial cognition. Such an influence has not been reported before and could not have been shown without the massive amount of data collected via our framework. In fact, these findings shed new light on cultural differences in spatial navigation. As a consequence, we conclude that our new framework offers a wide range of advantages for online research and constitutes a methodological innovation by which new insights can be revealed on the basis of massive data collection.
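The kind of post-export analysis the authors describe (grouping exported trial records by a demographic factor) might look roughly like this in Python; the CSV layout and field names here are hypothetical, not the framework's actual export format:

```python
# Sketch of analyzing an exported CSV: mean task performance per group.
# The records and columns below are invented for illustration.
import csv
import io
from collections import defaultdict
from statistics import mean

exported = """participant,culture,score
p1,A,0.82
p2,A,0.78
p3,B,0.91
p4,B,0.87
"""

def group_means(csv_text, factor="culture", measure="score"):
    """Mean of `measure` for each level of `factor` in a CSV export."""
    groups = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups[row[factor]].append(float(row[measure]))
    return {g: round(mean(vals), 2) for g, vals in groups.items()}

print(group_means(exported))  # -> {'A': 0.8, 'B': 0.89}
```

The same grouping generalizes to the factors the study analyzed (culture, gender, age) once the export contains those columns.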

Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition

Procedia PDF Downloads 258
72 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology

Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey

Abstract:

In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This calls for software capable of quickly processing and reliably visualizing diffusion data, equipped with tools for analyzing them in terms of different tasks. We are developing the «MRDiffusionImaging» software in standard C++. The domain-specific part has been moved to separate class libraries and can be used on various platforms. The user interface is built with Windows WPF (Windows Presentation Foundation), a technology for building Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize, and set properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI), and it allows loading and viewing images sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues that are not related to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has also been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is geometrically interpreted as an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in one algorithm for segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF).
A tool for calculating the mean diffusion coefficient and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and to further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one using the Hough transform. The proposed algorithms test candidate curves in each voxel, assigning to each one a score computed from the diffusion data, and then select the curves with the highest scores as potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the irradiated volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and of stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing and analysis, and for inclusion in the process of radiotherapy planning and the evaluation of its results.
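The quantitative maps mentioned above are built from the eigenvalues of the per-voxel diffusion tensor; the textbook formulas for mean diffusivity (MD) and fractional anisotropy (FA) can be sketched as follows (this illustrates the standard formulas, not the internals of MRDiffusionImaging):

```python
# MD and FA from the three eigenvalues of the diffusion tensor in one voxel.
import math

def md_fa(l1, l2, l3):
    """Mean diffusivity and fractional anisotropy from tensor eigenvalues."""
    md = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = math.sqrt(0.5) * num / den  # FA ranges from 0 (isotropic) to 1
    return md, fa

# eigenvalues in mm^2/s: CSF-like isotropic diffusion gives FA near 0,
# white-matter-like anisotropic diffusion gives FA much closer to 1
iso_md, iso_fa = md_fa(3.0e-3, 3.0e-3, 3.0e-3)
wm_md, wm_fa = md_fa(1.7e-3, 0.3e-3, 0.3e-3)
print(iso_fa, wm_fa)
```

Applying these two formulas voxel by voxel yields exactly the MD and FA maps used for the clinical tasks described.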

Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography

Procedia PDF Downloads 85
71 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, all the more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of the estimation and, on the other hand, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study.
They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The histories of some randomly sampled photon bundles were recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the Line by Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed with the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared with the other cases. There is great scope for machine learning models to further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various network architectures, will be explored so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this as-yet unexplored domain.
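The variance advantage of low-discrepancy sequences over pseudo-random sampling can be illustrated with a minimal one-dimensional quadrature sketch. The integrand below is a toy stand-in, not the radiative transfer estimator itself, and uses SciPy's scrambled Sobol generator:

```python
import numpy as np
from scipy.stats import qmc

# Toy integrand standing in for a photon-transport estimator:
# the integral of exp(-x) over [0, 1] is exactly 1 - exp(-1).
f = lambda x: np.exp(-x)
exact = 1.0 - np.exp(-1.0)

n = 2**12  # sample count (a power of two, as Sobol sequences prefer)

# Standard pseudo-random Monte Carlo estimate
rng = np.random.default_rng(0)
mc_est = f(rng.random(n)).mean()

# Quasi-Monte Carlo estimate with a scrambled Sobol sequence
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
qmc_est = f(sobol.random(n).ravel()).mean()
```

For a smooth integrand like this, the QMC error decays nearly as O(1/n) rather than the O(1/sqrt(n)) of plain Monte Carlo, which is the convergence behaviour the study exploits.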

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 225
70 Single Cell Omics Experiments as a Tool for Benchmarking Bioinformatics Oncology Data Analysis Tools

Authors: Maddalena Arigoni, Maria Luisa Ratto, Raffaele A. Calogero, Luca Alessandri

Abstract:

The presence of tumor heterogeneity, where distinct cancer cells exhibit diverse morphological and phenotypic profiles, including gene expression, metabolism, and proliferation, poses challenges for molecular prognostic markers and patient classification for targeted therapies. Understanding the causes and progression of cancer requires research efforts aimed at characterizing this heterogeneity, which can be facilitated by evolving single-cell sequencing technologies. However, analyzing single-cell data necessitates computational methods that often lack objective validation. Therefore, the establishment of benchmarking datasets is necessary to provide a controlled environment for validating bioinformatics tools in the field of single-cell oncology. Because purpose-built benchmarking experiments are expensive, the datasets used for benchmarking are typically sourced from publicly available experiments, which often lack comprehensive cell annotation. This limitation can affect the accuracy and effectiveness of such experiments as benchmarking tools. To address this issue, we introduce omics benchmark experiments designed to evaluate bioinformatics tools that depict the heterogeneity in single-cell tumor experiments. We conducted single-cell RNA sequencing on six lung cancer tumor cell lines that display resistant clones upon treatment of EGFR-mutated tumors and are characterized by driver genes, namely ROS1, ALK, HER2, MET, KRAS, and BRAF. These driver genes are associated with downstream networks controlled by EGFR mutations, such as JAK-STAT, PI3K-AKT-mTOR, and MEK-ERK. The experiment also featured an EGFR-mutated cell line. Using the 10x Genomics platform with CellPlex technology, we analyzed the seven cell lines together with a pseudo-immunological microenvironment consisting of PBMC cells labeled with the BioLegend TotalSeq™-B Human Universal Cocktail (CITEseq).
This technology allowed for independent labeling of each cell line and single-cell analysis of the pooled seven cell lines and the pseudo-microenvironment. The data generated from the aforementioned experiments are available as part of an online tool, which allows users to define cell heterogeneity and generates count tables as an output. The tool provides the cell line derivation for each cell, and cell annotations for the pseudo-microenvironment based on CITEseq data curated by an experienced immunologist. Additionally, we created a range of pseudo-tumor tissues using different ratios of the aforementioned cells embedded in Matrigel. These tissues were analyzed using the 10x Genomics (FFPE samples) and Curio Bioscience (fresh frozen samples) platforms for spatial transcriptomics, further expanding the scope of our benchmark experiments. The benchmark experiments we conducted provide a unique opportunity to evaluate the performance of bioinformatics tools for detecting and characterizing tumor heterogeneity at the single-cell level. Overall, our experiments provide a controlled and standardized environment for assessing the accuracy and robustness of bioinformatics tools for studying tumor heterogeneity at the single-cell level, which can ultimately lead to more precise and effective cancer diagnosis and treatment.

Keywords: single cell omics, benchmark, spatial transcriptomics, CITEseq

Procedia PDF Downloads 119
69 Exploring Perspectives and Complexities of E-tutoring: Insights from Students Opting out of Online Tutor Services

Authors: Prince Chukwuneme Enwereji, Annelien Van Rooyen

Abstract:

In recent years, technology integration in education has transformed the learning landscape, particularly in online institutions. One technological advancement that has gained popularity is e-tutoring, which offers personalized academic support to students through online platforms. While e-tutoring has become well established and has been adopted to promote collaborative learning, there are still students who do not use these services for various reasons, and little attention has been given to understanding the perspectives of students who have not utilized them. The research objectives included identifying the perceived benefits that non-e-tutoring students believe e-tutoring could offer, such as enhanced academic support, personalized learning experiences, and improved performance. Additionally, the study explored the potential drawbacks or concerns that non-e-tutoring students associate with e-tutoring, such as concerns about efficacy, a lack of face-to-face interaction, and platform accessibility. The study adopted a quantitative research approach with a descriptive design to gather and analyze data on non-e-tutoring students' perspectives. Online questionnaires were employed as the primary data collection method, allowing for the efficient collection of data from a large number of participants. The collected data were analyzed using the Statistical Package for the Social Sciences (SPSS). Ethical principles such as informed consent, anonymity of responses, and protection of respondents from harm were upheld. Findings indicate that non-e-tutoring students perceive a sense of control over their own pace of learning, suggesting a preference for self-directed learning and the ability to tailor their educational experience to their individual needs and learning styles.
They also exhibit high levels of motivation, believe in their ability to participate effectively in their studies and organize their academic work, and feel comfortable studying on their own without the help of e-tutors. However, non-e-tutoring students feel that e-tutors do not sufficiently address their academic needs and lack engagement. They also perceive a lack of clarity in the roles of e-tutors, leading to uncertainty about their responsibilities. In terms of communication, students feel overwhelmed by the volume of announcements and find repetitive information frustrating. Additionally, some students face challenges with their internet connection and the associated cost, which can hinder their participation in online activities. Furthermore, non-e-tutoring students express a desire for interaction with their peers and a sense of belonging to a group or team. They value opportunities for collaboration and teamwork in their learning experience, as well as the fostering of social interactions and a sense of community in online learning environments. The study recommends that students seek alternative support systems by reaching out to professors or academic advisors for guidance and clarification. Developing self-directed learning skills is essential, empowering students to take charge of their own learning by setting objectives, creating their own study plans, and utilizing resources. Higher education institutions (HEIs) should ensure that a variety of support services are available to cater to the needs of all students, including non-e-tutoring students. HEIs should also ensure easy access to online resources, promote a supportive community, and regularly evaluate and adapt their support techniques to meet students' changing requirements.

Keywords: online tutor, student support, online education, educational practices, distance education

Procedia PDF Downloads 83
68 Simulation of Multistage Extraction Process of Co-Ni Separation Using Ionic Liquids

Authors: Hongyan Chen, Megan Jobson, Andrew J. Masters, Maria Gonzalez-Miquel, Simon Halstead, Mayri Diaz de Rienzo

Abstract:

Ionic liquids offer excellent advantages over conventional solvents for the industrial extraction of metals from aqueous solutions, where such extraction processes bring opportunities for recovery, reuse, and recycling of valuable resources and more sustainable production pathways. Recent research on the use of ionic liquids for extraction confirms their high selectivity and low volatility, but there is relatively little focus on how their properties can be best exploited in practice. This work addresses gaps in research on process modelling and simulation, to support the development, design, and optimisation of these processes, focusing on the separation of the highly similar transition metals cobalt and nickel. The study exploits published experimental results, as well as new experimental results, relating to the separation of Co and Ni using trihexyl(tetradecyl)phosphonium chloride. This extraction agent is attractive because it is cheaper, more stable, and less toxic than fluorinated hydrophobic ionic liquids. The process modelling work concerns the selection and/or development of suitable models for the physical properties, the distribution coefficients, and the mass transfer phenomena, as well as models of the extractor unit and of the multi-stage extraction flowsheet. The distribution coefficient model for cobalt and HCl represents an anion exchange mechanism, supported by the literature and by COSMO-RS calculations. Parameters of the distribution coefficient models are estimated by fitting the model to published experimental extraction equilibrium results. The mass transfer model applies Newman's hard sphere model. Diffusion coefficients in the aqueous phase are obtained from the literature, while diffusion coefficients in the ionic liquid phase are fitted to dynamic experimental results. The mass transfer area is calculated from the surface mean diameter of liquid droplets of the dispersed phase, estimated from the Weber number inside the extractor.
New experiments measure the interfacial tension between the aqueous and ionic liquid phases. Empirical models for predicting the density and viscosity of solutions under different metal loadings are also fitted to new experimental data. The extractor is modelled as a continuous stirred tank reactor with mass transfer between the two phases and perfect phase separation of the outlet flows. A multi-stage separation flowsheet simulation is set up to replicate a published experiment and compare model predictions with the experimental results. This simulation model is implemented in the gPROMS software for dynamic process simulation. The results of single-stage and multi-stage flowsheet simulations are shown to be in good agreement with the published experimental results. The estimated diffusion coefficient of cobalt in the ionic liquid phase is in reasonable agreement with published data for the diffusion coefficients of various metals in this ionic liquid. A sensitivity study with the simulation model demonstrates the usefulness of the models for process design. The simulation approach has the potential to be extended to account for other metals, acids, and solvents for the development, design, and optimisation of extraction processes applying ionic liquids for metal separations, although a lack of experimental data currently limits the accuracy of the models within the whole framework. Future work will focus on process development more generally and on the extractive separation of rare earths using ionic liquids.
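The extractor model described above, a well-mixed unit with interphase mass transfer, can be sketched as a pair of coupled balances in which a film-type driving force moves the solute toward its distribution equilibrium. All parameter values here are hypothetical placeholders, not the fitted values from the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (hypothetical, not fitted values from the study)
D = 10.0              # distribution coefficient c_org*/c_aq at equilibrium
kLa = 0.05            # overall volumetric mass-transfer coefficient, 1/s
Vaq, Vorg = 1.0, 1.0  # phase volumes, L

def rates(t, y):
    c_aq, c_org = y
    # Two-film driving force: aqueous concentration vs. its equilibrium value
    flux = kLa * (c_aq - c_org / D)          # mol/(L*s), per unit aqueous volume
    return [-flux, flux * Vaq / Vorg]        # solute leaves aqueous, enters organic

# Start with all solute (1 mol/L) in the aqueous phase, run a 600 s batch contact
sol = solve_ivp(rates, (0.0, 600.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
c_aq_end, c_org_end = sol.y[:, -1]
# At long times the phase ratio c_org/c_aq approaches D
```

Mass is conserved between the phases, and the long-time concentration ratio recovers the distribution coefficient, which is the basic consistency check a stage model of this kind must satisfy.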

Keywords: distribution coefficient, mass transfer, COSMO-RS, flowsheet simulation, phosphonium

Procedia PDF Downloads 191
67 Relevance of Dosing Time for Everolimus Toxicity in Respect to the Circadian P-Glycoprotein Expression in Mdr1a::Luc Mice

Authors: Narin Ozturk, Xiao-Mei Li, Sylvie Giachetti, Francis Levi, Alper Okyar

Abstract:

P-glycoprotein (P-gp, MDR1, ABCB1) is a transmembrane protein acting as an ATP-dependent efflux pump; it functions as a biological barrier by extruding drugs and xenobiotics out of cells in healthy tissues, especially the intestines, liver, and brain, as well as in tumor cells. The circadian timing system controls a variety of biological functions in mammals, including xenobiotic metabolism and detoxification and proliferation and cell cycle events, and may affect the pharmacokinetics, toxicity, and efficacy of drugs. The selective mTOR (mammalian target of rapamycin) inhibitor everolimus is an immunosuppressant and anticancer drug that is active against many cancers, and its pharmacokinetics depend on P-gp. The aim of this study was to investigate the dosing time-dependent toxicity of everolimus with respect to intestinal P-gp expression rhythms in mdr1a::Luc mice using the Real Time-Biolumicorder (RT-BIO) System. Mdr1a::Luc male mice were synchronized to 12 h of light and 12 h of dark (LD12:12, with Zeitgeber Time 0 – ZT0 – corresponding to light onset). After 1 week of baseline recordings, everolimus (5 mg/kg/day x 14 days) was administered orally at ZT1 (resting period) and ZT13 (activity period) to mdr1a::Luc mice singly housed in an innovative monitoring device, the Real Time-Biolumicorder unit, which allows real-time, long-term monitoring of gene expression in freely moving mice. D-luciferin (1.5 mg/mL) was dissolved in the drinking water. The mouse intestinal mdr1a::Luc oscillation profile, reflecting P-gp gene expression, and the locomotor activity pattern were recorded every minute with the photomultiplier tube and infrared sensor, respectively. General behavior and clinical signs were monitored, and body weight was measured every day as an index of toxicity. Drug-induced body weight change was expressed relative to body weight on the initial treatment day. Statistical significance of differences between groups was assessed with ANOVA. Circadian rhythms were validated with cosinor analysis.
Everolimus toxicity changed as a function of drug timing and was least following dosing at ZT13, near the onset of the activity span in male mice. Mean body weight loss was nearly twice as large in mice treated with 5 mg/kg everolimus at ZT1 as compared to ZT13 (8.9% vs. 5.4%; ANOVA, p < 0.001). Based on the body weight loss and clinical signs upon everolimus treatment, tolerability was best following dosing at ZT13. Both rest-activity and mdr1a::Luc expression displayed stable 24-h periodic rhythms, both before everolimus treatment and in vehicle-treated controls. The real-time bioluminescence pattern of mdr1a revealed a circadian rhythm with a 24-h period and an acrophase at ZT16 (cosinor, p < 0.001). Mdr1a expression remained rhythmic in everolimus-treated mice, whereas down-regulation of P-gp expression was observed in 2 of 4 mice. The study identified the circadian pattern of intestinal P-gp expression with unprecedented precision. Circadian timing, through its dependence on P-gp expression rhythms, may play a crucial role in the tolerability/toxicity of everolimus. The circadian changes in mdr1a expression deserve further study regarding their relevance for the in vitro and in vivo chronotolerance of mdr1a-transported anticancer drugs. Chronotherapy with P-gp-effluxed anticancer drugs could then be timed according to the rhythmic patterns in host and tumor to jointly maximize treatment efficacy and minimize toxicity.
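The cosinor analysis used to validate the 24-h rhythm amounts to a single-component least-squares fit of a cosine with known period; mesor, amplitude, and acrophase fall out of the regression coefficients. The sketch below uses synthetic data with the acrophase placed at ZT16 to mirror the reported mdr1a peak, not the actual recordings:

```python
import numpy as np

# Synthetic 24-h rhythmic signal with an acrophase at ZT16 (illustrative data)
rng = np.random.default_rng(1)
t = np.arange(0, 72, 0.5)                    # hours, three full cycles
acrophase, mesor, amp = 16.0, 100.0, 20.0
y = mesor + amp * np.cos(2*np.pi*(t - acrophase)/24) + rng.normal(0, 2, t.size)

# Single-component cosinor: linear least squares on cosine/sine regressors,
# since M + A*cos(w*(t - phi)) = M + A*cos(w*phi)*cos(w*t) + A*sin(w*phi)*sin(w*t)
w = 2*np.pi/24
X = np.column_stack([np.ones_like(t), np.cos(w*t), np.sin(w*t)])
b0, bc, bs = np.linalg.lstsq(X, y, rcond=None)[0]
amp_hat = np.hypot(bc, bs)                   # estimated amplitude
phase_hat = (np.arctan2(bs, bc) / w) % 24    # estimated acrophase, ZT hours
```

Because the model is linear in the cosine and sine coefficients, the fit is a closed-form regression; significance of the rhythm is then tested against the zero-amplitude null.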

Keywords: circadian rhythm, chronotoxicity, everolimus, mdr1a::Luc mice, p-glycoprotein

Procedia PDF Downloads 342
66 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction

Authors: Mingxin Li, Liya Ni

Abstract:

Autonomous parking is a valuable feature applicable to many robotics applications such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot is able to park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped only with a front camera, due to the camera view being limited by the robot's height and the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with only a front camera was achieved in a four-step process. Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots from the view angle of a small-scale robot. The dataset of images was divided into 70% for training and the remaining 30% for validation; an average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection, followed by computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and center of the available parking space were predicted based on the robot's kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the position of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center when that became visible to the robot again.
The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point. The left and right wheel speeds were then obtained using inverse kinematics and sent to the motor driver. All four of the above-mentioned subtasks were successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, and the motion control and image capture conducted on a self-built small-scale differential-drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform with an upgrade to an Nvidia Jetson Nano board, which provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking spaces and their boundary lines; and improvement of performance after the hardware/software integration is completed.
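The kinematics-based target prediction rests on standard differential-drive dead reckoning: integrating the pose from wheel speeds lets a target seen earlier be tracked in the robot's local frame after it leaves the camera's field of view. A minimal sketch of the pose update, with hypothetical wheel dimensions rather than the robot's actual parameters:

```python
import math

# Hypothetical geometry, not the parameters of the robot in the study
WHEEL_RADIUS = 0.03   # m
WHEEL_BASE = 0.15     # m, distance between the two drive wheels

def update_pose(x, y, theta, w_left, w_right, dt):
    """Advance pose (x, y, theta) given wheel angular speeds (rad/s) over dt."""
    v = WHEEL_RADIUS * (w_right + w_left) / 2.0             # linear velocity
    omega = WHEEL_RADIUS * (w_right - w_left) / WHEEL_BASE  # angular velocity
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Equal wheel speeds give straight-line motion; unequal speeds give a turn.
pose = (0.0, 0.0, 0.0)
for _ in range(100):      # 1 s of straight driving, integrated in 10 ms steps
    pose = update_pose(*pose, 10.0, 10.0, 0.01)
```

Inverse kinematics for the control step is the same pair of relations solved the other way: given the commanded v and omega, the wheel speeds are w_left = (v - omega*WHEEL_BASE/2)/WHEEL_RADIUS and w_right = (v + omega*WHEEL_BASE/2)/WHEEL_RADIUS.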

Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning

Procedia PDF Downloads 133
65 An Elasto-Viscoplastic Constitutive Model for Unsaturated Soils: Numerical Implementation and Validation

Authors: Maria Lazari, Lorenzo Sanavia

Abstract:

The mechanics of unsaturated soils has been an active field of research in recent decades. Efficient constitutive models that take into account the partial saturation of soil are necessary to solve a number of engineering problems, e.g. the instability of slopes and cuts due to heavy rainfall. A large number of constitutive models can now be found in the literature that consider fundamental issues associated with unsaturated soil behaviour, like the volume change and shear strength behaviour with suction or saturation changes. Partially saturated soils may either expand or collapse upon wetting, depending on the stress level, and it is also possible that a soil experiences a reversal in volumetric behaviour during wetting. The shear strength of soils also changes dramatically with changes in the degree of saturation; a related engineering problem is slope failure caused by rainfall. Several state-of-the-art reviews of the topic have appeared in recent years, usually providing a thorough discussion of the stress state, the advantages and disadvantages of specific constitutive models, and the latest developments in the area of unsaturated soil modelling. However, only a few studies have focused on the coupling between partial saturation states and time effects on the behaviour of geomaterials. Rate dependency is experimentally observed in the mechanical response of granular materials, and a viscoplastic constitutive model is capable of reproducing creep and relaxation processes. Therefore, in this work an elasto-viscoplastic constitutive model for unsaturated soils is proposed and validated on the basis of experimental data. The model constitutes an extension of an existing elastoplastic strain-hardening constitutive model capable of capturing the behaviour of variably saturated soils, based on energy-conjugated stress variables in the framework of superposed continua.
The purpose was to develop a model able to deal with possible mechanical instabilities within a consistent energy framework. The model shares the conceptual structure of the elastoplastic laws proposed for bonded geomaterials subject to weathering or diagenesis and is capable of modelling several kinds of instabilities induced by the loss of hydraulic bonding contributions. The novelty of the proposed formulation is enhanced by the incorporation of density-dependent stiffness and hardening coefficients, allowing the pycnotropic behaviour of granular materials to be modelled with a single set of material constants. The model has been implemented in the commercial FE platform PLAXIS, widely used in Europe for advanced geotechnical design. The algorithmic strategies adopted for the stress-point algorithm had to be revised to take into account the different approach adopted by the PLAXIS developers in the solution of the discrete non-linear equilibrium equations. An extensive comparison of the model with a series of experimental data reported by different authors is presented to validate it and illustrate the capabilities of the newly developed formulation. After the validation, the effectiveness of the viscoplastic model is demonstrated by numerical simulations of a partially saturated slope failure at laboratory scale, and the effects of viscosity and degree of saturation on the slope's stability are discussed.
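The rate dependency that motivates the viscoplastic extension, creep and relaxation, can be illustrated with a minimal one-dimensional Perzyna-type overstress model. This is an illustrative toy with made-up parameters, not the paper's multiaxial unsaturated formulation: at fixed total strain, the stress relaxes viscously toward the yield stress.

```python
# Minimal 1-D Perzyna-type overstress model (illustrative parameters only):
# sigma = E * (eps_total - eps_vp), and viscoplastic strain accumulates at a
# rate proportional to the positive part of the normalized overstress.
E = 100.0e6       # elastic modulus, Pa
sigma_y = 1.0e6   # yield stress, Pa
gamma = 1e-4      # fluidity parameter, 1/s

eps_total = 2.0e-2          # total strain held constant -> relaxation test
eps_vp = 0.0                # viscoplastic strain
dt, steps = 0.1, 20000      # explicit time integration over 2000 s
history = []
for _ in range(steps):
    sigma = E * (eps_total - eps_vp)
    overstress = max((sigma - sigma_y) / sigma_y, 0.0)  # Macaulay bracket
    eps_vp += gamma * overstress * dt                   # viscoplastic flow rule
    history.append(sigma)
# history[0] starts at E*eps_total = 2 MPa; history[-1] approaches sigma_y
```

With these numbers the relaxation time constant is sigma_y/(E*gamma) = 100 s, so the 2000 s simulation is fully relaxed; a purely elastoplastic model, by contrast, has no such time scale and cannot reproduce relaxation at all.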

Keywords: PLAXIS software, slope, unsaturated soils, viscoplasticity

Procedia PDF Downloads 225
64 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and property. Generally, the severity of a fire is calculated using the Normalized Burn Ratio (NBR) index. This is performed by comparing images obtained before and after the fire: the dNBR is calculated from the bitemporal difference of the preprocessed satellite images, and the area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using classification levels proposed by the USGS and comprises seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study is carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing, with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI is chosen as a medium-spatial-resolution (10–20 m) sensor for regular burnt-area severity mapping. The tool uses machine learning classification techniques to identify burnt areas using the NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns in fire severity mapping. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. The tool includes a Graphical User Interface (GUI) to make it user-friendly.
The advantage of this tool is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate the performance of the tool. The Blue Mountains National Park forest affected by the Australian fire season of 2019-2020 is used to describe the workflow of the WWSAT. At this site, more than 7,809 km2 of burnt area was detected using Sentinel-2 data, giving an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt, of which 17.29% was high severity, 19.63% moderate-high severity, 22.35% moderate-low severity, and 27.51% low severity. The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned, of which 2.73% was high severity, 1.57% moderate-high severity, 1.18% moderate-low severity, and 5.45% low severity. These spots can also be detected through visual inspection, made possible by the cloud-free images generated by WWSAT. The tool is cost-effective in calculating the burnt area, since satellite images are free and the cost of field surveys is avoided.
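The NBR/dNBR workflow underlying the tool can be sketched in a few lines. The band reflectances below are invented example values, and the seven severity class boundaries are the commonly cited USGS burn-severity thresholds, used here for illustration:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared bands."""
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir)

def severity(dnbr):
    """Map a dNBR value to one of seven burn-severity classes
    (commonly cited USGS thresholds, assumed here for illustration)."""
    bins = [-0.25, -0.1, 0.1, 0.27, 0.44, 0.66]
    labels = ["enhanced regrowth, high", "enhanced regrowth, low", "unburnt",
              "low severity", "moderate-low severity",
              "moderate-high severity", "high severity"]
    return labels[int(np.digitize(dnbr, bins))]

# Bitemporal difference: pre-fire NBR minus post-fire NBR.
# After a burn, NIR reflectance drops and SWIR reflectance rises.
pre = nbr(0.60, 0.20)    # healthy vegetation
post = nbr(0.25, 0.40)   # burnt surface
dnbr = pre - post
```

The same per-pixel arithmetic applied to cloud-free pre- and post-fire composites is what the GEE implementation scales to whole scenes.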

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 238
63 Enabling Rather Than Managing: Organizational and Cultural Innovation Mechanisms in a Heterarchical Organization

Authors: Sarah M. Schoellhammer, Stephen Gibb

Abstract:

Bureaucracy, in particular its core element, a formal and stable hierarchy of authority, is proving less and less appropriate under the conditions of today's knowledge economy. Centralization and formalization have consistently been found to hinder innovation, undermining cross-functional collaboration, personal responsibility, and flexibility. With its focus on systematically planning, controlling, and monitoring the development of new or improved solutions for customers, even innovation management as a discipline is to a significant extent based on a mechanistic understanding of organizations. The most important drivers of innovation, human creativity and initiative, however, can be more hindered than supported by central elements of classic innovation management, such as predefined innovation strategies, rigid stage-gate processes, and decisions made in management gate meetings. Heterarchy, as an alternative network form of organization, is essentially characterized by its dynamic influence structures, whereby the greatest influence is allocated by the collective to the persons perceived as the most competent on a certain issue. Theoretical arguments that this non-hierarchical concept supports innovation better than bureaucracy have been supported by empirical research. These prior studies focus either on the structure and general functioning of non-hierarchical organizations or on their innovativeness, that is, innovation as an outcome. Complementing classic innovation management approaches, this work aims to shed light on how innovations are initiated and realized in heterarchies, in order to identify alternative solutions practiced under the conditions of the post-bureaucratic organization. Through an initial individual case study, which is part of a multiple-case project, the innovation practices of an innovative and highly heterarchical medium-sized company in the German fire engineering industry are investigated.
In a pragmatic mixed-methods approach, media resonance, company documents, and workspace architecture are analyzed, in addition to qualitative interviews with the CEO and employees of the case company, as well as a quantitative survey aiming to characterize the company along five scaled dimensions of a heterarchy spectrum. The analysis reveals some similarities and striking differences compared with the approaches suggested by classic innovation management. The studied heterarchy has no predefined innovation strategy guiding new product and service development. Instead, strategic direction is provided by the CEO, described as visionary and creative. Procedures for innovation are hardly formalized, with new product ideas being evaluated on the basis of gut feeling and flexible, rather general criteria. With employees still hesitant to take responsibility and make decisions, hierarchical influence remains prominent. Described as open-minded and collaborative, the company's culture and leadership were found to be largely congruent with definitions of innovation culture. Overall, innovation efforts at the case company tend to be coordinated more through cultural than through formal organizational mechanisms. To better enable innovation in mainstream organizations, practitioners are recommended not to limit changes to reducing the central elements of the bureaucratic organization, formalization and centralization. The freedoms this entails need to be sustained through cultural coordination mechanisms, with personal initiative and responsibility on the part of employees as well as common innovation-supportive norms and values. These allow diverse competencies, opinions, and activities to be integrated and, thus, innovation efforts to be guided.

Keywords: bureaucracy, heterarchy, innovation management, values

Procedia PDF Downloads 189
62 Clinical Efficacy of Localized Salvage Prostate Cancer Reirradiation with Proton Scanning Beam Therapy

Authors: Charles Shang, Salina Ramirez, Stephen Shang, Maria Estrada, Timothy R. Williams

Abstract:

Purpose: Over the past decade, proton therapy utilizing pencil beam scanning has emerged as a preferred treatment modality in radiation oncology, particularly for prostate cancer. This retrospective study aims to assess the clinical and radiobiological efficacy of proton scanning beam therapy in the salvage treatment of localized prostate cancer following initial radiation therapy with a different modality. Despite the previously delivered high radiation doses, this investigation explores the potential of proton reirradiation to control recurrent prostate cancer while limiting detrimental quality-of-life side effects. Methods and Materials: A retrospective analysis was conducted on 45 cases of locally recurrent prostate cancer that underwent salvage proton reirradiation. Patients were followed for 24.6 ± 13.1 months post-treatment. These patients had experienced an average remission of 8.5 ± 7.9 years after definitive radiotherapy for localized prostate cancer (n=41) or post-prostatectomy (n=4), followed by rising PSA levels. Recurrent disease was confirmed by FDG-PET (n=31), PSMA-PET (n=10), or positive local biopsy (n=4). Gross tumor volume (GTV) was delineated based on PET and MR imaging, with the planning target volume (PTV) expanded to an average of 10.9 cm³. Patients received proton reirradiation using two oblique coplanar beams, delivering total doses ranging from 30.06 to 60.00 GyE in 17–30 fractions. All treatments were administered using the ProBeam Compact system with CT image guidance. International Prostate Symptom Scores (IPSS) and prostate-specific antigen (PSA) levels were evaluated to assess treatment-related toxicity and tumor control. Results and Discussion: In this cohort (mean age: 76.7 ± 7.3 years), 60% (27/45) of patients showed sustained reductions in PSA levels post-treatment, while 36% (16/45) experienced a PSA decline of more than 0.8 ng/mL.
Additionally, 73% (33/45) of patients exhibited an initial PSA reduction, though some showed later PSA increases, indicating the potential presence of undetected metastatic lesions. The median post-retreatment IPSS score was 4, significantly lower than scores reported in other treatment studies. Overall, 69% of patients reported mild urinary symptoms, with 96% (43/45) experiencing mild to moderate symptoms. Three patients experienced grade I or II proctitis, while one patient reported grade III proctitis. These findings suggest that regional organs, including the urethra, bladder, and rectum, demonstrate significant radiobiological recovery from prior radiation exposure, enabling tolerance to additional proton scanning beam therapy. Conclusions: This retrospective analysis of 45 patients with recurrent localized prostate cancer treated with salvage proton reirradiation demonstrates favorable outcomes, with a median follow-up of two years. The post-retreatment IPSS scores were comparable to those reported in follow-up studies of initial radiation therapy treatments, indicating stable or improved urinary symptoms compared to the end of initial treatment. These results highlight the efficacy of proton scanning beam therapy in providing effective salvage treatment while minimizing adverse effects on critical organs. The findings also enhance the understanding of radiobiological responses to reirradiation and support proton therapy as a viable option for patients with recurrent localized prostate cancer following previous definitive radiation therapy.

Keywords: prostate salvage radiotherapy, proton therapy, biological radiation tolerance, radiobiology of organs

Procedia PDF Downloads 19
61 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of urban heat islands (UHI) and their interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires a better knowledge of the UHI and micro-climate in urban areas, by combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas in the Lyon Metropolitan Area (France) using a combination of traditionally used data, such as topography, together with LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements collected by bicycle. These bicycle-based weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon’s hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They are of various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, or proximity and density to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple and come from the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. For the presence of low, medium, and high vegetation, buildings, and bare ground, several buffer radii around the measurement points were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m for ground, low, and medium vegetation; 50 m for buildings; and 100 m for high vegetation.
The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (after filtering on the Pearson correlation matrix with |r| < 0.7 and VIF < 5), integrating a stepwise selection algorithm. Moreover, holdout cross-validation (80% training, 20% testing) is performed, due to its ability to detect over-fitting of multiple regression beyond the internal validation that the regression itself provides. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the estimation of air temperature. Other recurrent variables include distance to subway stations, distance to water areas, NDVI, digital elevation model, sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale, in order to consider the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon or in most major urban areas. Therefore, it is necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
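The filtering-plus-regression pipeline described above (drop predictors with pairwise |r| ≥ 0.7 or VIF ≥ 5, fit a multiple linear regression, validate on a held-out 80/20 split) can be sketched as follows. This is an illustrative reconstruction with synthetic data, not the study's code or dataset, and the full stepwise selection is abbreviated to a single forward filtering pass:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's predictors (e.g., surface temperature,
# NDVI, sky view factor); column 1 is deliberately collinear with column 0.
n = 200
X = rng.normal(size=(n, 5))
X[:, 1] = 0.95 * X[:, 0] + rng.normal(scale=0.1, size=n)
y = 20 + 2.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(scale=0.2, size=n)

def vif(Xs, j):
    """Variance inflation factor of column j of Xs regressed on the others."""
    others = np.delete(Xs, j, axis=1)
    A = np.column_stack([np.ones(len(Xs)), others])
    beta, *_ = np.linalg.lstsq(A, Xs[:, j], rcond=None)
    resid = Xs[:, j] - A @ beta
    r2 = 1.0 - resid.var() / Xs[:, j].var()
    return 1.0 / (1.0 - r2)

# Predictor filtering: pairwise Pearson |r| < 0.7 first, then VIF < 5.
keep = []
for j in range(X.shape[1]):
    if any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) >= 0.7 for k in keep):
        continue  # too correlated with an already-kept predictor
    cand = keep + [j]
    if len(keep) == 0 or vif(X[:, cand], len(cand) - 1) < 5:
        keep.append(j)

# Holdout validation: fit OLS on 80% of the points, evaluate RMSE on the rest.
cut = int(0.8 * n)
A = np.column_stack([np.ones(n), X[:, keep]])
beta, *_ = np.linalg.lstsq(A[:cut], y[:cut], rcond=None)
rmse = float(np.sqrt(np.mean((A[cut:] @ beta - y[cut:]) ** 2)))
print(f"kept predictor columns: {keep}, holdout RMSE: {rmse:.2f} degC")
```

On this toy data the collinear duplicate (column 1) is discarded and the holdout RMSE approaches the injected noise level, mirroring the 0.20°C residual error reported for the real model.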

Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 138
60 Intercultural Initiatives and Canadian Bilingualism

Authors: Muna Shafiq

Abstract:

Growth in international immigration reflects increased migration patterns in Canada and in other parts of the world. Canada continues to promote itself as a bilingual country, yet its French-English bilingual population numbers do not support this platform. Each province’s integration policies focus only on second language learning of either English or French. Moreover, since English Canadians outnumber French Canadians, maintaining, much less increasing, English-French bilingualism appears unrealistic. One solution to increasing Canadian bilingualism requires creating intercultural communication initiatives between youth in Quebec and the rest of Canada. Specifically, the focus is on active, experiential learning, where intercultural competencies develop outside traditional classroom settings. The target groups are Generation Y Millennials and Generation Z Linksters, the next generations in the career and parenthood lines. Today, Canada’s education system, like many others, must continually renegotiate lines between programs it offers its immigrant and native communities. While some purists or right-wing nationalists would disagree, the survival of bilingualism in Canada has little to do with reducing immigration. Children and youth immigrants play a valuable role in increasing Canada’s French- and English-speaking communities. For instance, a focus on immersion over core French education programs for immigrant children and youth would not only increase bilingualism rates; it would develop meaningful intercultural attachments between Canadians. Moreover, a sustained increase in funding for French immersion programs is critical, as are new initiatives that focus on experiential language learning for students in French and English language programs.
A favorable argument supports the premise that, alongside French-speaking students in Québec and elsewhere in Canada, second- and third-generation immigrant students are excellent ambassadors to promote bilingualism in Canada. Most already speak another language at home and understand the value of speaking more than one language in their adopted communities. Their dialogue and participation in experiential language exchange workshops are necessary. If the proposed exchanges take place inter-provincially, the momentum to increase collective regional voices increases. This regional collectivity can unite Canadians differently than nation-targeted initiatives. The results from an experiential youth exchange organized in 2017 between students at the crossroads of Generation Y and Generation Z in Vancouver and Quebec City, respectively, offer a promising starting point in assessing the strength of bringing together different regional voices to promote bilingualism. Code-switching between the standard, international French that Vancouver students learn in the classroom and the more regional forms of Quebec French spoken locally created regional connectivity between students. The exchange was equally rewarding for both groups. Increasing their appreciation for each other’s regional differences allowed them to contribute actively to their social and emotional development. Within a sociolinguistic frame, this proposed model of experiential learning does not focus on hands-on work experience. However, the benefits of such exchanges are as valuable as the work experience initiatives developed in experiential education. Students who actively code-switch between French and English in real, not simulated, contexts appreciate bilingualism more meaningfully and experience its value in concrete terms.

Keywords: experiential learning, intercultural communication, social and emotional learning, sociolinguistic code-switching

Procedia PDF Downloads 139
59 Content Analysis of Gucci’s ‘Blackface’ Sweater Controversy across Multiple Media Platforms

Authors: John Mark King

Abstract:

Beginning on Feb. 7, 2019, the luxury brand, Gucci, was met with a firestorm on social media over fashion runway images of its black balaclava sweater, which covered the bottom half of the face and featured large, shiny bright red lips surrounding the mouth cutout. Many observers on social media and in the news media noted the garment resembled racist “blackface.” This study aimed to measure how items were framed across multiple media platforms. The unit of analysis was any headline or lead paragraph published using the search terms “Gucci” and “sweater” or “jumper” or “balaclava” during the one-year timeframe of Feb. 7, 2019, to Feb. 6, 2020. Limitations included headlines and lead paragraphs published in English and indexed in the Lexis/Nexis database. Independent variables were the nation in which the item was published and the platform (newspapers, blogs, web-based publications, newswires, magazines, or broadcast news). Dependent variables were tone toward Gucci (negative, neutral or positive) and frame (blackface/racism/racist, boycott/celebrity boycott, sweater/balaclava/jumper/fashion, apology/pulling the product/diversity initiatives by Gucci or frames unrelated to the controversy but still involving Gucci sweaters) and word count. Two coders achieved 100% agreement on all variables except tone (94.2%) and frame (96.3%). The search yielded 276 items published from 155 sources in 18 nations. The tone toward Gucci during this period was negative (69.9%). Items that were neutral (16.3%) or positive (13.8%) toward the brand were overwhelmingly related to items about other Gucci sweaters worn by celebrities or fashion reviews of other Gucci sweaters. The most frequent frame was apology/pulling the product/diversity initiatives by Gucci (35.5%). The tone was most frequently negative across all continents, including the Middle East (83.3% negative), Asia (81.8%), North America (76.6%), Australia/New Zealand (66.7%), and Europe (59.8%). 
Newspapers/magazines/newswires/broadcast news transcripts (72.4%) were more negative than blogs/web-based publications (63.6%). The most frequent frames used by newspapers/magazines/newswires/broadcast news transcripts were apology/pulling the product/diversity initiatives by Gucci (38.7%) and blackface/racism/racist (26.1%). Blogs/web-based publications most frequently used frames unrelated to the controversial garment but about other Gucci sweaters (42.9%) and apology/pulling the product/diversity initiatives by Gucci (27.3%). Sources in Western nations (34.7%) and Eastern nations (47.1%) most frequently used the frame of apology/pulling the product/diversity initiatives by Gucci. Mean word count was higher for negative items (583.58) than positive items (404.76). Items framed as blackface/racism/racist or boycott/celebrity boycott had a higher mean word count (668.97) than items framed as sweater/balaclava/jumper/fashion or apology/pulling the product/diversity initiatives by Gucci (498.22). The author concluded that Gucci’s image was likely damaged during the year-long period by the near-universally negative items published about the garment at the center of the controversy, but that the prevalence of frames about Gucci’s apology, withdrawal of the product, and diversity initiatives, along with items about other Gucci sweaters worn by celebrities and fashion reviews of other Gucci sweaters, may have mitigated the damage to the brand.
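The intercoder reliability figures quoted above (100% agreement on most variables, 94.2% on tone, 96.3% on frame) are percent agreement: the share of items both coders assigned the same category. A minimal sketch of that calculation, using made-up tone codes rather than the study's data:

```python
def percent_agreement(coder_a, coder_b):
    """Share (%) of items to which both coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical tone codes for ten items (negative/neutral/positive).
tone_a = ["neg", "neg", "neu", "pos", "neg", "neg", "neu", "neg", "pos", "neg"]
tone_b = ["neg", "neg", "neu", "pos", "neg", "pos", "neu", "neg", "pos", "neg"]
print(percent_agreement(tone_a, tone_b))  # one disagreement in ten -> 90.0
```

Content-analysis studies often supplement raw percent agreement with a chance-corrected statistic such as Cohen's kappa, since agreement by chance alone inflates the raw figure when one category (here, negative tone) dominates.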

Keywords: Blackface, branding, Gucci, media framing

Procedia PDF Downloads 149
58 Comparative Characteristics of Bacteriocins from Endemic Lactic Acid Bacteria

Authors: K. Karapetyan, F. Tkhruni, A. Aghajanyan, T. S. Balabekyan, L. Arstamyan

Abstract:

Introduction: Globalization of the food supply has created conditions favorable for the emergence and spread of food-borne and especially dangerous pathogens (EDP) in developing countries. The fresh-cut fruit and vegetable industry is searching for alternatives to replace chemical treatments with biopreservative approaches that ensure the safety of processed food products. Antimicrobial compounds of lactic acid bacteria (LAB) possess bactericidal or bacteriostatic activity against intestinal pathogens, spoilage organisms, and food-borne pathogens such as Listeria monocytogenes, Staphylococcus aureus and Salmonella. Endemic strains of LAB were isolated. The strains showing a broad spectrum of antimicrobial activity against food-spoiling microorganisms were selected. Genotyping by 16S rRNA sequencing, GS-PCR, and RAPD-PCR methods showed that they represented the species Lactobacillus rhamnosus 109, L. plantarum 65, L. plantarum 66 and Enterococcus faecium 64. The LAB are deposited in the "Microbial Depository Center" (MDC) SPC "Armbiotechnology". Methods: LAB strains were isolated from different dairy products from rural households in the highland regions of Armenia. Serially diluted samples were spread on MRS (Merck, Germany) and hydrolyzed milk agar (1.2% w/v). Single colonies of each LAB were individually inoculated in liquid MRS medium and incubated at 37 °C for 24 hours. Culture broth with biomass was centrifuged at 10,000 g for 20 min to obtain cell-free culture (CFC) broth. The antimicrobial substances from CFC broth were purified by a combination of adsorption-desorption and ion-exchange chromatography methods. Separation of bacteriocins was performed using an HPLC method on an "Avex ODS" C18 column. Mass analysis of peptides was recorded on an API 4000 instrument in electron ionization mode. The spot-on-lawn method, with the test culture plated in solid medium, was applied. The antimicrobial activity is expressed in arbitrary units (AU/ml). Results:
Purification of the CFC broth of LAB yielded partially purified antimicrobial preparations containing bacteriocins with a broad spectrum of antimicrobial activity. Investigation of their main biochemical properties showed that the inhibitory activity of the preparations is partially reduced after treatment with proteinase K, trypsin, and pepsin, suggesting a proteinaceous nature of the bacteriocin-like substances contained in the CFC broth. The preparations preserved their activity after heat treatment (50–121 °C, 20 min) and were stable in the pH range 3–8. The results of SDS-PAGE show that the L. plantarum 66 and Ent. faecium 64 strains each have one bacteriocin (BCN) with maximal antimicrobial activity and an approximate molecular weight of 2.0–3.0 kDa. From L. rhamnosus 109, two BCNs were obtained. Mass spectral analysis indicates that these bacteriocins have peptide bonds and that the molecular weights of BCN 1 and BCN 2 are approximately 1.5 kDa and 700 Da. Discussion: Thus, our experimental data showed that the isolated endemic strains of LAB are able to produce bacteriocins with high and differing inhibitory activity against a broad spectrum of microorganisms of different taxonomic groups, such as Salmonella sp., Escherichia coli, Bacillus sp., L. monocytogenes, Proteus mirabilis, Staph. aureus, and Ps. aeruginosa. The obtained results prove the prospects for the use of endemic strains in the preservation of foodstuffs. Acknowledgments: This work was realized with the financial support of the Global Initiatives for Proliferation Prevention (GIPP) Project T2-298, ISTC A-1866.
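The arbitrary units (AU/ml) used above are conventionally derived from a twofold critical-dilution assay: AU/ml is the reciprocal of the highest dilution that still produces an inhibition zone, scaled to 1 ml. A sketch of that arithmetic, where the dilution series and spot volume are illustrative rather than the authors' exact protocol:

```python
def activity_au_per_ml(inhibition, spot_volume_ul=10.0):
    """AU/ml from a twofold critical-dilution spot-on-lawn assay.

    inhibition[i] is True if the 2**i-fold dilution (index 0 = undiluted)
    still produces a clear inhibition zone on the indicator lawn.
    """
    highest = -1
    for i, inhibited in enumerate(inhibition):
        if not inhibited:
            break          # the critical dilution has been passed
        highest = i
    if highest < 0:
        return 0.0         # no inhibition even when undiluted
    # Reciprocal of the highest inhibitory dilution, scaled to 1 ml.
    return (2 ** highest) * (1000.0 / spot_volume_ul)

# Example: inhibition zones up to the 1:8 dilution, 10 ul spotted per lawn.
print(activity_au_per_ml([True, True, True, True, False]))  # 8 * 100 = 800.0
```

This makes explicit why AU/ml values are only comparable between preparations assayed with the same dilution factor and spot volume.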

Keywords: antimicrobial activity, bacteriocins, endemic strains, food safety

Procedia PDF Downloads 562
57 Isolation of Bacterial Species with Potential Capacity for Siloxane Removal in Biogas Upgrading

Authors: Ellana Boada, Eric Santos-Clotas, Alba Cabrera-Codony, Maria Martin, Lluis Baneras, Frederic Gich

Abstract:

Volatile methylsiloxanes (VMS) are a group of man-made silicone compounds widely used in household and industrial applications that end up in the biogas produced through the anaerobic digestion of organic matter in landfills and wastewater treatment plants. The presence of VMS during biogas energy conversion can cause damage to the engines, reducing the efficiency of this renewable energy source. Non-regenerative adsorption onto activated carbon is the most widely used technology to remove siloxanes from biogas, while new trends point out that biotechnology offers a low-cost and environmentally friendly alternative to conventional technologies. The first objective of this research was to enrich, isolate, and identify bacterial species able to grow using siloxane molecules as a sole carbon source: anoxic wastewater sludge was used as the initial inoculum in liquid anoxic enrichments, adding D4 (as a representative siloxane compound) previously adsorbed on activated carbon. After several months of acclimatization, liquid enrichments were plated onto solid media containing D4, and thirty-four bacterial isolates were obtained. 16S rRNA gene sequencing allowed the identification of strains belonging to the following species: Ciceribacter lividus, Alicycliphilus denitrificans, Pseudomonas aeruginosa, and Pseudomonas citronellolis, which are described as capable of degrading toxic volatile organic compounds. Kinetic assays with 8 representative strains revealed higher cell growth in the presence of D4 compared to the control. Our second objective was to characterize the composition and diversity of the microbial community present in the enrichments and to elucidate whether the isolated strains were representative members of the community or not. DNA samples were extracted, the 16S rRNA gene was amplified (515F & 806R primer pair), and the microbiome was analyzed from sequences obtained on a MiSeq PE250 platform.
Results showed that the retrieved isolates represented only a minor fraction of the microorganisms present in the enrichment samples, which were dominated by Alpha-, Beta-, and Gammaproteobacteria at the class level, thus suggesting that other microbial species and/or consortia may be important for D4 biodegradation. These results highlight the need for additional protocols for the isolation of relevant D4 degraders. Currently, we are developing molecular tools targeting key genes involved in siloxane biodegradation to identify and quantify the capacity of the isolates to metabolize D4 in batch cultures supplied with a synthetic gas stream of air containing 60 mg m⁻³ of D4 together with other volatile organic compounds found in the biogas mixture (e.g., toluene, hexane, and limonene). The isolates were used as inoculum in a biotrickling filter containing lava rocks and activated carbon to assess their capacity for siloxane removal. Preliminary results of biotrickling filter performance showed 35% siloxane biodegradation at a contact time of 14 minutes, denoting that biological siloxane removal is a promising technology for biogas upgrading.
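Biotrickling filter performance of the kind reported above is usually summarized by removal efficiency and elimination capacity at a given empty-bed residence time. A sketch of that arithmetic using the quoted figures (60 mg m⁻³ inlet D4, 35% removal, 14 min contact time); the bed volume is a hypothetical normalizing assumption, not a value from the study:

```python
def filter_performance(c_in, c_out, ebrt_min, bed_volume_m3=1.0):
    """Removal efficiency (%) and elimination capacity (mg m^-3 h^-1)."""
    removal = 100.0 * (c_in - c_out) / c_in
    flow_m3_h = bed_volume_m3 / (ebrt_min / 60.0)     # Q = V / EBRT
    ec = flow_m3_h * (c_in - c_out) / bed_volume_m3   # EC = Q (Cin - Cout) / V
    return removal, ec

# 60 mg/m3 D4 at the inlet, 35% removed at a 14-minute contact time.
removal, ec = filter_performance(c_in=60.0, c_out=39.0, ebrt_min=14.0)
print(f"removal: {removal:.0f}%, elimination capacity: {ec:.0f} mg m^-3 h^-1")
```

Note that because elimination capacity is normalized by bed volume, it is independent of the assumed volume here; only the inlet/outlet concentrations and the residence time matter.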

Keywords: bacterial cultivation, biogas upgrading, microbiome, siloxanes

Procedia PDF Downloads 259
56 Challenging Airway Management for Tracheal Compression Due to a Rhabdomyosarcoma

Authors: Elena Parmentier, Henrik Endeman

Abstract:

Introduction: Large mediastinal masses often present diagnostic and clinical challenges due to compression of the respiratory and hemodynamic systems. We present a case of a mediastinal mass with symptomatic mechanical compression of the trachea, resulting in challenging airway management. Methods: A 66-year-old male presented with progressive dysphagia. Initial esophagogastroscopy revealed a stenosis secondary to external compression; biopsies were inconclusive. An additional CT scan showed a large mediastinal mass of unknown origin, situated between the vertebrae and the esophagus. Symptoms progressed, and the patient developed dyspnea and stridor. A new CT showed rapid growth of the mass with compression of the trachea, from subglottic to just above the carina. A covered tracheal stent was successfully placed. Endobronchial ultrasound revealed a large irregular mass without tracheal invasion; biopsies were taken. Four days after stent placement, the patient’s condition deteriorated with worsening stridor, dyspnea, and desaturation. Migration of the tracheal stent into the right main bronchus was seen on chest X-ray, with obstruction of the left main bronchus and secondary atelectasis. Different methods have been described in the literature for tracheobronchial stent removal (surgical, endoscopic, fluoroscopy-guided); our first choice in this case was flexible bronchoscopy. However, this revealed tracheal compression above the migrated stent, and passage of the scope proved impossible. The patient was admitted to the ICU, high-flow nasal oxygen therapy was started, and the situation stabilized, giving time for extensive assessment and preparation of the airway management approach. Close cooperation between the intensivist, pulmonologist, anesthesiologist, and otorhinolaryngologist was essential. Results: In case of sudden deterioration, a protocol for emergency situations was made.
Given the increased risk of additional tracheal compression after administration of neuromuscular blocking agents, an approach with awake fiberoptic intubation maintaining spontaneous ventilation was proposed. However, intubation without retrieval of the tracheal stent was considered undesirable due to the expected massive shunting over the left atelectatic lung. As a rescue option, the assistance of extracorporeal circulation was considered, and a perfusionist was kept on standby. The patient remained stable and was transferred to the operating theatre. High-frequency jet ventilation under general anesthesia resulted in desaturations down to 50%, making rigid bronchoscopy impossible. Subsequently, a size 8 endotracheal tube was placed successfully, and the stent could be retrieved via bronchoscopy over (and with) the tube, after which the patient was reintubated. Finally, a tracheostomy (Shiley™ Tracheostomy Tube With Cuff, size 8) was placed; fiberoptic control showed a patent airway. The patient was readmitted to the ICU and was quickly weaned off the ventilator. Pathology was positive for rhabdomyosarcoma, without indication for systemic therapy. Extensive surgery (laryngectomy, esophagectomy) was suggested, but the patient refused, and palliative care was started. Conclusion: Thanks to meticulous planning in an interdisciplinary team, we achieved a successful airway management approach in this complicated case of critical airway compression secondary to a rare rhabdomyosarcoma, complicated by tracheal stent migration. Besides presenting our thoughts and considerations, we support exploring other possible approaches to this specific clinical problem.

Keywords: airway management, rhabdomyosarcoma, stent displacement, tracheal stenosis

Procedia PDF Downloads 106
55 Guard@Lis: Birdwatching Augmented Reality Mobile Application

Authors: Jose A. C. Venancio, Alexandrino J. M. Goncalves, Anabela Marto, Nuno C. S. Rodrigues, Rita M. T. Ascenso

Abstract:

Nowadays, it is common to find people who want to get away from the everyday routine, seeking well-being and pleasant emotions. Trying to disconnect from their usual places of work and residence, they pursue different places, such as tourist destinations, aiming to have unexpected experiences. To make this exploration process easier, cities and tourism agencies seek new opportunities and solutions, creating routes with diverse cultural landmarks, including natural landscapes and historic buildings. These offers frequently aspire to the preservation of the local patrimony. In nature and wildlife, birdwatching is an activity that has been increasing, both in cities and in the countryside. This activity seeks to find, observe, and identify the diversity of birds that live permanently or temporarily in these places, and it is usually supported by birdwatching guides. Leiria (Portugal) is a well-known city, presenting several historical and natural landmarks, like the Lis river and the castle where King D. Dinis lived in the 13th century. Along the Lis River, a conservation process was carried out and a pedestrian route was created (Polis project). This is considered an excellent spot for birdwatching, especially for the gray heron (Ardea cinerea) and the kingfisher (Alcedo atthis). There is also a route through the city, from the riverside to the castle, which hosts a characteristic variety of species, such as the barn swallow (Hirundo rustica), known for passing through during different seasons of the year. Birdwatching is sometimes a difficult task, since it is not always possible to see all the bird species that inhabit a given place. For this reason, a technological solution was needed to ease this activity. This project aims to encourage people to learn about the various species of birds that live along the Lis River and to promote the preservation of nature in a conscious way.
This work is being conducted in collaboration with the Leiria Municipal Council and the Environmental Interpretation Centre. It intends to show the majesty of the Lis River, a place visited daily by many people, such as children and families, who use it for didactic and recreational activities. We are developing a multi-platform mobile application (Guard@Lis) that allows bird species to be observed along a given route, using representative digital 3D models through the integration of augmented reality technologies. Guard@Lis displays a route with points of interest for birdwatching and a list of species for each point of interest, along with scientific information, images, and sounds for every species. For some birds, to ensure their observation, the user can watch them in their real and natural environment with their mobile device by means of augmented reality, giving the sensation of the presence of these birds even if they cannot be seen in that place at that moment. The augmented reality feature is being developed with the Vuforia SDK, using a hybrid approach to the recognition and tracking processes, combining markers and geolocation techniques. The application proposes routes and notifies users with alerts about the possibility of viewing augmented reality bird models. The final Guard@Lis prototype will be tested by volunteers in situ.

Keywords: augmented reality, birdwatching route, mobile application, nature tourism, watch birds using augmented reality

Procedia PDF Downloads 178
54 Integration of Building Information Modeling Framework for 4D Constructability Review and Clash Detection Management of a Sewage Treatment Plant

Authors: Malla Vijayeta, Y. Vijaya Kumar, N. Ramakrishna Raju, K. Satyanarayana

Abstract:

The global AEC (architecture, engineering, and construction) industry has been coined as one of the domains most resistant to embracing technology. Although this digital era has been inundated with software tools like CAD, STAAD, CANDY, Microsoft Project, Primavera, etc., the key stakeholders have been working in silos and processes remain fragmented. Unlike the simpler project delivery methods of yesteryear, current projects are fast-track, complex, risky, multidisciplinary, influenced by many stakeholders, and statutorily regulated, posing extensive bottlenecks that prevent the timely completion of projects. At this juncture, a paradigm shift has surfaced in the construction industry, and Building Information Modeling, aka BIM, has been a panacea to bolster multidisciplinary teams’ cooperative and collaborative work, leading to productive, sustainable, and leaner project outcomes. Building information modeling is an integrative, stakeholder-engaging, and centralized approach that provides a common platform of communication. A common misconception in the Indian construction industry is that BIM can be used only for building/high-rise projects; this paper instead discusses the implementation of BIM processes/methodologies in the water and wastewater industry. It elucidates BIM 4D planning and constructability reviews of a Sewage Treatment Plant in India. Conventional construction planning and logistics management involve a blend of experience coupled with imagination. Even though the excerpts, judgments, or lessons learnt from veterans might be predictive and helpful, the uncertainty factor persists. This paper delves into a case study of the real-time implementation of BIM 4D planning protocols for one of the Sewage Treatment Plants of the Dravyavati River Rejuvenation Project in India and develops a TimeLiner to identify logistics planning and clash detection.
With these BIM processes, we find a significant reduction in the duplication of tasks and in rework. Another benefit is better visualization and workarounds during the conception stage, enabling the early involvement of stakeholders in the project life cycle of Sewage Treatment Plant construction. Moreover, we have also taken an opinion poll on the benefits accrued by utilizing BIM processes versus traditional paper-based communication and 2D/3D CAD tools. This paper thus concludes with a BIM framework for Sewage Treatment Plant construction which achieves optimal construction coordination advantages like 4D construction sequencing, interference checking, and clash detection and resolution through the primary engagement of all key stakeholders, thereby identifying potential risks and creating subsequent risk response strategies. However, certain hiccups, like hesitancy in the adoption of BIM technology by naïve users and the limited availability of proficient BIM trainers in India, pose a phenomenal impediment. Hence, nurturing BIM processes from conception and construction through commissioning, operation and maintenance, and finally the deconstruction of a project’s life cycle is highly essential for the Indian construction industry in this digital era.

Keywords: integrated BIM workflow, 4D planning with BIM, building information modeling, clash detection and visualization, constructability reviews, project life cycle

Procedia PDF Downloads 122
53 Organic Tuber Production Fosters Food Security and Soil Health: A Decade of Evidence from India

Authors: G. Suja, J. Sreekumar, A. N. Jyothi, V. S. Santhosh Mithra

Abstract:

Worldwide concerns regarding food safety, environmental degradation, and threats to human health have generated interest in alternative systems like organic farming. Tropical tuber crops, cassava, sweet potato, yams, and aroids are food-cum-nutritional-security-cum-climate-resilient crops. These form staple or subsidiary foods for about 500 million people globally. Cassava, yams (white yam, greater yam, and lesser yam) and edible aroids (elephant foot yam, taro, and tannia) are high-energy tuberous vegetables with good taste and nutritive value. Seven on-station field experiments at ICAR-Central Tuber Crops Research Institute, Thiruvananthapuram, India, and seventeen on-farm trials in three districts of Kerala were conducted over a decade (2004-2015) to compare the varietal response, yield, quality, and soil properties under organic vs. conventional systems in these crops and to develop a learning system based on the data generated. The industrial as well as domestic varieties of cassava, the elite and local varieties of elephant foot yam and taro, and the three species of Dioscorea (yams) were on a par under both systems. Organic management promoted yield by 8%, 20%, 9%, 11% and 7% over conventional practice in cassava, elephant foot yam, white yam, greater yam, and lesser yam, respectively. Elephant foot yam was the most responsive to organic management, followed by yams and cassava. In taro, a slight yield reduction (5%) was noticed under organic farming, with almost similar tuber quality. Tuber quality was improved, with higher dry matter, starch, crude protein, K, Ca, and Mg contents. The anti-nutritional factors, oxalate content in elephant foot yam and cyanogenic glucoside content in cassava, were lowered by 21% and 12.4%, respectively. Organic plots had significantly higher water holding capacity, pH, available K, Fe, Mn, and Cu, higher soil organic matter, available N, P, exchangeable Ca and Mg, dehydrogenase enzyme activity, and microbial count.
Organic farming scored a significantly higher soil quality index (1.93) than conventional practice (1.46). The soil quality index was driven by water holding capacity, pH, and available Zn, followed by soil organic matter. Organic management enhanced net profit by 20-40% over chemical farming. A case in point is the cost-benefit analysis in elephant foot yam, which indicated that net profit was 28% higher and an additional income of Rs. 47,716 ha⁻¹ was obtained due to organic farming. Cost-effective technologies were field validated. The on-station technologies developed were validated and popularized through on-farm trials in 10 sites (5 ha) under a National Horticulture Mission funded programme in elephant foot yam and at seven sites in yams and taro. The technologies are included in the Package of Practices Recommendations for crops of Kerala Agricultural University. A learning system developed using artificial neural networks (ANN) predicted the performance of the elephant foot yam organic system. Use of organically produced seed material; seed treatment in cow dung, neem cake, and bio-inoculant slurry; farmyard manure incubated with bio-inoculants; green manuring; and use of neem cake, bio-fertilizers, and ash formed the strategies for organic production. Organic farming is an eco-friendly management strategy that enables 10-20% higher yield, quality tubers, and maintenance of soil health in tuber crops.
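A soil quality index of the kind reported above (1.93 vs. 1.46) is typically computed as a weighted sum of normalized indicator scores. The following is a minimal sketch of that style of calculation; the indicator values, scoring ranges, and weights are purely illustrative and are not taken from the study:

```python
# Hedged sketch of a weighted additive soil quality index (SQI).
# All numbers below are illustrative placeholders, not the study's data.

def score(value, lo, hi):
    """Linearly normalize an indicator onto [0, 1] ('more is better')."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def soil_quality_index(indicators, ranges, weights, scale=3.0):
    """SQI = scale * sum(w_i * s_i), with the weights summing to 1."""
    total = sum(weights[k] * score(indicators[k], *ranges[k]) for k in indicators)
    return scale * total

# Illustrative indicators: the abstract names water holding capacity (whc),
# pH, available Zn, and soil organic matter (som) as the main SQI drivers.
ranges = {"whc": (20, 60), "ph": (4.5, 7.5), "zn": (0.5, 3.0), "som": (0.5, 3.5)}
weights = {"whc": 0.35, "ph": 0.25, "zn": 0.20, "som": 0.20}

organic = {"whc": 48.0, "ph": 6.4, "zn": 2.1, "som": 2.6}
conventional = {"whc": 36.0, "ph": 5.6, "zn": 1.2, "som": 1.7}

print(round(soil_quality_index(organic, ranges, weights), 2))
print(round(soil_quality_index(conventional, ranges, weights), 2))
```

With these placeholder inputs, the organic plot scores higher, mirroring the direction (though not the exact values) of the reported result.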

Keywords: eco-agriculture, quality, root crops, healthy soil, yield

Procedia PDF Downloads 338
52 Near-Peer Mentoring/Curriculum and Community Enterprise for Environmental Restoration Science

Authors: Lauren B. Birney

Abstract:

The BOP-CCERS (Billion Oyster Project - Curriculum and Community Enterprise for Restoration Science) Near-Peer Mentoring Program provides a long-term (five-year) support network to motivate and guide students toward restoration-science-based CTE pathways. Students are selected from middle schools with actively participating BOP-CCERS teachers. Teachers nominate students from grades 6-8 to join cohorts of between 10 and 15 students each. Cohorts are composed primarily of students from the same school in order to facilitate mentors' travel logistics as well as to sustain connections with students and their families. Each cohort is matched with an exceptional undergraduate or graduate student, either a BOP research associate or a STEM mentor recruited from collaborating City University of New York (CUNY) partner programs. In rare cases, an exceptional high school junior or senior may be matched with a cohort in addition to a research associate or graduate student. In no case will a high school student or minor be placed individually with a cohort. Mentors meet with students at least once per month and provide at least one offsite field visit per month, either to a local STEM Hub or a research lab. In keeping with its five-year trajectory, the near-peer mentoring program seeks to retain students in the same cohort with the same mentor for the full duration of middle school and for at least two additional years of high school. Upon students reaching the final quarter of 8th grade, the mentor develops a meeting plan for each individual mentee. The mentee and the mentor are required to meet individually or in small groups once per month. Once per quarter, individual meetings are replaced by full-cohort professional outings, in which the mentor organizes the entire cohort for a field visit or an educational workshop with a museum or aquarium partner.
In addition to the mentor-mentee relationship, each participating student is also asked to conduct and present his or her own BOP field research. This research is ideally carried out with the support of the student's regular high school STEM subject teacher; however, in cases where the teacher or school does not permit independent study, the student is asked to conduct the research on an extracurricular basis. Near-peer mentoring shapes students' social identities and helps them connect to role models from similar groups, ultimately giving them a sense of belonging. Qualitative and quantitative analyses, including interviews and focus groups, were performed throughout the study. Additionally, an external evaluator was engaged to ensure project efficacy, efficiency, and effectiveness throughout the entire project. The BOP-CCERS Near-Peer Mentoring program is a peer support network in which high school students with interest or experience in BOP (Billion Oyster Project) topics and activities (such as classroom oyster tanks, STEM Hubs, or digital platform research) provide mentorship and support for middle school or high school freshman mentees. Peer mentoring not only empowers the students being taught but also increases the content knowledge and engagement of the mentors. This support provides the necessary resources, structure, and tools to assist students in finding success.

Keywords: STEM education, environmental science, citizen science, near peer mentoring

Procedia PDF Downloads 92
51 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal-axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adapted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbine design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the power coefficient profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions needed to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations.
Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
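The optimization loop described above, which weights turbine performance by a fitted wind-speed distribution and searches blade parameters with a genetic algorithm, can be sketched as follows. This is not the authors' code: Heier's well-known empirical power-coefficient formula stands in for a full Blade Element Momentum solve, the Weibull parameters and fixed tip speed are assumptions, and a single pitch angle replaces the full chord/pitch distribution:

```python
import math
import random

random.seed(42)  # deterministic run for reproducibility

def power_coeff(lam, beta):
    """Cp(tip-speed ratio, pitch in degrees) via Heier's empirical
    approximation; an illustrative surrogate for the BEM equations."""
    inv_li = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0)
    li = 1.0 / inv_li
    return max(0.0, 0.5176 * (116.0 / li - 0.4 * beta - 5.0)
               * math.exp(-21.0 / li) + 0.0068 * lam)

def weibull_pdf(v, k=2.0, c=6.0):
    """Assumed rooftop wind-speed distribution (shape k, scale c in m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

TIP_SPEED = 40.0  # omega * R in m/s; fixed rotor speed (assumption)

def fitness(beta):
    """Expected Cp over the wind-speed range, weighted by the Weibull pdf."""
    dv, total, v = 0.25, 0.0, 3.0
    while v <= 15.0:
        total += weibull_pdf(v) * power_coeff(TIP_SPEED / v, beta) * dv
        v += dv
    return total

def optimize_pitch(pop_size=30, generations=40):
    """Simple real-coded GA: elitism, blend crossover, Gaussian mutation."""
    pop = [random.uniform(0.0, 20.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:6]                      # keep the best unchanged
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)   # parents from the elite
            w = random.random()              # blend crossover weight
            child = w * a + (1.0 - w) * b + random.gauss(0.0, 0.5)
            children.append(min(20.0, max(0.0, child)))  # clamp to bounds
        pop = elite + children
    return max(pop, key=fitness)

best_pitch = optimize_pitch()
print(round(best_pitch, 2), round(fitness(best_pitch), 4))
```

In the actual project the fitness evaluation would invoke the BEM solve on XFoil-generated airfoil data and the chromosome would encode the radial chord and pitch distributions, but the selection-crossover-mutation structure is the same.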

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 232
50 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. Humans are thus exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates differ by an order of magnitude for the contribution induced by certain solar particle events. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was of the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both the Earth and Mars, the latter using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) describing extensive air shower characteristics and allowing assessment of the ambient dose equivalent. In this approach, the GCR description is based on the force-field approximation model. The physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rates of secondary particles induced by extensive showers, considering altitudes ranging from the ground to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient dose equivalent conversion coefficients.
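The force-field approximation mentioned above has a compact closed form: the modulated spectrum is the local interstellar spectrum (LIS) evaluated at an energy shifted by the modulation potential and rescaled by a ratio of relativistic momentum terms. The following is a minimal sketch using one published LIS parameterization (the Burger et al. form); the modulation potentials are illustrative, and none of this is ATMORAD's actual implementation:

```python
import math

E0 = 0.938  # proton rest energy, GeV

def rigidity(T):
    """Rigidity (GV) of a proton with kinetic energy T (GeV)."""
    return math.sqrt(T * (T + 2.0 * E0))

def j_lis(T):
    """LIS proton flux, Burger et al. parameterization;
    units: particles / (m^2 sr s MeV)."""
    P = rigidity(T)
    return 1.9e4 * P**-2.78 / (1.0 + 0.4866 * P**-2.51)

def j_modulated(T, phi):
    """Force-field approximation: J(T) = J_LIS(T + phi) *
    T(T + 2*E0) / ((T + phi)(T + phi + 2*E0)), phi in GV for protons."""
    return j_lis(T + phi) * T * (T + 2.0 * E0) / ((T + phi) * (T + phi + 2.0 * E0))

# Illustrative modulation potentials: quiet (0.4 GV) vs. active (1.0 GV) sun
for T in (0.1, 1.0, 10.0):
    print(T, j_modulated(T, 0.4), j_modulated(T, 1.0))
```

The qualitative behavior matches the physics the abstract relies on: stronger solar modulation (larger phi) suppresses the low-energy GCR flux, while the high-energy spectrum is barely affected.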
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a large statistical sample of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans, with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude) and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
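The final step the abstract describes, converting a secondary-particle spectral fluence rate into an ambient dose equivalent rate, reduces to folding the spectrum with fluence-to-dose conversion coefficients and integrating over energy. A minimal sketch with placeholder numbers (the grid, spectrum, and coefficients below are illustrative, not tabulated ICRP values or ATMORAD output):

```python
# Hedged sketch: H*(10) rate = integral over E of h(E) * dPhi/dE.
# All numeric values are illustrative placeholders.

def ambient_dose_rate(energies, spectral_fluence, h_coeff):
    """Trapezoidal integral of h(E) * dPhi/dE over the energy grid.
    energies in MeV, spectral_fluence in cm^-2 s^-1 MeV^-1,
    h_coeff in pSv cm^2 -> result in pSv/s."""
    total = 0.0
    for i in range(len(energies) - 1):
        dE = energies[i + 1] - energies[i]
        f0 = h_coeff[i] * spectral_fluence[i]
        f1 = h_coeff[i + 1] * spectral_fluence[i + 1]
        total += 0.5 * (f0 + f1) * dE  # trapezoid on each segment
    return total

# Coarse illustrative neutron energy grid (MeV) and placeholder values
E = [1.0, 10.0, 100.0, 1000.0]
phi = [2.0e-3, 8.0e-4, 1.5e-4, 1.0e-5]  # placeholder spectrum
h = [400.0, 440.0, 500.0, 600.0]         # placeholder pSv*cm^2

print(ambient_dose_rate(E, phi, h))
```

In a real calculation this sum would run over every secondary species (neutrons, protons, muons, electrons, photons) on a fine energy grid, at each point of the flight trajectory, which is why the route path and the event phasing drive the dose so strongly.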

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 206