Search results for: monitoring interface
81 Blood Thicker Than Water: A Case Report on Familial Ovarian Cancer
Authors: Joanna Marie A. Paulino-Morente, Vaneza Valentina L. Penolio, Grace Sabado
Abstract:
Ovarian cancer is extremely hard to diagnose in its early stages, when patients are typically asymptomatic; most are therefore diagnosed in the late stages of the disease, with metastasis to other organs. Ovarian cancers usually occur sporadically, with only 5% associated with hereditary mutations. Mutations in the BRCA1 and BRCA2 tumor suppressor genes have been found to be responsible for the majority of hereditary ovarian cancers. One type of ovarian tumor is the Malignant Mixed Mullerian Tumor (MMMT), a very rare and aggressive type accounting for only 1% of all ovarian cancers. Reported is a case of a 43-year-old G3P3 (3003) who came to our institution due to a 2-month history of difficulty breathing. Family history revealed that her eldest and younger sisters both died of ovarian malignancy, the younger sister with a histopathology report of endometrioid ovarian carcinoma, left ovary, stage IIIb. She still has 2 asymptomatic sisters. Physical examination pointed to a pleural effusion of the right lung and the presence of bilateral ovarian new growths, with a Sassone score of 13. The admitting diagnosis was G3P3 (3003), Ovarian New Growth, bilateral, malignant; pleural effusion secondary to malignancy. BRCA testing was requested to establish a hereditary mutation; however, the patient lacked the funds. Once the patient was stabilized, TAHBSO with surgical staging was performed. Intraoperatively, the pelvic cavity was occupied by firm, irregularly shaped ovaries, with a colorectal metastasis. Microscopic sections from both ovaries and the colorectal metastasis showed pleomorphic tumor cells in glands lined by cuboidal to columnar epithelium exhibiting glandular complexity, nuclear atypia, and an increased nuclear-cytoplasmic ratio, infiltrating the stroma, consistent with the features of Malignant Mixed Mullerian Tumor, which is composed histologically of malignant epithelial and sarcomatous elements.
In conclusion, discussed are the clinicopathological features of a patient with primary ovarian Malignant Mixed Mullerian Tumor, a rare malignancy comprising only 1% of all ovarian neoplasms. Understanding the hereditary ovarian cancer syndromes and their relation to this patient underscores that a comprehensive family history is fundamental for early diagnosis. The familial association of the disease, given that the patient has two sisters who were diagnosed with advanced-stage ovarian cancer and succumbed to the disease at a much earlier age than reported in the general population, points to a possible hereditary syndrome, which occurs in only 5% of ovarian neoplasms. In a low-resource, third-world setting, the following are recommended for monitoring and/or screening women at high risk of developing ovarian cancer, such as the patient's remaining sisters: 1) physical examination focusing on the breast, abdomen, and rectal area every 6 months; 2) transvaginal sonography every 6 months; 3) mammography annually; 4) CA125 for postmenopausal women; 5) genetic testing for BRCA1 and BRCA2, reserved for those who are financially capable.
Keywords: BRCA, hereditary breast-ovarian cancer syndrome, malignant mixed mullerian tumor, ovarian cancer
Procedia PDF Downloads 289
80 Upflow Anaerobic Sludge Blanket Reactor Followed by Dissolved Air Flotation Treating Municipal Sewage
Authors: Priscila Ribeiro dos Santos, Luiz Antonio Daniel
Abstract:
Inadequate access to clean water and sanitation has become one of the most widespread problems affecting people throughout the developing world, leading to an ongoing need for low-cost, sustainable wastewater treatment systems. UASB technology has been widely employed as a suitable and economical option for sewage treatment in developing countries, offering low initial investment, low energy requirements, low operation and maintenance costs, high loading capacity, short hydraulic retention times, long solids retention times, and low sludge production. Dissolved air flotation, in turn, is a good option for the post-treatment of anaerobic effluents, being capable of producing high-quality effluents in terms of total suspended solids, chemical oxygen demand, phosphorus, and even pathogens. This work presents a 6-month evaluation and monitoring of one compact full-scale system with this configuration, UASB reactors followed by dissolved air flotation (DAF) units, operating in Brazil. The system proved successful, and the study is relevant because DAF treatment of UASB reactor effluents is not widely covered in the literature. The study covered the removal and behavior of several variables, such as turbidity, total suspended solids (TSS), chemical oxygen demand (COD), Escherichia coli, total coliforms, and Clostridium perfringens. The physicochemical variables were analyzed according to the protocols established by the Standard Methods for the Examination of Water and Wastewater. For the microbiological variables Escherichia coli and total coliforms, the "pour plate" technique was used with Chromocult Coliform Agar (Merck Cat. No. 1.10426) as the culture medium, while Clostridium perfringens was analyzed by the membrane filtration technique, with m-CP Agar (Oxoid Ltd., England) as the culture medium.
Approximately 74% of total COD was removed in the UASB reactor, and the complementary removal achieved during the flotation process brought overall COD removal from raw sewage to 88%; the initial COD concentration of 729 mg.L-1 decreased to 87 mg.L-1. In terms of particulate COD, the overall removal efficiency for the whole system was about 94%, decreasing from 375 mg.L-1 in raw sewage to 29 mg.L-1 in the final effluent. The UASB reactor removed on average 77% of the TSS from raw sewage, while the dissolved air flotation process did not work as expected, removing only 30% of TSS from the anaerobic effluent. The final effluent presented an average TSS concentration of 38 mg.L-1. Turbidity was significantly reduced, with an overall removal efficiency of 80% and a final turbidity of 28 NTU. The treated effluent still presented a high concentration of fecal pollution indicators (E. coli, total coliforms, and Clostridium perfringens), showing that the system did not perform well in removing pathogens. Clostridium perfringens was the organism that underwent the greatest removal by the treatment system. The results can be considered satisfactory for the physicochemical variables, given the simplicity of the system; nevertheless, post-treatment is necessary to improve the microbiological quality of the final effluent.
Keywords: dissolved air flotation, municipal sewage, UASB reactor, treatment
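The efficiency figures above follow from the standard removal-efficiency calculation over influent and effluent concentrations; a minimal Python sketch, using the reported whole-system COD values (729 mg/L raw sewage, 87 mg/L final effluent):

```python
def removal_efficiency(influent_mg_l: float, effluent_mg_l: float) -> float:
    """Percent removal of a pollutant across a treatment stage."""
    return (influent_mg_l - effluent_mg_l) / influent_mg_l * 100.0

# Whole-system total COD, figures reported in the abstract:
overall_cod = removal_efficiency(729.0, 87.0)
print(f"Overall COD removal: {overall_cod:.0f}%")  # → Overall COD removal: 88%
```

The same formula applies per stage (e.g., across the UASB reactor alone or the DAF unit alone), which is how the 74%, 77%, and 30% stage-wise figures are obtained.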
Procedia PDF Downloads 331
79 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for monitoring the Earth's surface. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are clearly more informative than single polarimetric systems and are increasingly used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds complexity to data acquisition and shortens the coverage area, or swath, of fully polarimetric images compared with that of dual or hybrid polarimetric images. Solutions that augment dual polarimetric data to full polarimetric data therefore promise full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature.
Although the improvements achieved by the newly investigated reconstruction techniques are undeniable, the existing methods are mostly based on model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem, focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known, standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
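The idea of steering training by combining different terms in the loss function can be sketched abstractly. The following is a hypothetical, framework-free illustration: the channel names, per-term errors, and weights are invented for the example and are not the authors' actual cost function, which operates on real polarimetric imagery inside a CNN training loop.

```python
def mse(pred, target):
    """Mean squared error between two equal-length value lists."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def composite_loss(pred_channels, true_channels, weights):
    """Weighted sum of per-channel error terms: each term emphasizes a
    different property of the reconstructed polarimetric data."""
    return sum(w * mse(pred_channels[name], true_channels[name])
               for name, w in weights.items())

# Toy 2-pixel "images" for three polarimetric channels (invented values):
pred = {"HH": [0.9, 1.1], "HV": [0.20, 0.30], "VV": [1.0, 0.8]}
true = {"HH": [1.0, 1.0], "HV": [0.20, 0.25], "VV": [1.0, 1.0]}
weights = {"HH": 1.0, "HV": 2.0, "VV": 1.0}  # e.g., emphasize cross-pol fidelity
loss = composite_loss(pred, true, weights)
```

Tuning the weights shifts which scattering property the network is pushed to reproduce most faithfully, which is the mechanism the abstract describes for controlling the training process.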
Procedia PDF Downloads 93
78 Analysis of Complex Business Negotiations: Contributions from Agency-Theory
Authors: Jan Van Uden
Abstract:
The paper reviews classical agency-theory and its contributions to the analysis of complex business negotiations, and gives an approach for modifying the basic agency-model in order to examine the negotiation-specific dimensions of agency-problems. By illustrating fundamental potentials for modifying agency-theory in the context of business negotiations, the paper highlights recent empirical research that investigates agent-based negotiations and inter-team constellations. A general theoretical analysis of complex negotiations would be based on a two-level approach: first, the modification of the basic agency-model to reflect the organizational context of business negotiations (i.e., multi-agent issues, common-agencies, multi-period models, and the concept of bounded rationality); second, the application of the modified agency-model to complex business negotiations to identify agency-problems and related areas of risk in the negotiation process. The paper is placed on the first level of analysis, the modification. The method builds on the one hand on insights from behavioral decision research (BRD) and on the other hand on findings from agency-theory as normative directives for modifying the basic model. Through neoclassical assumptions concerning the fundamental aspects of agency-relationships in business negotiations (i.e., asymmetric information, self-interest, risk preferences, and conflicts of interest), agency-theory helps to derive solutions for stated worst-case scenarios taken from daily negotiation routine. As agency-theory is the only universal approach able to identify trade-offs between certain aspects of economic cooperation, the insights obtained provide a deeper understanding of the forces that shape the complexity of business negotiations.
The need for a modification of the basic model is illustrated by highlighting selected issues of business negotiations from an agency-theory perspective. Negotiation teams require a multi-agent approach, given that decision-makers, as superior-agents, are often part of the team. The diversity of competences and decision-making authority is a phenomenon that overrides the assumptions of classical agency-theory and varies greatly across certain forms of business negotiations. Further, the basic model is bound to dyadic relationships preceded by the delegation of decision-making authority and builds on a contractually created (vertical) hierarchy. As a result, horizontal dynamics within the negotiation team, which play an important role for negotiation success, are not considered in the investigation of agency-problems. Also, the trade-off between short-term relationships within the negotiation sphere and the long-term relationships of the corporate sphere calls for a multi-period perspective that takes into account the sphere-specific governance-mechanisms already established (i.e., reward and monitoring systems). Within the analysis, the implementation of bounded rationality is closely related to findings from BRD for assessing the impact of negotiation behavior on underlying principal-agent-relationships. As empirical findings show, the disclosure and reservation of information to the agent affect his negotiation behavior as well as final negotiation outcomes. Last, in the context of business negotiations, asymmetric information is often intended by decision-makers acting as superior-agents or principals, which calls for a bilateral risk-approach to agency-relations.
Keywords: business negotiations, agency-theory, negotiation analysis, interteam negotiations
Procedia PDF Downloads 140
77 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components
Authors: Francesca Gullo, Paola Palmero, Massimo Messori
Abstract:
Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood, two approaches can be used: i) bottom-up and ii) top-down. In the top-down approach, the color of natural wood samples is lightened through a chemical bleaching process that acts on the chromophore groups of lignin, such as benzene rings and quinonoid, vinyl, phenolic, and carbonyl groups. These chromophoric units form complex conjugated systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chlorine-containing chemicals (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a moderate reductant (hydrosulfite) or oxidant (hydrogen peroxide) is commonly used to alter or remove the chromophore groups and systems of lignin, selectively discoloring the lignin while keeping the macrostructure intact. It is therefore essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls, monitoring both the chemical treatments and the process conditions, for instance the treatment time, the concentration of the chemical solutions, the pH value, and the temperature. The elimination of light scattering in wood is the second step in the fabrication of transparent wood materials, which can be achieved through two approaches: i) the polymer impregnation method and ii) the densification method.
In the polymer impregnation method, the wood scaffold is infiltrated under vacuum with polymers of a matching refractive index (e.g., PMMA and epoxy resins) to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects, yielding a finished product with high transmittance (>90%) and excellent light-guiding properties. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or polluting agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure with a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which is merely deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for the infiltration of biobased polymers, while reducing chemical consumption, the toxicity of the reagents used, polluting waste, petroleum by-products, energy, and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Combining process efficiency and scalability, the obtained materials are promising candidates for application in the construction of modern energy-efficient buildings.
Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites
Procedia PDF Downloads 55
76 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network
Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman
Abstract:
We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. 
In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing it to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach, motivating a machine learning approach based on reinforcement learning to relax some of the current limiting assumptions.
Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights
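The sequential posterior update at the core of the approach can be sketched as follows. The targets, prior, and per-detection likelihoods here are hypothetical; in the actual system the likelihoods would be derived from the camera graph, travel times, and live detections.

```python
def bayes_update(prior, likelihood):
    """One Bayesian update: posterior ∝ prior × likelihood, renormalized.
    prior and likelihood are dicts mapping target name -> probability."""
    unnorm = {t: prior[t] * likelihood[t] for t in prior}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

# Three candidate high-value targets, uniform prior (invented example):
posterior = {"airport": 1 / 3, "stadium": 1 / 3, "port": 1 / 3}

# Each camera detection yields P(detection | target) for every candidate:
detections = [
    {"airport": 0.7, "stadium": 0.2, "port": 0.1},
    {"airport": 0.6, "stadium": 0.3, "port": 0.1},
]
for likelihood in detections:
    posterior = bayes_update(posterior, likelihood)
# After both detections, the posterior concentrates on "airport".
```

A soft intervention fits naturally into this scheme: by forcing a route change, it makes the next detection's likelihoods more discriminative between candidate targets, sharpening the posterior faster.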
Procedia PDF Downloads 116
75 Phage Therapy of Staphylococcal Pyoderma in Dogs
Authors: Jiri Nepereny, Vladimir Vrzal
Abstract:
Staphylococcus intermedius/pseudintermedius bacteria are commonly found on the skin of healthy dogs and can cause pruritic skin diseases under certain circumstances (trauma, allergy, immunodeficiency, ectoparasitosis, endocrinological diseases, glucocorticoid therapy, etc.). These can develop into complicated superficial or deep pyoderma, which represents a large group of problematic skin diseases in dogs. These are predominantly inflammations of a secondary nature, associated with the occurrence of coagulase-positive Staphylococcus spp. A major problem is increased itching, which greatly complicates the healing process. The aim of this work is to verify the efficacy of the developed preparation Bacteriophage SI (Staphylococcus intermedius). The tested preparation contains a lysate of bacterial cells of the S. intermedius host culture, including the culture medium, and live virions of the specific phage. Sodium merthiolate is added as a preservative at a safe concentration. Efficacy was validated by monitoring the therapeutic effect after application in indicated cases from clinical practice. The indication for inclusion of a patient in the trial was an adequate history and clinical examination, accompanied by sample collection for bacteriological examination and isolation of the specific causative agent. Isolate identification was performed with the bioMérieux API identification system (API ID 32 STAPH) and rep-PCR typing. The suitability of therapy for a specific case was confirmed by in vitro testing of the ability of the bacteriophage to lyse the specific isolate, i.e., the formation of specific plaques on the isolate cultured on the surface of a solid medium. So far, a total of 32 dogs of different sexes, ages, and breeds, with different symptoms of staphylococcal dermatitis, have been included in the testing.
Their previous therapy had consisted of more or less successful systemic or local application of broad-spectrum antibiotics. The presence of S. intermedius/pseudintermedius was demonstrated in 26 cases. In all cases, the isolates were identified as S. pseudintermedius. Contaminant bacterial microflora was always present in the examined samples. The test product was applied subcutaneously in gradually increasing doses over a period of 1 month. After improvement in health status, maintenance therapy followed, with application of the product once a week for 3 months. Adverse effects associated with administration of the product (swelling at the site of application) occurred in only 2 cases. In all cases, therapy brought a significant reduction in clinical signs (healing of skin lesions and reduction of inflammation) and an improvement in the well-being of the treated animals. A major problem in the treatment of pyoderma is the frequent resistance of the causative agents to antibiotics, especially the increasing frequency of multidrug-resistant and methicillin-resistant S. pseudintermedius (MRSP) strains. A specific phagolysate used for the therapy of these diseases could solve this problem and, to some extent, replace or reduce the use of antibiotics, whose frequent and widespread application often leads to the emergence of resistance. The advantages of the therapeutic use of bacteriophages are their bactericidal effect, high specificity, and safety. This work was supported by Project FV40213 from the Ministry of Industry and Trade, Czech Republic.
Keywords: bacteriophage, pyoderma, staphylococcus spp, therapy
Procedia PDF Downloads 173
74 Effect of Climate Change on Rainfall Induced Failures for Embankment Slopes in Timor-Leste
Authors: Kuo Chieh Chao, Thishani Amarathunga, Sangam Shrestha
Abstract:
Rainfall-induced slope failures are among the most damaging and disastrous natural hazards, occurring frequently around the world. This type of sliding mainly occurs in the zone above the groundwater level in silty/sandy soils. When rainwater begins to infiltrate the vadose zone of the soil, the negative pore-water pressure tends to decrease, reducing the shear strength of the soil material. Climate change has resulted in excessive and unpredictable rainfall all around the world, resulting in landslides with dire consequences for human lives and infrastructure. Such problems could be overcome by examining in detail the causes of such slope failures and recommending effective repair plans for vulnerable locations, with future climatic change taken into account. The area selected for this study is located in the road rehabilitation section from Maubara to Mota Ain road in Timor-Leste. Slope failures and cracks occurred there in 2013 and, after repairs, reoccurred in 2017 following heavy rains. Both observed and projected climate data were analyzed to understand severe precipitation conditions in the past and future. Observed climate data were collected from the NOAA global climate data portal. The CORDEX data portal was used to collect Regional Climate Model (RCM) projections. Both observed and RCM data were extracted to location-based data using ArcGIS software. The linear scaling method was used for bias correction of the projected data, and the bias-corrected climate data were assigned to GeoStudio software.
Precipitation in the wet seasons (December to March) of 2007 to 2013 was higher than in the 2001-2006 period, nearly 40% above the usual monthly average precipitation of 160 mm. The results of seepage analyses carried out with the SEEP/W model and observed climate data clearly demonstrated that the pore water pressure within the fill slope increased significantly due to greater infiltration during the wet season of 2013. One main Regional Climate Model (RCM) was analyzed in order to predict future climate variation under two Representative Concentration Pathways (RCPs). The projections for the 76 years from 2014 show considerably higher precipitation in both the RCP 4.5 and RCP 8.5 emission scenarios. Critical pore water pressure conditions during 2014-2090 were used to recommend appropriate remediation methods. Results of slope stability analyses indicated that the factor of safety of the fill slopes was reduced from 1.226 to 0.793 between the dry season and the wet season of 2013. Results of future slope stability obtained with the SLOPE/W model for the RCP emission scenarios show that tieback anchors and geogrids for slope protection could effectively increase the stability of slopes to an acceptable level during wet seasons. Moreover, measures such as monitoring slopes showing signs of, or susceptible to, movement and installing surface protection could further increase slope stability.
Keywords: climate change, precipitation, SEEP/W, SLOPE/W, unsaturated soil
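The linear scaling bias correction mentioned above can be sketched as follows: for precipitation, raw model output is rescaled by the ratio of observed to modeled means over a calibration period, so that the corrected calibration-period mean matches observations. The monthly values below are invented for illustration; in practice one factor is fitted per calendar month.

```python
def linear_scaling_factor(obs_monthly, model_monthly):
    """Multiplicative correction factor for precipitation:
    ratio of observed to modeled totals over the calibration period."""
    return sum(obs_monthly) / sum(model_monthly)

def correct(model_values, factor):
    """Apply the scaling factor to a raw model precipitation series."""
    return [v * factor for v in model_values]

obs = [160.0, 180.0, 150.0]   # observed monthly precipitation, mm (invented)
raw = [120.0, 140.0, 115.0]   # raw RCM output for the same months (invented)
factor = linear_scaling_factor(obs, raw)
corrected = correct(raw, factor)
# The corrected calibration-period mean now equals the observed mean;
# the same factor is then applied to the model's future projections.
```

For temperature, linear scaling uses an additive offset (observed mean minus modeled mean) instead of a ratio, since temperature can be negative.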
Procedia PDF Downloads 136
73 Influence of Drier Autumn Conditions on Weed Control Based on Soil Active Herbicides
Authors: Juergen Junk, Franz Ronellenfitsch, Michael Eickermann
Abstract:
Appropriate weed management in autumn is a prerequisite for an economically successful harvest in the following year. In Luxembourg, oilseed rape, wheat, and barley are sown from August until October, accompanied by chemical weed control with soil-active herbicides, depending on the state of the weeds and the meteorological conditions. Based on regular ground and surface water analyses, high levels of contamination by transformation products of the respective herbicide compounds have been found in Luxembourg. The ideal conditions for incorporating soil-active herbicides are single rain events. Weed control may be reduced if application is made when weeds are under drought stress, or if repeated light rain events are followed by dry spells, because the herbicides then tend to bind tightly to soil particles. These effects have been reported frequently in Luxembourg in recent years. In the framework of a multisite long-term field experiment (EFFO), weed monitoring, plant observations, and corresponding meteorological measurements were conducted. Long-term time series (1947-2016) from the SYNOP station Findel-Airport (WMO ID 06590) showed a decrease in the number of days with precipitation. As the total precipitation amount has not changed significantly, this indicates a trend towards rain events of higher intensity. All analyses are based on decades (10-day periods) of September and October of each year. To assess future meteorological conditions for Luxembourg, two approaches were applied. First, multi-model ensembles from the CORDEX experiments (spatial resolution ~12.5 km; transient projections until 2100) were analyzed for two Representative Concentration Pathways (RCP8.5 and RCP4.5), covering the time span from 2005 until 2100. The multi-model ensemble approach allows the uncertainties to be quantified and the differences between the two emission scenarios to be assessed.
Second, to assess smaller-scale differences within the country, a high-resolution model projection using the COSMO-CLM model was used (spatial resolution 1.3 km). To account for the higher computational demands of the increased spatial resolution, only 10-year time slices were simulated (reference period 1991-2000, near future 2041-2050, and far future 2091-2100). Statistically significant trends towards higher air temperatures, +1.6 K for September (+5.3 K in the far future) and +1.3 K for October (+4.3 K), were projected for the near future compared to the reference period. Precipitation simultaneously decreased by 9.4 mm (September) and 5.0 mm (October) in the near future, and by 49 mm (September) and 10 mm (October) in the far future. Besides the monthly values, decades were analyzed for the two future periods of the CLM model. For all decades of September and October, the number of days with precipitation decreased in the projected near and far future. Changes in meteorological variables such as air temperature and precipitation have already induced changes in the weed communities (composition, late emergence, etc.) of arable ecosystems in Europe. Therefore, adapted agronomic practices as well as effective weed control strategies must be developed to maintain crop yield.
Keywords: CORDEX projections, dry spells, ensembles, weed management
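The decade-wise (10-day) aggregation underlying the analysis can be sketched as a simple wet-day count per block. The daily series and the 0.1 mm wet-day threshold below are illustrative assumptions, not the study's data.

```python
def wet_days_per_decade(daily_mm, threshold=0.1):
    """Split a daily precipitation series into consecutive 10-day blocks
    ("decades") and count the days at or above the threshold in each."""
    return [sum(1 for d in daily_mm[i:i + 10] if d >= threshold)
            for i in range(0, len(daily_mm), 10)]

# Invented 30-day September series (mm of precipitation per day):
september = [0.0, 2.3, 0.0, 0.0, 5.1, 0.0, 0.0, 0.0, 1.2, 0.0,
             0.0, 0.0, 0.0, 8.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
             0.3, 0.0, 0.0, 0.0, 0.0, 12.7, 0.0, 0.0, 0.0, 0.0]
print(wet_days_per_decade(september))  # → [3, 1, 2]
```

Comparing such per-decade counts between the reference period and the projected time slices is what reveals the trend towards fewer, more intense rain events described above.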
Procedia PDF Downloads 235
72 An Intelligent Search and Retrieval System for Mining Clinical Data Repositories Based on Computational Imaging Markers and Genomic Expression Signatures for Investigative Research and Decision Support
Authors: David J. Foran, Nhan Do, Samuel Ajjarapu, Wenjin Chen, Tahsin Kurc, Joel H. Saltz
Abstract:
The large-scale data and computational requirements of investigators throughout the clinical and research communities demand an informatics infrastructure that supports both existing and new investigative and translational projects in a robust, secure environment. In some subspecialties of medicine and research, the capacity to generate data has outpaced the methods and technology used to aggregate, organize, access, and reliably retrieve this information. Leading health care centers now recognize the utility of establishing an enterprise-wide, clinical data warehouse. The primary benefits that can be realized through such efforts include cost savings, efficient tracking of outcomes, advanced clinical decision support, improved prognostic accuracy, and more reliable clinical trials matching. The overarching objective of the work presented here is the development and implementation of a flexible Intelligent Retrieval and Interrogation System (IRIS) that exploits the combined use of computational imaging, genomics, and data-mining capabilities to facilitate clinical assessments and translational research in oncology. The proposed System includes a multi-modal, Clinical & Research Data Warehouse (CRDW) that is tightly integrated with a suite of computational and machine-learning tools to provide insight into underlying tumor characteristics that are not apparent from human inspection alone. A key distinguishing feature of the System is a configurable Extract, Transform and Load (ETL) interface that enables it to adapt to different clinical and research data environments. This project is motivated by the growing emphasis on establishing Learning Health Systems in which cyclical hypothesis generation and evidence evaluation become integral to improving the quality of patient care.
To facilitate iterative prototyping and optimization of the algorithms and workflows for the System, the team has already implemented a fully functional Warehouse that can reliably aggregate information originating from multiple data sources including EHRs, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology PACS, Digital Pathology archives, Unstructured Clinical Documents, and Next Generation Sequencing services. The System enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information about patient tumors individually or as part of large cohorts to identify patterns that may influence treatment decisions and outcomes. The CRDW core system has facilitated peer-reviewed publications and funded projects, including an NIH-sponsored collaboration to enhance the cancer registries in Georgia, Kentucky, New Jersey, and New York with machine-learning-based classifications and quantitative pathomics feature sets. The CRDW has also resulted in a collaboration with the Massachusetts Veterans Epidemiology Research and Information Center (MAVERIC) at the U.S. Department of Veterans Affairs to develop algorithms and workflows to automate the analysis of lung adenocarcinoma. Those studies showed that combining computational nuclear signatures with traditional WHO criteria through the use of deep convolutional neural networks (CNNs) led to improved discrimination among tumor growth patterns. The team has also leveraged the Warehouse to support studies investigating the potential of utilizing a combination of genomic and computational imaging signatures to characterize prostate cancer. The results of those studies show that integrating image biomarkers with genomic pathway scores is more strongly correlated with disease recurrence than using standard clinical markers.
Keywords: clinical data warehouse, decision support, data-mining, intelligent databases, machine-learning
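As a rough illustration of the configurable ETL interface idea, the sketch below drives field renaming and per-field conversion from a configuration mapping, so the same loader could serve different source systems. All names here (the mapping, the transforms, the tumor-registry fields) are hypothetical and are not IRIS's actual API.

```python
def extract(records):
    # In practice this step would query an EHR, PACS, or registry endpoint.
    return list(records)

def transform(record, mapping, transforms):
    """Rename source fields per `mapping` and apply per-field transform
    functions, so adapting to a new source only means changing the config."""
    out = {}
    for src_field, dst_field in mapping.items():
        value = record.get(src_field)
        fn = transforms.get(dst_field)
        out[dst_field] = fn(value) if fn and value is not None else value
    return out

def load(warehouse, rows):
    warehouse.extend(rows)

# Usage: adapt a hypothetical tumor-registry export to a warehouse schema.
mapping = {"pt_id": "patient_id", "dx": "diagnosis", "dx_date": "diagnosis_date"}
transforms = {"diagnosis": str.upper}
source = [{"pt_id": "P001", "dx": "adenocarcinoma", "dx_date": "2020-05-01"}]

warehouse = []
load(warehouse, [transform(r, mapping, transforms) for r in extract(source)])
print(warehouse[0]["diagnosis"])  # ADENOCARCINOMA
```

The design choice is that the pipeline code stays fixed while the mapping and transform config vary per data environment, which is the essence of a configurable ETL interface.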
Procedia PDF Downloads 129
71 An Interdisciplinary Maturity Model for Accompanying Sustainable Digital Transformation Processes in a Smart Residential Quarter
Authors: Wesley Preßler, Lucie Schmidt
Abstract:
Digital transformation is playing an increasingly important role in the development of smart residential quarters. In order to accompany and steer this process and ultimately make the success of the transformation efforts measurable, it is helpful to use an appropriate maturity model. However, conventional maturity models for digital transformation focus primarily on the evaluation of processes and neglect the information and power imbalances between the stakeholders, which affects the validity of the results. The Multi-Generation Smart Community (mGeSCo) research project is developing an interdisciplinary maturity model that integrates the dimensions of digital literacy, interpretive patterns, and technology acceptance to address this gap. As part of the mGeSCo project, the technological development of selected dimensions in the Smart Quarter Jena-Lobeda (Germany) is being investigated. A specific maturity model, based on Cohen's Smart Cities Wheel, evaluates the central dimensions Working, Living, Housing and Caring. To improve the reliability and relevance of the maturity assessment, the factors Digital Literacy, Interpretive Patterns and Technology Acceptance are integrated into the developed model. The digital literacy dimension examines stakeholders' skills in using digital technologies, which influence their perception and assessment of technological maturity. Digital literacy is measured by means of surveys, interviews, and participant observation, using the European Commission's Digital Competence Framework (DigComp) as a basis. Interpretive patterns of digital technologies provide information about how individuals perceive technologies and ascribe meaning to them. However, these are not mere assessments, prejudices, or stereotyped perceptions but collective patterns, rules, attributions of meaning and the cultural repertoire that leads to these opinions and attitudes.
Understanding these interpretive patterns helps in assessing the overarching readiness of stakeholders to digitally transform their neighborhood. This involves examining people's attitudes, beliefs, and values about technology adoption, as well as their perceptions of the benefits and risks associated with digital tools. These insights provide important data for a holistic view and inform the steps needed to prepare individuals in the neighborhood for a digital transformation. Technology acceptance, another crucial factor for successful digital transformation, concerns the willingness of individuals to adopt and use new technologies. Surveys or questionnaires based on Davis' Technology Acceptance Model can be used to complement interpretive patterns and to measure neighborhood acceptance of digital technologies. Integrating the dimensions of digital literacy, interpretive patterns and technology acceptance enables the development of a roadmap with clear prerequisites for initiating a digital transformation process in the neighborhood. During the process, maturity is measured at different points in time and compared with changes in the aforementioned dimensions to ensure a sustainable transformation. Participation, co-creation, and co-production are essential concepts for a successful and inclusive digital transformation in the neighborhood context. This interdisciplinary maturity model helps to improve the assessment and monitoring of sustainable digital transformation processes in smart residential quarters. It enables a more comprehensive recording of the factors that influence the success of such processes and supports the development of targeted measures to promote digital transformation in the neighborhood context.
Keywords: digital transformation, interdisciplinary, maturity model, neighborhood
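One way to picture how the three dimensions could feed a repeated maturity measurement is a weighted aggregate compared across assessment dates. The weights and the 1-5 scale below are illustrative assumptions, not part of the mGeSCo model.

```python
# Illustrative weights per dimension; mGeSCo publishes no such values.
WEIGHTS = {"digital_literacy": 0.4,
           "interpretive_patterns": 0.3,
           "technology_acceptance": 0.3}

def maturity_score(scores):
    """Weighted aggregate of dimension scores (each assumed on a 1-5 scale)."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# Two hypothetical assessment dates for the same quarter.
t0 = {"digital_literacy": 2.0, "interpretive_patterns": 3.0, "technology_acceptance": 2.5}
t1 = {"digital_literacy": 3.5, "interpretive_patterns": 3.2, "technology_acceptance": 3.0}

# Comparing scores over time is how the model tracks transformation progress.
print(round(maturity_score(t1) - maturity_score(t0), 2))  # 0.81
```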
Procedia PDF Downloads 78
70 Green Building Risks: Limits on Environmental and Health Quality Metrics for Contractors
Authors: Erica Cochran Hameen, Bobuchi Ken-Opurum, Mounica Guturu
Abstract:
The United States (U.S.) populace spends the majority of its time indoors, in spaces where building codes and voluntary sustainability standards provide clear Indoor Environmental Quality (IEQ) metrics. The existing sustainable building standards and codes are aimed at improving IEQ and the health of occupants, and at reducing the negative impacts of buildings on the environment. While they address the post-occupancy stage of buildings, there are fewer standards for the pre-occupancy stage, thereby placing a large labor population in much less regulated environments. Construction personnel are often exposed to a variety of uncomfortable and unhealthy elements while on construction sites, primarily thermal, visual, acoustic, and air quality related. Construction site power generators, equipment, and machinery generate noise on average 9 decibels (dBA) above the U.S. OSHA regulations, creating uncomfortable noise levels. Research has shown that frequent exposure to high noise levels leads to chronic physiological issues and increases noise-induced stress, yet beyond OSHA no other metric focuses directly on the impacts of noise on contractors' well-being. Research has also associated natural light with higher productivity and attention spans, and lower rates of fatigue, in construction workers. However, daylight is not always available, as construction workers often perform tasks in cramped spaces, dark areas, or at night. In these instances the use of artificial light is necessary, yet lighting standards for lengthy tasks and arduous activities are not specified. Additionally, ambient air contaminants and material off-gassing expelled at construction sites are among the causes of serious health effects in construction workers. Coupled with extreme hot and cold temperatures across climate zones, health and productivity can be seriously compromised.
This research evaluates the impact of existing green building metrics on construction and risk management by analyzing two codes and nine standards, including LEED, WELL, and BREEAM. These metrics were chosen based on their relevance to the U.S. construction industry. This research determined that less than 20% of the sustainability content within the standards and codes (texts) relates to the pre-occupancy building sector. The research also investigated the impact of construction personnel's health and well-being on construction management through two surveys capturing project managers' and on-site contractors' perceptions of the effect of their work environment on productivity. To fully understand the risks of limited Environmental and Health Quality (EHQ) metrics for contractors, this research evaluated the connection between EHQ factors, such as inefficient lighting, and construction workers, and investigated the correlation between various site coping strategies for comfort and productivity. Outcomes from this research are three-pronged. The first is fostering a discussion about the existing conditions of EHQ elements, i.e. thermal, lighting, ergonomic, acoustic, and air quality, for the construction labor force. The second identifies gaps in sustainability standards and codes during the pre-occupancy stage of building construction, from ground-breaking to substantial completion. The third identifies opportunities for improvements and mitigation strategies to improve EHQ, such as increased monitoring of effects on the productivity and health of contractors and increased inclusion of the pre-occupancy stage in green building standards.
Keywords: construction contractors, health and well-being, environmental quality, risk management
Procedia PDF Downloads 132
69 Big Data Applications for the Transport Sector
Authors: Antonella Falanga, Armando Cartenì
Abstract:
Today, an unprecedented amount of data coming from several sources, including mobile devices, sensors, tracking systems, and online platforms, characterizes our lives. The term “big data” refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data - velocity, volume, variety, and value - highlight essential aspects, showcasing the rapid generation, vast quantities, diverse sources, and potential value addition of these kinds of data. This surge of information has revolutionized many sectors, such as business for improving decision-making processes, healthcare for clinical record analysis and medical research, education for enhancing teaching methodologies, agriculture for optimizing crop management, finance for risk assessment and fraud detection, media and entertainment for personalized content recommendations, emergency management for real-time response during crises and events, and also mobility for urban planning and for the design and management of public and private transport services. Big data's pervasive impact enhances societal aspects, elevating the quality of life, service efficiency, and problem-solving capacities. However, during this transformative era, new challenges arise, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span the planning, design, and management of systems and mobility services. Among the most common big data applications within the transport sector are real-time traffic monitoring, bus and freight vehicle route optimization, vehicle maintenance, road safety, and autonomous and connected vehicle applications. Benefits include reductions in travel times, road accidents and pollutant emissions.
Within these issues, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand. Achieving transportation decarbonization goals hinges on precise estimations of demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation. This research focuses on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of the mobility demand in Italy. Estimation results reveal, for the post-COVID-19 era, more than 96 million national daily trips, about 2.6 trips per capita, with a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data enhances rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.
Keywords: big data, cloud computing, decision-making, mobility demand, transportation
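The reported figures are internally consistent, which a quick back-of-envelope check confirms: trips per capita is the daily trip total divided by the mobile population.

```python
# Back-of-envelope check of the reported post-COVID-19 estimates for Italy.
daily_trips = 96e6          # > 96 million national daily trips
mobile_population = 37.6e6  # > 37.6 million travelers per day

trips_per_capita = daily_trips / mobile_population
print(round(trips_per_capita, 1))  # 2.6, matching the reported figure
```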
Procedia PDF Downloads 65
68 Challenges, Responses and Governance in the Conservation of Forest and Wildlife: The Case of the Aravali Ranges, Delhi NCR
Authors: Shashi Mehta, Krishan Kumar Yadav
Abstract:
This paper presents an overview of issues pertaining to the conservation of the natural environment and the factors affecting the coexistence of forests, wildlife and people. As forests and wildlife together create the basis for economic, cultural and recreational spaces for overall well-being and life-support systems, the adverse impacts of increasing consumerism are only too evident. The IUCN predicts the extinction of 41% of all amphibians and 26% of mammals. The major causes behind this threatened extinction are deforestation, dysfunctional governance, climate change, pollution and cataclysmic phenomena. Thus the intrinsic relationship between natural resources and wildlife needs to be understood in totality, not only for the ecosystem but for humanity at large. To demonstrate this, forest areas in the Aravalis, the oldest mountain ranges of Asia, falling in the states of Haryana and Rajasthan, have been taken up for study. The Aravalis are characterized by extreme climatic conditions and dry deciduous forest cover on intermittent scattered hills. Extending across the districts of Gurgaon, Faridabad, Mewat, Mahendergarh, Rewari and Bhiwani, these ranges, with village common land on which the entire economy of the rural settlements depends, fall in the state of Haryana. The Aravali ranges near the town of Alwar in the state of Rajasthan, with their diverse fauna and flora, also form part of the NCR. Once rich in biodiversity, the Aravalis played an important role in the sustainable co-existence of forest and people. However, with the advent of industrialization and unregulated urbanization, these ranges are facing deforestation, degradation and denudation. The causes are twofold: the need of the poor and the greed of the rich. People living in and around the Aravalis are mainly poor and eke out a living by rearing livestock. With shrinking commons, they depend entirely upon these hills for grazing, fuel, NTFP, medicinal plants and even drinking water.
But at the same time, the pressure of indiscriminate urbanization and industrialization in these hills fulfils the demands of the rich and powerful, in collusion with Government agencies. The functionaries of the federal and State Governments play a largely negative role, supporting commercial interests. Additionally, the planting of a non-indigenous species like Prosopis juliflora across the ranges has resulted in the extinction of almost all the indigenous species. The wildlife in the area is also threatened because of the lack of safe corridors and suitable habitat. In this scenario, the participatory role of different stakeholders such as NGOs, civil society and the local community in the management of forests becomes crucial, not only for conservation but also for the economic wellbeing of the local people. Exclusion of villagers from protection and conservation efforts, be it designing, implementing, or monitoring and evaluating, could prove counterproductive. A strategy needs to be evolved wherein Government agencies are made responsible through relevant legislation, along with nurturing and promoting the traditional wisdom and ethics of local communities in the protection and conservation of forests and wildlife in the Aravali ranges of the states of Haryana and Rajasthan in the National Capital Region, Delhi.
Keywords: deforestation, ecosystem, governance, urbanization
Procedia PDF Downloads 326
67 Adapting Hazard Analysis and Critical Control Points (HACCP) Principles to Continuing Professional Education
Authors: Yaroslav Pavlov
Abstract:
In the modern world, ensuring quality has become increasingly important in various fields of human activity. One universal approach to quality management, proven effective in the food industry, is the HACCP (Hazard Analysis and Critical Control Points) concept. Based on principles of preventing potential hazards to consumers at all stages of production, from raw materials to the final product, HACCP offers a systematic approach to identifying, assessing risks, and managing critical control points (CCPs). Initially used primarily for food production, it was later effectively adapted to the food service sector. Implementing HACCP provides organizations with a reliable foundation for improving food safety, covering all links in the food chain from producer to consumer, making it an integral part of modern quality management systems. The main principles of HACCP—hazard identification, CCP determination, effective monitoring procedures, corrective actions, regular checks, and documentation—are universal and can be adapted to other areas. The adaptation of the HACCP concept is relevant for continuing professional education (CPE) with certain reservations. Specifically, it is reasonable to abandon the term ‘hazards’ as deviations in CCPs do not pose dangers, unlike in food production. However, the approach through CCP analysis and the use of HACCP's main principles for educational services are promising. This is primarily because it allows for identifying key CCPs based on the value creation model of a specific educational organization and consequently focusing efforts on specific CCPs to manage the quality of educational services. This methodology can be called the Analysis of Critical Points in Educational Services (ACPES). 
ACPES offers a similar approach to managing the quality of educational services, focusing on preventing and eliminating potential risks that could negatively impact the educational process and learners' achievement of set educational goals, and ultimately lead to students rejecting the organization's educational services. ACPES adapts proven HACCP principles to educational services, enhancing quality management effectiveness and student satisfaction. ACPES includes identifying potential problems at all stages of the educational process, from initial interest to graduation and career development. In ACPES, the term "hazards" is replaced with "problematic areas," reflecting the specific nature of the educational environment. Special attention is paid to determining CCPs, the stages where corrective measures can most effectively prevent or minimize the risk of failing educational goals. The ACPES principles align with HACCP's principles, adjusted for the specificities of CPE. The method of the learner's journey map (a variation of the Customer Journey Map, CJM) can be used to overcome the complexity of formalizing the production chain in educational services. CJM provides a comprehensive understanding of the learner's experience at each stage, facilitating targeted and effective quality management. Thus, integrating the learner's journey map into ACPES represents a significant extension of the methodology's capabilities, ensuring a comprehensive understanding of the educational process and forming an effective quality management system focused on meeting learners' needs and expectations.
Keywords: quality management, continuing professional education, customer journey map, HACCP
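A CCP check over a learner journey map can be sketched as a table of stages with monitored metrics, thresholds, and corrective actions. The stages, metrics, and threshold values below are hypothetical illustrations; ACPES itself prescribes none of these concrete values.

```python
# Hypothetical learner journey map: (stage, metric, threshold, corrective action).
JOURNEY = [
    ("enrollment",  "application_completion_rate", 0.70, "simplify intake forms"),
    ("first_month", "session_attendance_rate",     0.80, "tutor outreach call"),
    ("midpoint",    "assignment_pass_rate",        0.60, "remedial workshop"),
    ("graduation",  "completion_rate",             0.75, "flexible scheduling"),
]

def critical_control_points(measurements):
    """Return the stages whose monitored metric falls below its threshold,
    paired with the planned corrective action: the ACPES analogue of a
    HACCP-style CCP check."""
    flagged = []
    for stage, metric, threshold, action in JOURNEY:
        value = measurements.get(metric)
        if value is not None and value < threshold:
            flagged.append((stage, action))
    return flagged

# Monitoring data for one cohort (hypothetical values).
measured = {"application_completion_rate": 0.85,
            "session_attendance_rate": 0.72,
            "assignment_pass_rate": 0.65,
            "completion_rate": 0.78}

print(critical_control_points(measured))  # flags only the first_month stage
```

Running the check at each monitoring interval, as the abstract suggests for maturity measurement over time, focuses corrective effort on the specific CCPs that are drifting.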
Procedia PDF Downloads 37
66 Integrating Animal Nutrition into Veterinary Science: Enhancing Health, Productivity, and Sustainability through Advanced Nutritional Strategies and Collaborative Approaches
Authors: Namiiro Shirat Umar
Abstract:
The science of animals and veterinary medicine is a multidisciplinary field dedicated to understanding, managing, and enhancing the health and welfare of animals. This field encompasses a broad spectrum of disciplines, including animal physiology, genetics, nutrition, behavior, and pathology, as well as preventive and therapeutic veterinary care. Veterinary science focuses on diagnosing, treating, and preventing diseases in animals, ensuring their health and well-being. It involves the study of various animal species, from companion animals and livestock to wildlife and exotic species. Through advanced diagnostic techniques, medical treatments, and surgical procedures, veterinarians address a wide range of health issues, from infectious diseases and injuries to chronic conditions and reproductive health. Animal science complements veterinary medicine by providing a deeper understanding of animal biology and behavior, which is essential for effective health management. It includes research on animal breeding, nutrition, and husbandry practices aimed at improving animal productivity and welfare. Incorporating modern technologies and methodologies, such as genomics, bioinformatics, and precision farming, the science of animals and veterinary medicine continually evolves to address emerging challenges. This integrated approach ensures the development of sustainable practices, enhances animal welfare and contributes to public health by monitoring zoonotic diseases and ensuring the safety of animal products. Animal nutrition is a cornerstone of animal and veterinary science, focusing on the dietary needs of animals to promote health, growth, reproduction, and overall well-being. Proper nutrition ensures that animals receive essential nutrients, including macronutrients (carbohydrates, proteins, fats) and micronutrients (vitamins, minerals), tailored to their specific species, life stages, and physiological conditions. 
By emphasizing a balanced diet, animal nutrition serves as a preventive measure against diseases and enhances recovery from illnesses, reducing the need for pharmaceutical interventions. It addresses key health issues such as metabolic disorders, reproductive inefficiencies, and immune system deficiencies. Moreover, optimized nutrition improves the quality of animal products like meat, milk, and eggs and enhances the sustainability of animal farming by improving feed efficiency and reducing environmental waste. The integration of animal nutrition into veterinary practice necessitates a collaborative approach involving veterinarians, animal nutritionists, and farmers. Advances in nutritional science, such as precision feeding and the use of nutraceuticals, provide innovative solutions to traditional veterinary challenges. Overall, the focus on animal nutrition as a primary aspect of veterinary care leads to more holistic, sustainable, and effective animal health management practices, promoting the welfare and productivity of animals in various settings. This abstract is threefold in nature: it examines how education can place greater emphasis on animal nutrition as a means of improving animal health, an important issue within the discipline of animal and veterinary science. Its three strands are therefore animal nutrition, veterinary science, and animal healthcare.
Keywords: animal nutrition as a way to enhance growth, animal science as a study, veterinary science dealing with the health of animals, animal healthcare dealing with proper sanitation
Procedia PDF Downloads 33
65 Mean Nutrient Intake and Nutrient Adequacy Ratio in India: Occurrence of Hidden Hunger in Indians
Authors: Abha Gupta, Deepak K. Mishra
Abstract:
The focus of food security studies in India has been on the adequacy of calories and its linkage with poverty levels. India, currently undergoing a massive demographic and epidemiological transition, has demonstrated a decline in average physical activity with increased mechanization and urbanization. Food consumption patterns are also changing, with a decreasing intake of coarse cereals and a marginal increase in the consumption of fruits, vegetables and meat products, resulting in a nutrition transition in the country. However, deficiency of essential micronutrients such as vitamins and minerals is rampant, despite their growing importance in combating lifestyle and other modern diseases. Calorie-driven studies can hardly tackle the complex problem of malnutrition. This paper fills this research lacuna and analyses the mean intake of different macro- and micronutrients among different socio-economic groups and the adequacy of these nutrients relative to the recommended dietary allowance. For this purpose, a cross-sectional survey covering 304 households, selected through proportional stratified random sampling, was conducted in six villages of Aligarh district in the state of Uttar Pradesh, India. Data on the quantities consumed of 74 food items, grouped into 10 food categories, with a recall period of seven days, were collected from the households and converted into energy, protein, fat, carbohydrate, calcium, iron, thiamine, riboflavin, niacin and vitamin C using the standard guidelines of the National Institute of Nutrition. These converted nutrients were compared with the recommended norms given by the National Nutrition Monitoring Bureau. Per capita nutrient adequacy was calculated by dividing mean nutrient intake by the household size and then comparing it with the recommended norm. Findings demonstrate that the main sources of both macro- and micronutrients are cereals, followed by milk, edible oil and sugar items. The share of meat in providing essential nutrients is very low due to the vegetarian diet.
Vegetables, pulses, nuts, fruits and dry fruits are poor sources of most of the nutrients. Further analysis evinces that the intake of most nutrients is higher than the recommended norm. Riboflavin is the only vitamin whose intake is less than the standard norm. Poorer groups, labourers, small farmers, Muslims and scheduled castes demonstrate comparatively lower intakes of all nutrients than their counterpart groups, though they still obtain macro- and micronutrients significantly above the norm. One of the major reasons for the higher intake of most nutrients across all socio-economic groups is the consumption of a monotonous diet based on cereals and milk. Most nutrients derive their major share from cereals, particularly wheat, and from milk. It can be concluded from the analysis that although there is adequate intake of most nutrients in the diet of the rural population, their source is mainly cereals and milk products, depicting a monotonous diet. Hence, more efforts are needed to diversify the diet by focusing on the production of other food items, particularly fruits, vegetables and pulse products. Awareness among the population, improved accessibility, and the incorporation of food items other than cereals into government social safety programmes are other measures to improve food security in India.
Keywords: hidden hunger, India, nutrients, recommended norm
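The per capita nutrient adequacy calculation described above (household intake divided by household size, then compared with the recommended norm) can be sketched as follows. The RDA values and the household intake figures are placeholders for illustration, not the National Nutrition Monitoring Bureau norms or the survey's data.

```python
# Assumed recommended daily allowances (placeholders, not the NNMB norms).
RDA = {"energy_kcal": 2320, "protein_g": 60, "iron_mg": 17}

def nutrient_adequacy_ratio(household_intake, household_size):
    """Per capita intake (household intake / household size) divided by the
    recommended norm; a ratio >= 1 indicates adequacy for that nutrient."""
    return {nutrient: (household_intake[nutrient] / household_size) / RDA[nutrient]
            for nutrient in household_intake}

# Hypothetical 5-member household's mean daily intake.
intake = {"energy_kcal": 11600, "protein_g": 275, "iron_mg": 60}
nar = nutrient_adequacy_ratio(intake, 5)
print({nutrient: round(v, 2) for nutrient, v in nar.items()})
```

On these placeholder numbers, energy is exactly adequate (ratio 1.0) while protein and iron fall short, the kind of hidden-hunger pattern the paper describes where calories suffice but specific nutrients lag.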
Procedia PDF Downloads 317
64 Mapping Context, Roles, and Relations for Adjudicating Robot Ethics
Authors: Adam J. Bowen
Abstract:
Should robots have rights or legal protections? Often, debates concerning whether robots and AI should be afforded rights focus on conditions of personhood and the possibility of future advanced forms of AI satisfying particular intrinsic cognitive and moral attributes of rights-holding persons. Such discussions raise compelling questions about machine consciousness, autonomy, and value alignment with human interests. Although these are important theoretical concerns, especially from a future design perspective, they provide limited guidance for addressing the moral and legal standing of current and near-term AI that operate well below the cognitive and moral agency of human persons. Robots and AI are already being pressed into service in a wide range of roles, especially in healthcare and biomedical contexts. The design and large-scale implementation of robots in the context of core societal institutions like healthcare systems continue to develop rapidly. For example, we bring them into our homes, hospitals, and other care facilities to assist in care for the sick, disabled, elderly, children, or otherwise vulnerable persons. We enlist surgical robotic systems in precision tasks, albeit still as human-in-the-loop technology controlled by surgeons. We also entrust them with social roles involving companionship and even assisting in intimate caregiving tasks (e.g., bathing, feeding, turning, medicine administration, monitoring, transporting). There have been advances enabling severely disabled persons to use robots to feed themselves or pilot robot avatars to work in service industries. As the applications for near-term AI increase and the roles of robots in restructuring our biomedical practices expand, we face pressing questions about the normative implications of human-robot interactions and collaborations in our collective worldmaking, as well as the moral and legal status of robots.
This paper argues that robots operating in public and private spaces should be afforded some protections, as either moral patients or legal agents, to establish prohibitions on robot abuse, misuse, and mistreatment. We already implement robots and embed them in our practices and institutions, which generates a host of human-to-machine and machine-to-machine relationships. As we interact with machines, whether in service contexts, medical assistance, or home health companionship, these robots are first encountered in relationship to us and our respective roles in the encounter (e.g., surgeon, physical or occupational therapist, recipient of care, patient's family, healthcare professional, stakeholder). This proposal aims to outline a framework for establishing limiting factors and determining the extent of moral or legal protections for robots. In doing so, it advocates for a relational approach that emphasizes the priority of mapping the complex, contextually sensitive roles played and the relations in which humans and robots stand, to guide policy determinations by relevant institutions and authorities. The relational approach must also be technically informed by the intended uses of the biomedical technologies in question, Design History Files, extensive risk assessments and hazard analyses, as well as use-case social impact assessments.
Keywords: biomedical robots, robot ethics, robot laws, human-robot interaction
Procedia PDF Downloads 122
63 Education Management and Planning with Manual Based
Authors: Purna Bahadur Lamichhane
Abstract:
Education planning and management are foundational pillars for developing effective educational systems. However, in many educational contexts, especially in developing nations, technology-enabled management is still emerging. In such settings, manual-based systems, where instructions and guidelines are physically documented, remain central to educational planning and management. This paper examines the effectiveness, challenges, and potential of manual-based education planning systems in fostering structured, reliable, and adaptable management frameworks. The objective of this study is to explore how a manual-based approach can successfully guide administrators, educators, and policymakers in delivering high-quality education. By using structured, accessible instructions, this approach serves as a blueprint for educational governance, offering clear, actionable steps to achieve institutional goals. Through an analysis of case studies from various regions, the paper identifies key strategies for planning school schedules, managing resources, and monitoring academic and administrative performance without relying on automated systems. The findings underscore the significance of organized documentation, standard operating procedures, and comprehensive manuals that establish uniformity and maintain educational standards across institutions. With a manual-based approach, management can remain flexible, responsive, and user-friendly, especially in environments where internet access and digital literacy are limited. Moreover, it allows for localization, where instructions can be tailored to the unique cultural and socio-economic contexts of the community, thereby increasing relevancy and ownership among local stakeholders. This paper also highlights several challenges associated with manual-based education management. 
Manual systems often require significant time and human resources for maintenance and updating, potentially leading to inefficiencies and inconsistencies over time. Furthermore, manual records can be susceptible to loss, damage, and limited accessibility, which may affect decision-making and institutional memory. There is also the risk of siloed information, where crucial data resides with specific individuals rather than being accessible across the organization. However, with proper training and regular oversight, many of these limitations can be mitigated. The study further explores the potential for hybrid approaches, combining manual planning with selected digital tools for record-keeping, reporting, and analytics. This transitional strategy can enable schools and educational institutions to gradually embrace digital solutions without discarding the familiarity and reliability of manual instructions. In conclusion, this paper advocates for a balanced, context-sensitive approach to education planning and management. While digital systems hold the potential to streamline processes, manual-based systems offer resilience, inclusivity, and adaptability for institutions where technology adoption may be constrained. Ultimately, by reinforcing the importance of structured, detailed manuals and instructional guides, educational institutions can build robust management frameworks that facilitate both short-term successes and long-term growth in their educational mission. This research aims to provide a reference for policymakers, educators, and administrators seeking practical, low-cost, and adaptable solutions for sustainable educational planning and management.
Keywords: education, planning, management, manual
Procedia PDF Downloads 17
62 Effectiveness of Gamified Simulators in the Health Sector
Authors: Nuno Biga
Abstract:
The integration of serious games with gamification in management education and training has gained significant importance in recent years as innovative strategies are sought to improve target audience engagement and learning outcomes. This research builds on the author's previous work in this field and presents a case study that evaluates the ex-post impact of a sample of applications of the BIGAMES management simulator in the training of top managers from various hospital institutions. The methodology includes evaluating the reaction of participants after each edition of BIGAMES Accident & Emergency (A&E) carried out over the last 3 years, as well as monitoring the career path of a significant sample of participants and their feedback more than a year after their experience with this simulator. Control groups will be set up, according to the type of role their members held when they took part in the BIGAMES A&E simulator: Administrators, Clinical Directors and Nursing Directors. Former participants are invited to answer a questionnaire structured for this purpose, where they are asked, among other questions, about the importance and impact that the BIGAMES A&E simulator has had on their professional activity. The research methodology also includes an exhaustive literature review, focusing on empirical studies in the field of education and training in management and business that investigate the effectiveness of gamification and serious games in improving learning, team collaboration, critical thinking, problem-solving skills and overall performance, with a focus on training contexts in the health sector. 
The results of the research carried out show that gamification and serious games that simulate real scenarios, such as Business Interactive Games - BIGAMES©, can significantly increase the motivation and commitment of participants, stimulating the development of transversal skills, the mobilization of group synergies and the acquisition and retention of knowledge through interactive user-centred scenarios. Individuals who participate in game-based learning series show a higher level of commitment to learning because they find these teaching methods more enjoyable and interactive. This research study aims to demonstrate that, as executive education and training programs develop to meet the current needs of managers, gamification and serious games stand out as effective means of bridging the gap between traditional teaching methods and modern educational and training requirements. To this end, this research evaluates the medium/long-term effects of gamified learning on the professional performance of participants in the BIGAMES simulator applied to healthcare. Based on the conclusions of the evaluation of the effectiveness of training using gamification and taking into account the results of the opinion poll of former A&E participants, this research study proposes an integrated approach for the transversal application of the A&E Serious Game in various educational contexts, covering top management (traditionally the target audience of BIGAMES A&E), middle and operational management in healthcare institutions (functional area heads and professionals with career development potential), as well as higher education in medicine and nursing courses. The integrated solution called “BIGAMES A&E plus”, developed as part of this research, includes the digitalization of key processes and the incorporation of AI.
Keywords: artificial intelligence (AI), executive training, gamification, higher education, management simulators, serious games (SG), training effectiveness
Procedia PDF Downloads 15
61 Structural Monitoring of Externally Confined RC Columns with Inadequate Lap-Splices, Using Fibre-Bragg-Grating Sensors
Authors: Petros M. Chronopoulos, Evangelos Z. Astreinidis
Abstract:
A major issue in the structural assessment and rehabilitation of existing RC structures is inadequate lap-splicing of the longitudinal reinforcement. Although prohibited by modern Design Codes, the practice of arranging lap-splices inside the critical regions of RC elements was commonly applied in the past. Today this practice is still the rule, at least for conventional new buildings. Therefore, a lot of relevant research is ongoing in many earthquake-prone countries. The rehabilitation of deficient lap-splices of RC elements by means of external confinement is widely accepted as the most efficient technique. If correctly applied, this versatile technique offers a limited increase of flexural capacity and a considerable increase of local ductility and of axial and shear capacities. Moreover, this intervention does not affect the stiffness of the elements or the dynamic characteristics of the structure. This technique has been extensively discussed and researched, contributing to a vast accumulation of technical and scientific knowledge that has been reported in relevant books, reports, and papers, and included in recent Design Codes and Guides. These references mostly deal with modeling and redesign, covering both the enhanced (axial and) shear capacity (due to the additional external closed hoops or jackets) and the increased ductility (due to the confining action, preventing the unzipping of lap-splices and the buckling of continuous reinforcement). An analytical and experimental program devoted to RC members with lap-splices has been completed in the Lab. of RC/NTU of Athens/GR. This program aims at proposing a rational and safe theoretical model and calibrating the relevant Design Codes’ provisions. Tests on forty-two (42) full-scale specimens, covering mostly beams and columns (not walls), strengthened or not, with adequate or inadequate lap-splices, have already been performed and evaluated. 
In this paper, the results of twelve (12) specimens under fully reversed cyclic actions are presented and discussed. In eight (8) specimens the lap-splices were inadequate (splicing length of 20 or 30 bar diameters), and they were retrofitted before testing by means of additional external confinement. The two (2) most commonly applied confining materials were used in this study, namely steel and FRPs. More specifically, jackets made of CFRP wraps or light cages made of mild steel were applied. The main parameters of these tests were (i) the degree of confinement (internal and external), and (ii) the length of lap-splices, equal to 20, 30 or 45 bar diameters. These tests were thoroughly instrumented and monitored by means of conventional (LVDTs, strain gages, etc.) and innovative (optic fibre-Bragg-grating) sensors. This allowed for a thorough investigation of the most influential design parameter, namely the hoop stress developed in the confining material. Based on these test results and on comparisons with the provisions of modern Design Codes, it could be argued that shorter (than the normative) lap-splices, commonly found in old structures, could still be effective and safe (at least for lengths above an absolute minimum), depending on the required ductility, if a properly arranged and adequately detailed external confinement is applied.
Keywords: concrete, fibre-Bragg-grating sensors, lap-splices, retrofitting / rehabilitation
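The hoop stress that the fibre-Bragg-grating sensors monitor can be recovered from the measured Bragg wavelength shift. A minimal Python sketch of that conversion, assuming a typical photo-elastic coefficient for silica fibre and a linear-elastic confining material (the numbers are illustrative, not the study's calibration):

```python
# Convert an FBG wavelength shift into hoop strain and hoop stress.
# Assumptions (illustrative, not from the paper): photo-elastic coefficient
# p_e ~= 0.22 for silica fibre; uniaxial linear elasticity for the jacket.

P_E = 0.22  # typical photo-elastic coefficient of silica fibre

def fbg_strain(lambda_0_nm: float, delta_lambda_nm: float) -> float:
    """Strain sensed by the grating: delta_lambda / lambda_0 = (1 - p_e) * strain."""
    return delta_lambda_nm / (lambda_0_nm * (1.0 - P_E))

def hoop_stress_mpa(strain: float, e_modulus_gpa: float) -> float:
    """Hoop stress in the confining material, assuming linear elasticity."""
    return strain * e_modulus_gpa * 1e3  # GPa -> MPa

# Example: 1550 nm grating, 1.2 nm shift, CFRP modulus ~230 GPa (all assumed)
eps = fbg_strain(1550.0, 1.2)
sigma = hoop_stress_mpa(eps, 230.0)
```

In practice the gauge factor comes from the supplier's calibration sheet for the specific fibre, and temperature compensation is applied before the strain conversion.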
Procedia PDF Downloads 250
60 Anesthesia for Spinal Stabilization Using Neuromuscular Blocking Agents in Dog: Case Report
Authors: Agata Migdalska, Joanna Berczynska, Ewa Bieniek, Jacek Sterna
Abstract:
Muscle relaxation is considered important during general anesthesia for spine stabilization. In the presented case, a peripherally acting muscle relaxant was applied during general anesthesia for spine stabilization surgery. The patient was an 11-year-old, 26 kg, male, mixed-breed dog. The spine fracture was situated at Th13-L1-L2, probably due to a car accident. Preanesthetic physical examination revealed no signs of underlying health issues. The dog was premedicated with midazolam 0.2 mg IM and butorphanol 2.4 mg IM. General anesthesia was induced with propofol IV. After induction, the dog was intubated with an endotracheal tube, connected to an open-ended rebreathing system, and maintained on inhalation anesthesia with isoflurane in oxygen. Rocuronium 0.5 mg/kg was given IV. Use of the muscle relaxant was accompanied by assessment of the degree of neuromuscular blockade with a peripheral nerve stimulator. Electrodes were attached to the skin overlying the peroneal nerve at the lateral cranial tibia. Four electrical pulses were applied to the nerve over a 2-second period. When a satisfactory nerve block was detected, the dog was prepared for surgery. No further monitoring of the effectiveness of the blockade was performed during surgery. Mechanical ventilation was maintained throughout anesthesia. During surgery the dog remained stable, and no anesthesiological complications occurred. Intraoperatively, the surgeon reported that the neuromuscular blockade resulted in a better approach to the spine and easier muscle manipulation, which was helpful for visualizing the fracture and replacing bone fragments. Finally, euthanasia was performed intraoperatively as a result of an extensive myelomalacia process of the spinal cord. This prevented examination of the recovery process. Neuromuscular blocking agents act at the neuromuscular junction to provide profound muscle relaxation throughout the body. 
Muscle blocking agents are neither anesthetic nor analgesic; used inappropriately, they may therefore cause paralysis in a fully conscious patient who can still feel pain. They cause paralysis of all skeletal muscles, including the diaphragm and intercostal muscles when given in higher doses. Intraoperative management includes maintaining stable physiological conditions, which involves adjusting hemodynamic parameters, ensuring proper ventilation, avoiding variations in temperature, and maintaining normal blood flow to promote proper oxygen exchange. Neuromuscular blocking agents can cause many side effects, such as residual paralysis, anaphylactic or anaphylactoid reactions, delayed recovery from anesthesia, histamine release, and recurarization. Therefore, a reversal drug such as neostigmine (with glycopyrrolate) or edrophonium (with atropine) should be used in a life-threatening situation. Another useful drug is sugammadex, although its cost strongly limits its use. Muscle relaxants improve surgical conditions during spinal surgery, especially in heavily muscled individuals. They are also used to facilitate the replacement of dislocated joints, as they improve conditions during fracture reduction. It is important to emphasize that in a patient with muscle weakness, neuromuscular blocking agents may result in intraoperative and early postoperative cardiovascular and respiratory complications, as well as prolonged recovery from anesthesia. This should not occur in patients with a recent spine fracture or luxation. It is therefore believed that neuromuscular blockers can be useful during spine stabilization procedures.
Keywords: anesthesia, dog, neuromuscular block, spine surgery
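The four-pulse stimulation described in the case is the standard train-of-four (TOF) assessment; its readouts are the count of detectable twitches and the T4/T1 ratio. A small sketch of that logic in Python (twitch amplitudes and thresholds here are illustrative conventions, not values recorded in this case):

```python
# Classify depth of neuromuscular blockade from train-of-four (TOF) twitch
# amplitudes. Readouts: number of detectable twitches, and the T4/T1 ratio.
# Amplitudes and thresholds below are invented for illustration.

def tof_count(twitches, detect_threshold=0.05):
    """Number of the four twitches exceeding the detection threshold."""
    return sum(1 for t in twitches if t > detect_threshold)

def tof_ratio(twitches):
    """T4/T1 ratio; meaningful only when the first twitch is detectable."""
    t1, t4 = twitches[0], twitches[3]
    return t4 / t1 if t1 > 0 else 0.0

def blockade_ok_for_surgery(twitches):
    """A TOF count <= 2 is commonly taken as adequate surgical relaxation."""
    return tof_count(twitches) <= 2

deep = [0.4, 0.1, 0.0, 0.0]          # only two twitches detectable -> deep block
recovered = [1.0, 0.98, 0.95, 0.92]  # TOF ratio > 0.9 -> adequate recovery
```

A TOF ratio above roughly 0.9 is the usual criterion for adequate recovery before extubation, which is why the abstract stresses continued monitoring and reversal agents.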
Procedia PDF Downloads 181
59 Advancements in Arthroscopic Surgery Techniques for Anterior Cruciate Ligament (ACL) Reconstruction
Authors: Islam Sherif, Ahmed Ashour, Ahmed Hassan, Hatem Osman
Abstract:
Anterior Cruciate Ligament (ACL) injuries are common among athletes and individuals participating in sports with sudden stops, pivots, and changes in direction. Arthroscopic surgery is the gold standard for ACL reconstruction, aiming to restore knee stability and function. Recent years have witnessed significant advancements in arthroscopic surgery techniques, graft materials, and technological innovations, revolutionizing the field of ACL reconstruction. This presentation delves into the latest advancements in arthroscopic surgery techniques for ACL reconstruction and their potential impact on patient outcomes. Traditionally, autografts from the patellar tendon, hamstring tendon, or quadriceps tendon have been commonly used for ACL reconstruction. However, recent studies have explored the use of allografts, synthetic scaffolds, and tissue-engineered grafts as viable alternatives. This abstract evaluates the benefits and potential drawbacks of each graft type, considering factors such as graft incorporation, strength, and risk of graft failure. Moreover, the application of augmented reality (AR) and virtual reality (VR) technologies in surgical planning and intraoperative navigation has gained traction. AR and VR platforms provide surgeons with detailed 3D anatomical reconstructions of the knee joint, enhancing preoperative visualization and aiding in graft tunnel placement during surgery. We discuss the integration of AR and VR in arthroscopic ACL reconstruction procedures, evaluating their accuracy, cost-effectiveness, and overall impact on surgical outcomes. Beyond graft selection and surgical navigation, patient-specific planning has gained attention in recent research. Advanced imaging techniques, such as MRI-based personalized planning, enable surgeons to tailor ACL reconstruction procedures to each patient's unique anatomy. 
By accounting for individual variations in the femoral and tibial insertion sites, this personalized approach aims to optimize graft placement and potentially improve postoperative knee kinematics and stability. Furthermore, rehabilitation and postoperative care play a crucial role in the success of ACL reconstruction. This abstract explores novel rehabilitation protocols, emphasizing early mobilization, neuromuscular training, and accelerated recovery strategies. Integrating technology, such as wearable sensors and mobile applications, into postoperative care can facilitate remote monitoring and timely intervention, contributing to enhanced rehabilitation outcomes. In conclusion, this presentation provides an overview of the cutting-edge advancements in arthroscopic surgery techniques for ACL reconstruction. By embracing innovative graft materials, augmented reality, patient-specific planning, and technology-driven rehabilitation, orthopedic surgeons and sports medicine specialists can achieve superior outcomes in ACL injury management. These developments hold great promise for improving the functional outcomes and long-term success rates of ACL reconstruction, benefitting athletes and patients alike.
Keywords: arthroscopic surgery, ACL, autograft, allograft, graft materials, ACL reconstruction, synthetic scaffolds, tissue-engineered graft, virtual reality, augmented reality, surgical planning, intra-operative navigation
Procedia PDF Downloads 92
58 Big Data Applications for Transportation Planning
Authors: Antonella Falanga, Armando Cartenì
Abstract:
"Big data" refers to extremely large and complex datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, and efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represents a transformative force reshaping industries worldwide. Its pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data has an impact across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment, as well as mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications span optimization of vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of overall transportation systems, and also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. 
Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges: data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies that balance the benefits of big data against privacy, security, and data-management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data can substantially enhance rational decision-making for mobility choices and are imperative for adeptly planning and allocating investments in transportation infrastructures and services.
Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning
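One of the applications listed above, real-time congestion monitoring, reduces to aggregating streams of vehicle speed records by road segment. A toy sketch in Python (segment IDs, speeds, and the 30 km/h threshold are invented for illustration, not drawn from the study):

```python
# Flag congested road segments from GPS speed records: group records by
# segment, then flag segments that are both well-sampled and slow on average.
# All identifiers and thresholds here are illustrative assumptions.
from collections import defaultdict
from statistics import mean

def congested_segments(records, speed_threshold_kmh=30.0, min_samples=3):
    """Return sorted segment IDs whose mean speed falls below the threshold."""
    by_segment = defaultdict(list)
    for segment_id, speed_kmh in records:
        by_segment[segment_id].append(speed_kmh)
    return sorted(
        seg for seg, speeds in by_segment.items()
        if len(speeds) >= min_samples and mean(speeds) < speed_threshold_kmh
    )

records = [("A1", 12.0), ("A1", 18.0), ("A1", 25.0),
           ("B7", 55.0), ("B7", 60.0), ("B7", 48.0),
           ("C3", 10.0)]  # C3 has too few samples to judge
```

At production scale the same group-and-aggregate pattern runs on streaming frameworks over millions of records per minute; the minimum-sample guard is what keeps sparse data from producing false alarms.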
Procedia PDF Downloads 61
57 Physiological Effects during Aerobatic Flights on Science Astronaut Candidates
Authors: Pedro Llanos, Diego García
Abstract:
Spaceflight is considered the last frontier in terms of science, technology, and engineering. But it is also the next frontier in terms of human physiology and performance. Having evolved for more than 200,000 years under Earth’s gravity and atmospheric conditions, humans face environmental stresses in spaceflight for which their physiology is not adapted. Hypoxia, accelerations, and radiation are among such stressors; our research involves suborbital flights aiming to develop effective countermeasures in order to assure a sustainable human presence in space. The physiologic baseline of spaceflight participants is subject to great variability driven by age, gender, fitness, and metabolic reserve. The objective of the present study is to characterize different physiologic variables in a population of STEM practitioners during aerobatic flight. Cardiovascular and pulmonary responses were determined in Science Astronaut Candidates (SACs) during unusual-attitude aerobatic flight indoctrination. Physiologic data recordings from 20 subjects participating in high-G flight training were analyzed. These recordings were registered by a wearable sensor vest that monitored electrocardiographic tracings (ECGs) and signs of dysrhythmias or other electric disturbances throughout the flight. The same cardiovascular parameters were also collected approximately 10 min pre-flight, during each high-G/unusual-attitude maneuver, and 10 min after the flights. The ratio (pre-flight/in-flight/post-flight) of the cardiovascular responses was calculated for comparison of inter-individual differences. The resulting tracings depicting the cardiovascular responses of the subjects were compared against the G-loads (Gs) during the aerobatic flights to analyze cardiovascular variability and fluid/pressure shifts due to the high Gs. 
In-flight ECG revealed cardiac variability patterns associated with rapid G onset, in terms of reduced heart rate (HR) and some scattered dysrhythmic patterns (15% premature ventricular contraction-type) that were considered either triggered physiological responses to high-G/unusual-attitude training or instrument artifacts. Variation events were observed in subjects during the +Gz and –Gz maneuvers, and these may be due to sudden shifts in preload and afterload. Our data reveal that aerobatic flight influenced the breathing rate of the subjects, driven in part by the varying levels of energy expenditure from increased muscle work during these aerobatic maneuvers. Noteworthy was the high heterogeneity of physiological responses among a relatively small group of SACs exposed to similar aerobatic flights with similar G exposures. The cardiovascular responses clearly demonstrated that SACs were subjected to significant flight stress. Routine ECG monitoring during high-G/unusual-attitude flight training is recommended to capture pathology underlying dangerous dysrhythmias for suborbital flight safety. More research is currently being conducted to further facilitate the development of robust medical screening, medical risk assessment approaches, and suborbital flight training in the context of the evolving commercial human suborbital spaceflight industry. A more mature and integrative medical assessment method is required to understand the physiological state and response variability among highly diverse populations of prospective suborbital flight participants.
Keywords: g force, aerobatic maneuvers, suborbital flight, hypoxia, commercial astronauts
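The heart-rate and phase-ratio quantities used in the analysis above follow directly from the R-R intervals in the ECG tracing. A minimal sketch (the interval values are invented for illustration; the study's own recordings are not reproduced here):

```python
# Heart rate from ECG R-R intervals, and the pre-/in-/post-flight ratio used
# to compare subjects against their own baseline. R-R values are illustrative.

def heart_rate_bpm(rr_intervals_s):
    """Mean heart rate (beats/min) from R-R intervals in seconds."""
    mean_rr = sum(rr_intervals_s) / len(rr_intervals_s)
    return 60.0 / mean_rr

def phase_ratios(pre_bpm, inflight_bpm, post_bpm):
    """Normalize each phase by the pre-flight baseline."""
    return (1.0, inflight_bpm / pre_bpm, post_bpm / pre_bpm)

pre = heart_rate_bpm([0.80, 0.82, 0.78])       # ~75 bpm baseline (assumed)
inflight = heart_rate_bpm([0.60, 0.58, 0.62])  # shorter R-R under G-load
```

Normalizing in-flight and post-flight values by each subject's own baseline is what makes the inter-individual comparison meaningful despite the heterogeneous baselines the abstract notes.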
Procedia PDF Downloads 130
56 An Analysis of Economical Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment
Authors: Rouzbeh Jafari, Joe Nava
Abstract:
This study includes learnings from engineering practice normally performed on large-scale biohydrogen processes. If scale-up is done properly, biohydrogen can be a reliable pathway for biowaste valorization. Most studies on biohydrogen process development have used model feedstock to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies with model feedstock; rather, it reports economic drivers and technical challenges that help in developing a road map for expanding biohydrogen deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, work has been performed to identify, register, and mitigate the technical drawbacks of large-scale hydrogen production. Those learnings have been applied in this study to the biohydrogen process. Based on data collected through a comprehensive literature review, a base case was considered as a reference and several case studies were performed. Critical parameters of the process were identified, and through common engineering practice (process design, simulation, cost estimation, and life cycle assessment), the impact of these parameters on the commercialization risk matrix and Class 5 cost estimates was reported. The process considered in this study is dark fermentation of food waste and woody biomass. To propose a reliable road map for developing a sustainable biohydrogen production process, the impact of critical parameters was studied on the end-to-end process. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) the multi-product concept. A few emerging technologies were also assessed, such as photo-fermentation, integrated dark fermentation, and the use of ultrasound and microwaves to break down the feedstock's complex matrix and increase overall hydrogen yield. 
To properly report the impact of each parameter, KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) $/kg-H2, and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favors feedstock with higher carbohydrates. The feedstock composition was varied by increasing one critical element (such as carbohydrate) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base-case process was applied to obtain reference KPI values, and modifications such as pretreatment and feedstock mix-and-match were implemented to investigate KPI changes. The complexity of the feedstock is the main bottleneck in the successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorization. Hydrogen yield, reaction kinetics, and the performance of key unit operations are highly impacted as feedstock composition fluctuates during the lifetime of the process or from one case to another. In this context, the multi-product concept becomes more reliable: the process is not designed to produce only one target product, such as biohydrogen, but will have two or more products (biohydrogen and biomethane or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.
Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy
Procedia PDF Downloads 110
55 An Integrated Water Resources Management Approach to Evaluate Effects of Transportation Projects in Urbanized Territories
Authors: Berna Çalışkan
Abstract:
Integrated water management is a collaborative approach to planning that brings together the institutions that influence all elements of the water cycle: waterways, watershed characteristics, wetlands, ponds, lakes, floodplain areas, and stream channel structure. It encourages collaboration where it will be beneficial and links water planning with other planning processes that contribute to improving sustainable urban development and liveability. Hydraulic considerations can influence the selection of a highway corridor and the alternate routes within the corridor, as can smaller works such as widening a roadway, replacing a culvert, or repairing a bridge. Because of this, the type and amount of data needed for planning studies can vary widely depending on such elements as environmental considerations, class of the proposed highway, state of land use development, and individual site conditions. The extraction of drainage networks provides helpful preliminary drainage data from the digital elevation model (DEM). A case study was carried out using the Arc Hydro extension within ArcGIS in the study area. It provides the means for processing and presenting a spatially referenced stream model. The study area's flow routing, stream levels, segmentation, and drainage point processing can be obtained using the DEM as the 'Input surface raster'. These processes integrate the fields of hydrologic and engineering research and environmental modeling in a multi-disciplinary program designed to provide decision makers with a science-based understanding of, and innovative tools for, the development of an interdisciplinary and multi-level approach. 
This research helps manage transport project planning and construction phases by analyzing surficial water flow, high-level streams, and wetland sites for transportation infrastructure planning, implementation, maintenance, monitoring, and long-term evaluation, to better face the challenges associated with effective management of low, medium, and high levels of impact. Transport projects are frequently perceived as critical to the ‘success’ of major urban, metropolitan, regional and/or national development because of their potential to effect significant socio-economic and territorial change. In this context, sustaining and developing economic and social activities depend on sufficient water resources management. The results of our research provide a workflow to build a stream network and classify a suitability map according to stream levels. Transportation projects can be established, developed, and delivered more effectively by selecting the best locations to reduce construction and maintenance costs and by adopting cost-effective solutions for drainage, landslide, and flood control. According to the model findings, field study should be done to fill gaps and check for errors. In future research, this study can be extended to determining and preventing possible damage to Sensitive Areas and Vulnerable Zones, supported by field investigations.
Keywords: water resources management, hydro tool, water protection, transportation
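The flow-routing step that Arc Hydro performs on the DEM rests on the D8 algorithm: each cell drains to its steepest-descent neighbour. A toy Python sketch of that core step (the 3x3 grid is a stand-in; Arc Hydro runs this over full raster DEMs):

```python
# Minimal D8 flow-direction computation, the core of DEM-based stream
# extraction. Each cell points to its steepest downhill neighbour; pits and
# flats (no lower neighbour) get None. Diagonal drops are distance-weighted.

def d8_flow_direction(dem):
    """Map (row, col) -> (dr, dc) toward the steepest downhill neighbour."""
    rows, cols = len(dem), len(dem[0])
    directions = {}
    for r in range(rows):
        for c in range(cols):
            best, best_drop = None, 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        dist = (dr * dr + dc * dc) ** 0.5
                        drop = (dem[r][c] - dem[nr][nc]) / dist
                        if drop > best_drop:
                            best, best_drop = (dr, dc), drop
            directions[(r, c)] = best
    return directions

dem = [[9.0, 8.0, 7.0],
       [8.0, 6.0, 5.0],
       [7.0, 5.0, 3.0]]  # toy DEM: water should drain toward the low corner
flow = d8_flow_direction(dem)
```

Flow accumulation, stream ordering, and drainage-point delineation are then built on top of this direction grid, which is how the suitability classification by stream level is obtained.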
Procedia PDF Downloads 58
54 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection
Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément
Abstract:
The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and monitoring of their variation fall mainly into two types. One is based on identifying molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including with aptamer sensors, arrayed EC sensors for single-cell detection have not been demonstrated. In this work, a new technique is presented, based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance from redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false positive signal arising from the downward pressure that all (including non-target) cells exert on the aptamers, but also to stabilize the aptamer in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, an LOD of 13 cells (with 5.4 μL of cell suspension) was estimated.
Subsequently, the nanosupported cell technology using redox-labeled aptasensors was pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling to millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to the challenge of implementing them at large scale. Here, the introduced nanopillar array technology, combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM), is perfectly suited for such implementation. By combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.
Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars
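A limit of detection such as the "13 cells" figure above is commonly estimated from a calibration curve via the 3.3*sigma/slope convention; a minimal sketch follows, with entirely invented calibration numbers (the abstract does not state how its LOD was computed, so this convention is an assumption):

```python
# Hedged LOD illustration using the common 3.3 * sigma / slope rule:
# sigma is the standard deviation of repeated blank measurements and
# the slope comes from a linear calibration fit. All data below are
# invented, not from the aptasensor experiments.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx

def lod(blank_signals, xs, ys):
    """LOD = 3.3 * (std dev of blanks) / (calibration slope)."""
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    sigma = (sum((s - mean) ** 2 for s in blank_signals) / (n - 1)) ** 0.5
    slope, _ = linear_fit(xs, ys)
    return 3.3 * sigma / slope

cells  = [0, 10, 20, 40, 80]             # spiked cell counts (invented)
signal = [0.02, 0.52, 1.01, 2.03, 4.01]  # sensor response (invented)
blanks = [0.02, 0.03, 0.01, 0.02, 0.02]  # repeated blank readings
print(lod(blanks, cells, signal))        # LOD in cells, same unit as xs
```

With this rule, shrinking the sensor area lowers the blank noise sigma, which is one route to the order-of-magnitude LOD reduction described above.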
Procedia PDF Downloads 117
53 Efficacy and Safety of Sublingual Sufentanil for the Management of Acute Pain
Authors: Neil Singla, Derek Muse, Karen DiDonato, Pamela Palmer
Abstract:
Introduction: Pain is the most common reason people visit emergency rooms. Studies indicate, however, that Emergency Department (ED) physicians often do not provide adequate analgesia to their patients as a result of gender and age bias, opiophobia, and insufficient knowledge of and formal training in acute pain management. Novel classes of analgesics have recently been introduced, but many patients suffer from acute pain in settings where the availability of intravenous (IV) access may be limited, so there remains a clinical need for rapid-acting, potent analgesics that do not require an invasive route of delivery. A sublingual sufentanil tablet (SST), dispensed using a single-dose applicator, is in development for the treatment of moderate-to-severe acute pain in a medically supervised setting. Objective: The primary objective of this study was to demonstrate the repeat-dose efficacy, safety, and tolerability of sufentanil 20 mcg and 30 mcg sublingual tablets compared to placebo for the management of acute pain, as determined by the time-weighted sum of pain intensity differences from baseline over the 12-hour study period (SPID12). Key secondary efficacy variables included SPID over the first hour (SPID1), total pain relief over the 12-hour study period (TOTPAR12), time to perceived pain relief (PR), and time to meaningful PR. Safety variables consisted of adverse events (AEs), vital signs, oxygen saturation, and early termination. Methods: In this Phase 2, double-blind, dose-finding study, an equal number of male and female patients were randomly assigned in a 2:2:1 ratio to SST 20 mcg, SST 30 mcg, or placebo, respectively, following bunionectomy. Study drug was dosed as needed, but not more frequently than hourly. Rescue medication was available as needed. The primary endpoint was the summed pain intensity difference from baseline over 12 hours (SPID12). Safety was assessed by continuous oxygen saturation monitoring and adverse event reporting.
Results: 101 patients (51 male/50 female) were randomized, 100 received study treatment (intent-to-treat [ITT] population), and 91 completed the study. Reasons for early discontinuation were lack of efficacy (6), adverse events (2), and drug-dosing error (1). Mean age was 42.5 years. For the ITT population, SST 30 mcg was superior to placebo (p=0.003) for the SPID12. SPID12 scores in the active groups were superior for both male (ANOVA overall p-value=0.038) and female (ANOVA overall p-value=0.005) patients. Statistically significant differences in favour of sublingual sufentanil were also observed between the SST 30 mcg and placebo groups for SPID1 (p<0.001), TOTPAR12 (p=0.002), time to perceived PR (p=0.023), and time to meaningful PR (p=0.010). Nausea, vomiting, and somnolence were more frequent in the sufentanil groups, but there were no significant differences between treatment arms in the proportion of patients who terminated prematurely due to AEs or inadequate analgesia. Conclusions: A sufentanil tablet dispensed sublingually using a single-dose applicator is in development for the treatment of patients with moderate-to-severe acute pain in medically supervised settings where immediate IV access is limited. When administered sublingually, sufentanil's pharmacokinetic profile and non-invasive delivery make it a useful alternative to IM or IV dosing.
Keywords: acute pain, pain management, sublingual, sufentanil
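The time-weighted SPID endpoint used above can be sketched in a few lines: each pain-intensity difference from baseline is weighted by the time elapsed since the previous assessment and summed over the window. The assessment times and scores below are invented, not trial data:

```python
# Hedged sketch of the time-weighted SPID endpoint. A positive SPID
# means net pain reduction relative to the pre-dose baseline over the
# assessment window. All numbers below are invented for illustration.

def spid(baseline, times_h, scores):
    """Time-weighted sum of pain intensity differences (SPID).

    times_h: assessment times in hours (ascending, after time 0)
    scores:  pain intensity (e.g. 0-10 scale) at each time point
    """
    total, prev_t = 0.0, 0.0
    for t, s in zip(times_h, scores):
        total += (baseline - s) * (t - prev_t)  # weight by interval
        prev_t = t
    return total

baseline = 7                  # pre-dose pain score (invented)
times  = [1, 2, 4, 8, 12]     # hours post-dose
scores = [5, 4, 3, 3, 2]      # pain scores at those times
print(spid(baseline, times, scores))  # 49.0
```

SPID1 is the same sum restricted to the first hour, and TOTPAR is computed analogously from pain-relief scores rather than intensity differences.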
Procedia PDF Downloads 356
52 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic
Abstract:
The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. Thus, humans are exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimations of the contribution induced by certain solar particle events differ by an order of magnitude. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels at the Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was of the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e. comparable to the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surface of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) that describes extensive air shower characteristics and allows assessment of the ambient dose equivalent. In this approach, the GCR description is based on the force-field approximation model. The physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD can determine the spectral fluence rate of secondary particles induced by extensive showers, considering altitudes from ground level up to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient dose equivalent conversion coefficients.
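The force-field approximation mentioned above shifts the local interstellar proton spectrum by a modulation potential and rescales it (the Gleeson-Axford form). A minimal sketch follows; the power-law parameterization of J_LIS is a simplified stand-in chosen for illustration, not the spectrum used in ATMORAD:

```python
# Hedged sketch of the force-field approximation for the GCR proton
# spectrum: J(E) at 1 AU equals the interstellar spectrum evaluated at
# E + phi, rescaled by a kinematic factor. The toy J_LIS power law
# below is an assumption for illustration only.

E0 = 0.938  # proton rest energy, GeV

def j_lis(E):
    """Toy local interstellar proton spectrum (kinetic energy E, GeV)."""
    return 1.9e4 * (E + E0) ** -2.78

def j_modulated(E, phi):
    """Force-field modulated spectrum; phi is the modulation potential
    (in GV, numerically ~GeV for protons)."""
    Es = E + phi  # energy the particle had outside the heliosphere
    return j_lis(Es) * (E * (E + 2 * E0)) / (Es * (Es + 2 * E0))

# Stronger modulation (active sun, larger phi) suppresses the
# low-energy flux reaching the atmosphere:
print(j_modulated(1.0, 0.4) > j_modulated(1.0, 1.0))  # True
```

This is why the GCR contribution to aircrew dose is anti-correlated with solar activity, while GLE events add a separate, transient SCR component on top.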
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a statistically large number of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans, with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
Keywords: cosmic ray, human dose, solar flare, aviation
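The final conversion step described above, from secondary-particle fluence rates to ambient dose equivalent H*(10) via conversion coefficients, reduces to a weighted sum over energy bins. Both the fluence spectrum and the coefficients below are invented round numbers, not ICRP tabulated values:

```python
# Hedged sketch of fluence-to-ambient-dose-equivalent conversion:
# H*(10) rate is the sum over energy bins of fluence rate times the
# bin's conversion coefficient. All numbers below are invented.

def ambient_dose_rate(fluence_rates, coefficients):
    """Return the H*(10) rate in pSv/s.

    fluence_rates: particles / (cm^2 s), one value per energy bin
    coefficients:  pSv * cm^2 per particle, per matching bin
    """
    return sum(f * h for f, h in zip(fluence_rates, coefficients))

# Three hypothetical neutron energy bins at cruise altitude:
fluence = [0.8, 0.5, 0.2]        # cm^-2 s^-1
coeff   = [100.0, 300.0, 500.0]  # pSv cm^2
rate_psv_s = ambient_dose_rate(fluence, coeff)   # 330.0 pSv/s
print(rate_psv_s * 3600 / 1e6)   # converted to uSv per hour
```

Integrating such a rate along a time-resolved flight trajectory, with the spectrum varying as cutoff rigidity and altitude change, yields the per-flight doses discussed above.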
Procedia PDF Downloads 206