Search results for: 3D object detection

369 Modeling Thermal Changes of Urban Blocks in Relation to the Landscape Structure and Configuration in Guilan Province

Authors: Roshanak Afrakhteh, Abdolrasoul Salman Mahini, Mahdi Motagh, Hamidreza Kamyab

Abstract:

Urban Heat Islands (UHIs) are distinctive urban areas characterized by densely populated central cores surrounded by less densely populated peripheral lands. These areas experience elevated temperatures, primarily due to impermeable surfaces and specific land use patterns. The consequences of these temperature variations are far-reaching, impacting the environment and society negatively, leading to increased energy consumption, air pollution, and public health concerns. This paper emphasizes the need for simplified approaches to comprehend UHI temperature dynamics and explains how urban development patterns contribute to land surface temperature variation. To illustrate this relationship, the study focuses on the Guilan Plain, utilizing techniques like principal component analysis and generalized additive models. The research centered on mapping land use and land surface temperature in the low-lying area of Guilan province. Satellite data from Landsat sensors for three different time periods (2002, 2012, and 2021) were employed. Using eCognition software, a spatial unit known as a "city block" was delineated through object-based analysis. The study also applied the normalized difference vegetation index (NDVI) method to estimate land surface radiance. Predictive variables for urban land surface temperature within residential city blocks were identified and categorized as intrinsic (related to the block's structure) and neighboring (related to adjacent blocks) variables. Principal Component Analysis (PCA) was used to select significant variables, and a Generalized Additive Model (GAM) approach, implemented using R's mgcv package, modeled the relationship between urban land surface temperature and predictor variables. Notable findings included variations in urban temperature across different years attributed to environmental and climatic factors. Block size, shared boundary, mother polygon area, and perimeter-to-area ratio were identified as the main variables for the generalized additive regression model. This model showed non-linear relationships, with block size, shared boundary, and mother polygon area positively correlated with temperature, while the perimeter-to-area ratio displayed a negative trend. The discussion highlights the challenges of predicting urban surface temperature and the significance of block size in determining urban temperature patterns. It also underscores the importance of spatial configuration and unit structure in shaping urban temperature patterns. In conclusion, this study contributes to the growing body of research on the connection between land use patterns and urban surface temperature. Block size, along with block dispersion and aggregation, emerged as key factors influencing urban surface temperature in residential areas. The proposed methodology enhances our understanding of parameter significance in shaping urban temperature patterns across various regions, particularly in Iran.
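
The workflow can be illustrated with a minimal, hedged sketch: PCA for screening the block-structure predictors named in the abstract, followed by an additive spline fit of land surface temperature. Here scikit-learn stands in for the R mgcv GAM used in the study, and all data, coefficients and knot counts are invented for illustration only.

```python
# Minimal sketch of the abstract's workflow: PCA to screen candidate predictors,
# then an additive spline (GAM-style) fit of land surface temperature (LST).
# Column names follow the abstract; the data and model settings are illustrative,
# and the study itself used R's mgcv rather than scikit-learn.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 300
blocks = pd.DataFrame({
    "block_size": rng.gamma(2.0, 500.0, n),           # m^2
    "shared_boundary": rng.uniform(0, 200, n),         # m
    "mother_polygon_area": rng.gamma(3.0, 800.0, n),   # m^2
    "perimeter_area_ratio": rng.uniform(0.02, 0.3, n),
})
# Synthetic LST (deg C) with a negative effect of the perimeter-to-area ratio
lst = (30 + 0.001 * blocks["block_size"] + 0.01 * blocks["shared_boundary"]
       - 15 * blocks["perimeter_area_ratio"] + rng.normal(0, 0.5, n))

# Step 1: PCA on standardized predictors to gauge which components carry the variance.
X = (blocks - blocks.mean()) / blocks.std()
pca = PCA().fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))

# Step 2: additive spline regression (one smooth basis expansion per predictor).
gam_like = make_pipeline(SplineTransformer(n_knots=5, degree=3), LinearRegression())
gam_like.fit(blocks, lst)
print("R^2 on training data:", round(gam_like.score(blocks, lst), 3))
```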

Keywords: urban heat island, land surface temperature, LST modeling, GAM, Gilan province

368 Microsimulation of Potential Crashes as a Road Safety Indicator

Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale

Abstract:

Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and every other commonly measured performance indicator, while traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced at intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a base for calculating the number of conflicts that will define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures that can improve safety. Unfortunately, most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers. A great number of vehicle crashes take place with roadside objects or obstacles. Only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure based on the simulation of potential crash events for the evaluation of safety levels in microsimulation traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, through random perturbation of vehicle trajectories, a set of potential crashes which can be evaluated accurately in terms of DeltaV, the energy of the impact, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators, which can be considered “simulation-based”. The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results that are expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be computed and is shown on the map for the test network, after the application of a threshold to highlight the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values have been used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that results are not dependent on the different energy thresholds and different parameters of the procedure. This paper introduces a specific new procedure and its implementation in the form of a software package that is able to assess road safety, also considering potential conflicts with roadside objects. Some of the principles that are at the base of this specific model are discussed. The procedure can be applied in common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known. The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameter values that give an accurate evaluation of the risk of any traffic scenario.
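
As a rough illustration of the crash-scoring idea (not the authors' software), the sketch below perturbs two simulated one-dimensional trajectories, tests whether the follower would reach the leader, and scores each potential crash by DeltaV and the dissipated impact energy. The masses, speeds, geometry and perturbation magnitude are all assumptions.

```python
# Illustrative sketch of the crash-scoring idea: perturb two simulated vehicle
# trajectories, test whether they would collide, and score the event by DeltaV
# and the kinetic energy dissipated in a perfectly plastic impact.
# Masses, speeds, the 1-D geometry and the perturbation magnitude are assumptions.
import numpy as np

rng = np.random.default_rng(42)
dt, steps = 0.1, 100                      # time step (s) and number of steps
m1, m2 = 1500.0, 1200.0                   # vehicle masses (kg)
v1, v2 = 12.0, 10.0                       # speeds (m/s); vehicle 1 follows vehicle 2
x1 = np.cumsum(np.full(steps, v1 * dt))               # follower positions
x2 = 25.0 + np.cumsum(np.full(steps, v2 * dt))        # leader starts 25 m ahead

def potential_crash(sigma=1.5):
    """Return (DeltaV of vehicle 1, dissipated energy in J) if the perturbed paths cross."""
    p1 = x1 + rng.normal(0, sigma, steps)
    p2 = x2 + rng.normal(0, sigma, steps)
    if np.any(p1 >= p2):                               # follower reaches the leader
        v_common = (m1 * v1 + m2 * v2) / (m1 + m2)     # common speed after plastic impact
        delta_v = abs(v1 - v_common)
        dissipated = (0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
                      - 0.5 * (m1 + m2) * v_common**2)
        return delta_v, dissipated
    return None

crashes = [c for c in (potential_crash() for _ in range(1000)) if c is not None]
print(f"{len(crashes)} potential crashes out of 1000 perturbed runs")
if crashes:
    print(f"DeltaV = {crashes[0][0]:.2f} m/s, dissipated energy = {crashes[0][1]:.0f} J")
```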

Keywords: road safety, traffic, traffic safety, traffic simulation

367 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder

Abstract:

One of the main aims of current social robotic research is to improve the robots’ abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions (including facial expression, speech, gesture or text) and using this information for improved human-computer interaction. This can be described as Affective Computing, an interdisciplinary field that expands into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Leveraging these emotional capabilities by embedding them in humanoid robots is the foundation of the concept of Affective Robots, which has the objective of making robots capable of sensing the user’s current mood and personality traits and adapting their behavior in the most appropriate manner based on that. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, and its performance is contrasted with other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments’ results show that the detection accuracy amongst the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.

Keywords: affective computing, emotion recognition, humanoid robot, human-robot-interaction (HRI), social robots

366 Chronic Aflatoxin Exposure During Pregnancy Is Associated With Lower Fetal Growth Trajectories: A Prospective Cohort Study in Rural Ethiopia

Authors: K. Tesfamariam, S. Gebreyesus, C. Lachat, P. Kolsteren, S. De Saeger, M. De Boevre, A. Argaw

Abstract:

Aflatoxins are toxic secondary metabolites produced by Aspergillus fungi, which are ubiquitously present in the food supplies of low- and middle-income countries. Studies of maternal aflatoxin exposure and fetal outcomes have mainly focused on size at birth, and the effect on intrauterine fetal growth has not been assessed using repeated longitudinal fetal biometry across gestation. Therefore, this study intends to assess the association between chronic aflatoxin exposure during pregnancy and fetal growth trajectories in a rural Ethiopian setting. In a prospective cohort study, we enrolled 492 pregnant women. A phlebotomist collected 5 mL of venous blood from eligible women before 28 completed weeks of gestation, and the aflatoxin B1-lysine concentration was determined using liquid chromatography-tandem mass spectrometry. The mean (±SD) gestational age was 19.1 (3.71) weeks at enrollment, and 28.5 (3.51) and 34.5 (2.44) weeks of gestation at the second and third rounds of ultrasound measurements, respectively. Estimated fetal weight was expressed in centiles using the INTERGROWTH-21st reference. We fitted a multivariable linear mixed-effects model to estimate the rate of fetal growth between aflatoxin-exposed (i.e., aflatoxin B1-lysine concentration above or equal to the limit of detection) and non-exposed mothers in the study. Mothers had a mean (±SD) age of 26.0 (4.58) years. The median (P25, P75) serum AFB1-lysine concentration was 12.6 (0.93, 96.9) pg/mg albumin, and aflatoxin exposure was observed in 86.6% of maternal blood samples. Eighty-five percent of the women enrolled provided at least two ultrasound measurements for analysis. On average, the aflatoxin-exposed group had a significantly lower change over time in fetal weight-for-gestational age centile than the unexposed group (β = -1.01 centiles/week, 95% CI: -1.87, -0.15, p = 0.02). Chronic maternal aflatoxin exposure is associated with lower fetal weight gain over time. Our findings emphasize the importance of nutrition-sensitive strategies to mitigate dietary aflatoxin exposure, as well as of adopting food safety measures in low-income settings, particularly during the fetal period of development.
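
A minimal sketch of the analysis idea follows, assuming statsmodels as the modeling library (the abstract does not name the software used): a linear mixed-effects model with a random intercept per fetus, where the exposure-by-gestational-age interaction estimates the difference in centile change per week. The data, visit schedule and effect sizes are synthetic stand-ins.

```python
# Sketch of the mixed-effects analysis described in the abstract: the
# 'ga_weeks:exposed' interaction captures how much faster/slower the
# weight-for-gestational-age centile changes per week in exposed fetuses.
# Data and variable names are illustrative, not the study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for sid in range(200):
    exposed = rng.random() < 0.87            # ~87% exposure prevalence, as reported
    baseline = rng.normal(50, 10)            # starting centile
    for ga in (19, 28, 34):                  # ultrasound visits (weeks)
        slope = -1.0 if exposed else 0.0     # exposed fetuses drift down ~1 centile/week
        centile = baseline + slope * (ga - 19) + rng.normal(0, 5)
        rows.append({"subject": sid, "exposed": int(exposed),
                     "ga_weeks": ga, "centile": centile})
df = pd.DataFrame(rows)

model = smf.mixedlm("centile ~ ga_weeks * exposed", df, groups=df["subject"])
fit = model.fit()
# Difference in weekly centile change (exposed vs. unexposed) and its p-value
print(fit.params["ga_weeks:exposed"], fit.pvalues["ga_weeks:exposed"])
```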

Keywords: aflatoxin, fetal growth, low-income setting, mycotoxins

365 Complete Chloroplast DNA Sequences of Georgian Endemic Polyploid Wheats

Authors: M. Gogniashvili, I. Maisaia, A. Kotorashvili, N. Kotaria, T. Beridze

Abstract:

Three types of plasmon (A, B and G) are typical for the genus Triticum. In the polyploid species Triticum turgidum L. and Triticum aestivum L., plasmon B is detected. In this paper, the complete nucleotide sequences of the chloroplast DNA of 11 representatives of Georgian polyploid wheat species carrying plasmon B were determined. Sequencing of chloroplast DNA was performed on an Illumina MiSeq platform. Chloroplast DNA molecules were assembled using the SOAPdenovo computer program. All contigs were aligned to the reference chloroplast genome sequence using BLASTN. For the detection of SNPs and indels and for phylogenetic tree construction, the computer programs MAFFT and BLAST were used. Using Triticum aestivum L. subsp. macha (Dekapr. & Menabde) Mackey var. paleocolchicum Dekapr. et Menabde as a reference, 5 SNPs can be identified in the chloroplast DNA of Georgian endemic polyploid wheat. The number of noncoding substitutions is 2, and of coding substitutions, 3. In comparison with the reference DNA, two inversions, of 38 bp and 56 bp, were observed in the paleocolchicum subspecies. Six 1 bp indels were detected in Georgian polyploid wheats, all of them at microsatellite stretches. The phylogeny tree shows that the subspecies macha, carthlicum and paleocolchicum occupy different positions. According to the simplified scheme based on SNP and indel data, the ancestral, female parent of all the studied polyploid wheats is an unknown X predecessor, from which four lines were formed. One SNP and two inversions (38 bp and 56 bp) caused the formation of subsp. paleocolchicum. The three other lines are the macha, durum and carthlicum lines. The macha line is further divided into two sublines (M_1 and M_4). The carthlicum line includes subsp. carthlicum and T. aestivum (C_1 - C_2 - A_1). One of the central questions of wheat domestication is which people(s) participated in it. It is proposed that the predecessors of the Georgian peoples (Proto-Kartvelians) must be placed, on the evidence of archaic lexical and toponymic data, in the mountainous regions of the western and central part of the Little Caucasus (the Transcaucasian foothills) at least 4,000 years ago. One possibility to explain the ‘wheat puzzle’ is that Kartvelian speakers brought domesticated wheat species and subspecies from the Fertile Crescent further north to the South Caucasus.
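
The SNP and indel calling step can be illustrated with a toy position-by-position comparison of an aligned sample sequence against the reference; the short sequences below are invented, whereas the real pipeline compared full plastomes aligned with MAFFT/BLAST.

```python
# Toy illustration of calling SNPs and small indels from a pairwise alignment
# against a reference chloroplast sequence (gaps written as '-'). Sequences are made up.
def call_variants(ref_aln, sample_aln):
    """Return lists of SNPs and indels as (reference_position, ref, alt) tuples."""
    snps, indels = [], []
    ref_pos = 0
    for r, s in zip(ref_aln, sample_aln):
        if r != "-":
            ref_pos += 1                    # advance only on reference bases
        if r == s:
            continue
        if r == "-" or s == "-":
            indels.append((ref_pos, r, s))  # insertion or deletion column
        else:
            snps.append((ref_pos, r, s))    # substitution (SNP)
    return snps, indels

ref    = "ATGCT-AGCTAATCCG"
sample = "ATGCTTAGCTA-TCTG"
snps, indels = call_variants(ref, sample)
print("SNPs:", snps)      # [(14, 'C', 'T')] -> substitution at reference position 14
print("indels:", indels)  # one 1 bp insertion and one 1 bp deletion vs. the reference
```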

Keywords: chloroplast DNA, sequencing, SNP, triticum

364 Cooling With Phase-Change-Material in Vietnam: Outcomes at 18 Months

Authors: Hang T. T. Tran, Ha T. Le, Hanh T. P. Tran, Hung V. Cao, Giang T. H. Nguyen, Dien M. Tran, Tobias Alfvén, Linus Olson

Abstract:

Background: Hypoxic Ischemic Encephalopathy (HIE) is one of the major causes of neonatal death, and those who survive with severe encephalopathy are more likely to develop adverse long-term outcomes such as neurocognitive impairment and cerebral palsy, which is a huge burden, especially in low- and middle-income countries. It is important to have long-term follow-up for early detection and to promote early intervention for this group of high-risk infants. Aim: To determine the neurological outcome of cooled infants at 18 months and to identify an optimized neurological examination scale for HIE infants in Vietnam. Method: Descriptive study of neurodevelopmental outcomes at 18 months of HIE infants who underwent therapeutic hypothermia treatment in Vietnam. All surviving cooled infants were assessed at discharge and at 6, 12, and 18 months by a pediatric physical therapist and a neurologist using two assessment tools, the Ages and Stages Questionnaires and the Hammersmith Infant Neurological Examination scale, to detect impairments and promote early intervention for those who require it. Results: During a 3-year period, a total of 130 neonates with moderate to severe HIE underwent therapeutic hypothermia treatment using a phase-change-material mattress (65% moderate, 35% severe by Sarnat staging). 43 (33%) died during hospitalization and infancy; among survivors, 69 (79%) completed 3 follow-ups at 18 months. At 18 months, 25 had cerebral palsy and 11 had mildly delayed neurodevelopment. At each time point, infants with normal or mildly delayed neurodevelopment had significantly higher Ages and Stages Questionnaires and Hammersmith Infant Neurological Examination scores (p<0.05) than those with cerebral palsy. Conclusion: The study showed that the Ages and Stages Questionnaires and the Hammersmith Infant Neurological Examination are helpful tools in the process of early diagnosis of infants at low and high neurological risk and in identifying those infants needing a specific rehabilitation programme.

Keywords: encephalopathy, phase-change-material, neurodevelopment, cerebral palsy

363 Crossing of the Intestinal Barrier Thanks to Targeted Biologics: Nanofitins

Authors: Solene Masloh, Anne Chevrel, Maxime Culot, Leonardo Scapozza, Magali Zeisser-Labouebe

Abstract:

The limited stability of clinically proven therapeutic antibodies restricts their administration to the parenteral route. However, oral administration remains the best alternative, as it is the most convenient and least invasive one. Obtaining a targeted treatment based on biologics that can be orally administered would, therefore, be an ideal situation to improve patient adherence and compliance. Nevertheless, the delivery of macromolecules through the intestine remains challenging because of their sensitivity to the harsh conditions of the gastrointestinal tract and their low permeability across the intestinal mucosa. To address this challenge, this project aims to demonstrate that targeting receptor-mediated endocytosis followed by transcytosis could maximize the intestinal uptake and transport of large molecules, such as Nanofitins. These 7 kDa affinity proteins, with binding properties similar to antibodies, have already demonstrated retained stability in the digestive tract and local efficacy. However, their size does not allow passive diffusion through the intestinal barrier. Nanofitins with a controlled affinity for membrane receptors involved in the transcytosis mechanism, which is used naturally for the transport of large molecules in humans, were generated. Proteins were expressed using ribosome display and selected based on affinity to the targeted receptor and other characteristics. Their uptake and transport ex vivo across viable porcine intestines were investigated using an Ussing chamber system. In this paper, we report the results achieved while addressing the different challenges linked to this study. To validate the ex vivo model, we first confirmed that the receptors targeted in humans are present on the porcine intestine. Then, after identifying an optimal way of detecting Nanofitins, transport experiments were performed on porcine intestines, with viability monitored during the experiment. The results, showing that the physiological process of transcytosis can be triggered by the binding of Nanofitins to their target, are reported here. In conclusion, the results show that Nanofitins can be transported across the intestinal barrier by triggering receptor-mediated transcytosis and that the ex vivo model is an interesting technique to assess the absorption of biologics through the intestine.

Keywords: ex-vivo, Nanofitins, oral administration, transcytosis

362 Determination of Pesticide Residues in Tissue of Two Freshwater Fish Species by Modified QuEChERS Method

Authors: Iwona Cieślik, Władysław Migdał, Kinga Topolska, Ewa Cieślik

Abstract:

The consumption of fish is recommended as a means of preventing serious diseases, especially cardiovascular problems. Fish is known to be a valuable source of protein (rich in essential amino acids), unsaturated fatty acids, fat-soluble vitamins, and macro- and microelements. However, it can also contain several contaminants (e.g. pesticides, heavy metals) that may pose considerable risks for humans. Among others, pesticides are of special concern. Their widespread use has resulted in the contamination of environmental compartments, including water. The occurrence of pesticides in the environment is a serious problem due to their potential toxicity. Therefore, systematic monitoring is needed. The aim of the study was to determine the organochlorine and organophosphate pesticide residues in the fish muscle tissues of the pike (Esox lucius, L.) and the rainbow trout (Oncorhynchus mykiss, Walbaum) by a modified QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, using Gas Chromatography Quadrupole Mass Spectrometry (GC/Q-MS) working in selected-ion monitoring (SIM) mode. The analysis of α-HCH, β-HCH, lindane, diazinon, disulfoton, δ-HCH, methyl parathion, heptachlor, malathion, aldrin, parathion, heptachlor epoxide, γ-chlordane, endosulfan, α-chlordane, o,p'-DDE, dieldrin, endrin, 4,4'-DDD, ethion, endrin aldehyde, endosulfan sulfate, 4,4'-DDT, and methoxychlor was performed in samples collected in the Carp Valley (Malopolska region, Poland). The age of the pike (n=6) was 3 years and its weight was 2-3 kg, while the age of the rainbow trout (n=6) was 0.5 year and its weight was 0.5-1.0 kg. Detectable pesticide residues (HCH isomers, endosulfan isomers, DDT and its metabolites, as well as methoxychlor) were present in the fish samples. However, all these compounds were below the limit of quantification (LOQ). The other examined pesticide residues were below the limit of detection (LOD). Therefore, the levels of contamination were, in all cases, below the default Maximum Residue Levels (MRLs) established by Regulation (EC) No 396/2005 of the European Parliament and of the Council. The monitoring of pesticide residue content in fish is required to minimize potential adverse effects on the environment and human exposure to these contaminants.
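
The reporting logic described here (detected vs. below LOD, quantifiable vs. below LOQ, and comparison against the MRL) can be sketched as a simple threshold classification; every numeric limit and measured value below is hypothetical and serves only to illustrate the decision rules.

```python
# Sketch of the residue-reporting logic: each measured concentration is compared
# to the limit of detection (LOD), limit of quantification (LOQ) and the EU
# Maximum Residue Level (MRL). All numeric values below are illustrative.
LIMITS = {                        # compound: (LOD, LOQ, MRL) in mg/kg, hypothetical
    "lindane":      (0.001, 0.003, 0.01),
    "4,4'-DDT":     (0.002, 0.005, 0.05),
    "methoxychlor": (0.002, 0.006, 0.01),
}

def classify(compound, concentration):
    lod, loq, mrl = LIMITS[compound]
    if concentration < lod:
        return "< LOD (not detected)"
    if concentration < loq:
        return "detected, < LOQ (not quantifiable)"
    return "quantified, exceeds MRL" if concentration > mrl else "quantified, below MRL"

for compound, value in [("lindane", 0.0005), ("4,4'-DDT", 0.004), ("methoxychlor", 0.02)]:
    print(f"{compound}: {value} mg/kg -> {classify(compound, value)}")
```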

Keywords: contaminants, fish, pesticide residues, QuEChERS method

361 Climate-Related Financial Risk in the Automobile Industry and the Impact on Financial Institutions

Authors: Mahalakshmi Vivekanandan S.

Abstract:

As per the recent changes happening in global policies, climate-related changes and the impact they cause across every sector are viewed as green swan events: in essence, climate-related changes can often happen and lead to risk and a lot of uncertainty, but they need to be mitigated instead of being considered black swan events. This raises the question of how this risk can be computed so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types: credit risk, market risk, operational risk, liquidity risk, reputational risk and other risk types. The models required to compute this have to consider the different industrial needs of the counterparty, as well as the factors that are contributing to it, be it in the form of different risk drivers, different transmission channels, or different approaches and the granular form of data availability. This suggests that climate-related changes, though they affect Pillar I risks, will be a Pillar II risk. This has to be modeled specifically based on the financial institution’s actual exposure to different industries instead of generalizing the risk charge, and it will have to be considered as additional capital to be met by the financial institution on top of its Pillar I risks as well as its existing Pillar II risks. In this paper, the author presents a risk assessment framework to model and assess climate change risks, for both credit and market risks. This framework helps in assessing the different scenarios and how the different transition risks affect the risk associated with the different parties. This research paper delves into the topic of the increase in the concentration of greenhouse gases that in turn cause global warming. It then considers the various scenarios of having the different risk drivers impacting the credit and market risk of an institution by understanding the transmission channels and also considering the transition risk. The paper then focuses on an industry that is fast seeing disruption: the automobile industry. The paper uses the framework to show how climate changes and changes to the relevant policies have impacted the entire financial institution. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also focuses on the climate risk calculation for the Pillar II capital calculations and on why it makes sense for the bank to maintain this in addition to its regular Pillar I and Pillar II capital.
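
A toy illustration of the scenario-modeling idea follows; it is not the author's framework. It probability-weights expected credit losses on an automobile-sector exposure under a few transition scenarios and treats the excess over the baseline as an indicative Pillar II add-on. Scenario names, probabilities, PD/LGD figures and the exposure amount are all hypothetical.

```python
# Toy illustration of a scenario-weighted Pillar II add-on for climate transition
# risk on an automobile-sector credit portfolio. Scenario names, probabilities,
# PD multipliers, LGD and the exposure figure are hypothetical assumptions.
exposure = 500_000_000          # exposure to automobile counterparties (currency units)
baseline_pd, lgd = 0.020, 0.45  # through-the-cycle PD and LGD

scenarios = {                   # name: (probability, PD multiplier under the scenario)
    "orderly transition":    (0.50, 1.2),
    "disorderly transition": (0.35, 2.0),
    "hot house world":       (0.15, 3.0),
}

baseline_el = exposure * baseline_pd * lgd
weighted_el = sum(p * exposure * baseline_pd * mult * lgd
                  for p, mult in scenarios.values())
pillar2_addon = max(weighted_el - baseline_el, 0.0)

print(f"baseline expected loss:          {baseline_el:,.0f}")
print(f"scenario-weighted expected loss: {weighted_el:,.0f}")
print(f"indicative Pillar II add-on:     {pillar2_addon:,.0f}")
```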

Keywords: capital calculation, climate risk, credit risk, pillar ii risk, scenario modeling

360 Single Cell Analysis of Circulating Monocytes in Prostate Cancer Patients

Authors: Leander Van Neste, Kirk Wojno

Abstract:

The innate immune system reacts to foreign insult in several unique ways, one of which is phagocytosis of perceived threats such as cancer, bacteria, and viruses. The goal of this study was to look for evidence of phagocytosed RNA from tumor cells in circulating monocytes. While all monocytes possess phagocytic capabilities, the non-classical CD14+/FCGR3A+ monocytes and the intermediate CD14++/FCGR3A+ monocytes most actively remove threatening ‘external’ cellular materials. Purified CD14-positive monocyte samples from fourteen patients recently diagnosed with clinically localized prostate cancer (PCa) were investigated by single-cell RNA sequencing using the 10X Genomics protocol followed by paired-end sequencing on Illumina’s NovaSeq. Similarly processed samples from seven subjects were used as controls, i.e., one patient who underwent biopsy but was found not to harbor prostate cancer (benign), three young, healthy men, and three men previously diagnosed with prostate cancer who had recently undergone (curative) radical prostatectomy (post-RP). Sequencing data were mapped using 10X Genomics’ CellRanger software, and viable cells were subsequently identified using CellBender, removing technical artifacts such as doublets and non-cellular RNA. Next, data analysis was performed in R using the Seurat package. Because the main goal was to identify differences between PCa patients and ‘control’ patients, rather than exploring differences between individual subjects, the individual Seurat objects of all 21 patients were merged into one Seurat object, per Seurat’s recommendation. Finally, the single-cell dataset was normalized as a whole prior to further analysis. Cell identity was assessed using the SingleR and celldex packages. The Monaco Immune Data was selected as the reference dataset, consisting of bulk RNA-seq data of sorted human immune cells. The Monaco classification was supplemented with normalized PCa data obtained from The Cancer Genome Atlas (TCGA), which consists of bulk RNA sequencing data from 499 prostate tumor tissues (including 1 metastatic) and 52 (adjacent) normal prostate tissues. SingleR was subsequently run on the combined immune cell and PCa datasets. As expected, the vast majority of cells were labeled as having a monocytic origin (~90%), with the most noticeable difference being the larger proportion of intermediate monocytes in the PCa patients (13.6% versus 7.1%; p<.001). In men harboring PCa, 0.60% of all purified monocytes were classified as harboring PCa signals when the TCGA data were included. This was 3-fold, 7.5-fold, and 4-fold higher compared to post-RP, benign, and young men, respectively (all p<.001). In addition, with 7.91%, the number of unclassified cells, i.e., cells with pruned labels due to high uncertainty of the assigned label, was also highest in men with PCa, compared to 3.51%, 2.67%, and 5.51% of cells in post-RP, benign, and young men, respectively (all p<.001). It can be postulated that actively phagocytosing cells are the hardest to classify due to their dual immune cell and foreign cell nature. Hence, the higher number of unclassified cells and intermediate monocytes in PCa patients might reflect higher phagocytic activity due to tumor burden. This also illustrates that small numbers (~1%) of circulating peripheral blood monocytes that have interacted with tumor cells might still possess detectable phagocytosed tumor RNA.
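
The kind of proportion comparison reported here (e.g., intermediate monocytes in 13.6% of PCa cells versus 7.1% of control cells) can be sketched with a chi-square test on a 2x2 count table. Only the percentages come from the abstract; the absolute cell counts below are hypothetical.

```python
# Sketch of a two-group proportion comparison like the one reported in the abstract.
# The percentages (13.6% vs. 7.1%) are from the text; the cell totals are made up.
from scipy.stats import chi2_contingency

pca_intermediate, pca_other = 1360, 8640      # 13.6% of 10,000 hypothetical PCa cells
ctrl_intermediate, ctrl_other = 355, 4645     # 7.1% of 5,000 hypothetical control cells

table = [[pca_intermediate, pca_other],
         [ctrl_intermediate, ctrl_other]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```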

Keywords: circulating monocytes, phagocytic cells, prostate cancer, tumor immune response

359 Outcome of Emergency Response Team System in In-Hospital Cardiac Arrest

Authors: Jirapat Suriyachaisawat, Ekkit Surakarn

Abstract:

Introduction: To improve early detection and the mortality rate of in-hospital cardiac arrest, an Emergency Response Team (ERT) system was planned and implemented from June 2009 to detect pre-arrest conditions and to respond to any concerns. The ERT consisted of on-duty physicians and nurses from the emergency department. The ERT calling criteria consisted of acute change of HR < 40 or > 130 beats per minute, systolic blood pressure < 90 mmHg, respiratory rate < 8 or > 28 breaths per minute, O2 saturation < 90%, acute change in conscious state, acute chest pain, or worry about the patient. From the data on ERT system implementation in our hospital in the early phase (during June 2009-2011), there was no statistically significant difference in in-hospital cardiac arrest incidence or in the overall hospital mortality rate. Since the introduction of the ERT service in our hospital, we have conducted a continuous educational campaign to improve awareness in an attempt to increase use of the service. Methods: To investigate the outcome of the ERT system on in-hospital cardiac arrest and the overall hospital mortality rate, we conducted a prospective, controlled before-and-after examination of the long-term effect of an ERT system on the incidence of cardiac arrest. We performed chi-square analysis to assess statistical significance. Results: Of a total of 623 ERT cases from June 2009 until December 2012, there were 72 calls in 2009, 196 calls in 2010, 139 calls in 2011 and 245 calls in 2012. The number of ERT calls per 1000 admissions was 7.69 in 2009-10, 5.61 in 2011 and 9.38 in 2013. The number of code blue calls per 1000 admissions decreased significantly from 2.28 to 0.99 per 1000 admissions (P value < 0.001). The incidence of cardiac arrest decreased progressively from 1.19 to 0.34 per 1000 admissions, with a significant difference in 2012 (P value < 0.001). The overall hospital mortality rate decreased by 8%, from 15.43 to 14.43 per 1000 admissions (P value 0.095). Conclusions: ERT system implementation was associated with a progressive reduction in cardiac arrests over a three-year period, with a statistically significant difference in the 4th year after implementation. We also found an inverse association between the number of ERT uses and the risk of occurrence of cardiac arrests, but we did not find a difference in the overall hospital mortality rate.

Keywords: emergency response team, ERT, cardiac arrest, emergency medicine

358 Comprehensive Multilevel Practical Condition Monitoring Guidelines for Power Cables in Industries: Case Study of Mobarakeh Steel Company in Iran

Authors: S. Mani, M. Kafil, E. Asadi

Abstract:

Condition Monitoring (CM) of electrical equipment has gained remarkable importance during recent years, due to huge production losses, substantial imposed costs, and increases in vulnerability, risk, and uncertainty levels. Power cables feed numerous pieces of electrical equipment such as transformers, motors, and electric furnaces; thus their condition assessment is of very great importance. This paper investigates electrical, structural and environmental failure sources, all of which influence cables' performance and limit their uptime, and provides a comprehensive framework entailing practical CM guidelines for the maintenance of cables in industries. The multilevel CM framework presented in this study covers the performance-indicative features of power cables, with a focus on both online and offline diagnosis and test scenarios, and covers short-term and long-term threats to the operation and longevity of power cables. The study, after concisely overviewing the concept of CM, thoroughly investigates five major areas: power quality; insulation quality features of partial discharges, tan delta and voltage withstand capabilities; sheath faults; shield currents; and the environmental features of temperature and humidity. It also elaborates the interconnections and mutual impacts between those areas, using mathematical formulation and practical guidelines. Detection, location, and severity identification methods for every threat or fault source are also elaborated. Finally, the comprehensive, practical guidelines developed in the study are presented for the specific case of Electric Arc Furnace (EAF) feeder MV power cables in Mobarakeh Steel Company (MSC), the largest steel company in the MENA region, in Iran. The specific technical and industrial characteristics and limitations of a harsh industrial environment like the MSC EAF feeder cable tunnels are imposed on the presented framework, making the suggested package more practical and tangible.

Keywords: condition monitoring, diagnostics, insulation, maintenance, partial discharge, power cables, power quality

357 Evaluation of Different Liquid Scintillation Counting Methods for 222Rn Determination in Waters

Authors: Jovana Nikolov, Natasa Todorovic, Ivana Stojkovic

Abstract:

Monitoring of 222Rn in drinking or surface waters, as well as in groundwater, has been performed in connection with geological, hydrogeological and hydrological surveys and health hazard studies. Liquid scintillation counting (LSC) is often the preferred analytical method for 222Rn measurements in waters because it allows multiple-sample automatic analysis. The LSC method involves mixing water samples with an organic scintillation cocktail, which triggers radon diffusion from the aqueous into the organic phase, for which it has a much greater affinity, thereby eliminating the possibility of radon emanation. Two direct LSC methods that assume different sample compositions have been presented, optimized and evaluated in this study. The one-phase method assumes direct mixing of a 10 ml sample with 10 ml of an emulsifying cocktail (the Ultima Gold AB scintillation cocktail is used). The two-phase method involves the use of water-immiscible cocktails (in this study, High Efficiency Mineral Oil Scintillator, Opti-Fluor O and Ultima Gold F are used). Calibration samples were prepared with an aqueous 226Ra standard in 20 ml glass vials and counted on the ultra-low background spectrometer Quantulus 1220TM, equipped with a PSA (Pulse Shape Analysis) circuit which discriminates alpha/beta spectra. Since the calibration procedure is carried out with a 226Ra standard, which has both alpha and beta progenies, it is clear that the PSA discriminator is of vital importance in order to provide reliable and precise spectra separation. Consequently, the calibration procedure was carried out through an investigation of the influence of the PSA discriminator level on the 222Rn detection efficiency, using the 226Ra calibration standard over a wide range of activity concentrations. Evaluation of the presented methods was based on the obtained detection efficiencies and the achieved Minimal Detectable Activity (MDA). The comparison of the presented methods, their accuracy and precision, as well as the performance of the different scintillation cocktails, was based on the results of measurements of 226Ra-spiked water samples with known activity and of environmental samples.
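
The MDA figure used to evaluate such methods is commonly computed with a Currie-type formula from the background count, detection efficiency, counting time and sample volume; the sketch below shows that calculation with illustrative values, not the ones measured in this study.

```python
# Sketch of a Currie-type Minimum Detectable Activity (MDA) estimate, as commonly
# used when evaluating LSC methods. Background counts, efficiency, counting time
# and sample volume below are illustrative, not the values from this study.
import math

def mda_bq_per_litre(background_counts, efficiency, count_time_s, volume_l):
    """Currie formula: MDA = (2.71 + 4.65*sqrt(B)) / (eff * t * V), in Bq/L."""
    detection_limit_counts = 2.71 + 4.65 * math.sqrt(background_counts)
    return detection_limit_counts / (efficiency * count_time_s * volume_l)

# Example: 120 background counts in 3600 s, 95% detection efficiency, 10 mL sample
print(f"MDA = {mda_bq_per_litre(120, 0.95, 3600, 0.010):.2f} Bq/L")
```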

Keywords: 222Rn in water, Quantulus1220TM, scintillation cocktail, PSA parameter

356 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flow and the purpose of reporting the data are different and depend on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the dataset constructed for each time point, which contains all required information for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrating methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and for identifying the statistical significance of data-reporting event cases. The Grubbs test is often used as it tests one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by authors, except where the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select those forms of construction of genetic algorithms which have more possibilities to extract the best solution. For freight delivery management, schemas of genetic algorithm structure are used as a more effective technique. Due to that, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor. The authors suggest a methodology for the multi-objective analysis which evaluates the collected context data sets and uses this evaluation to determine a delivery corridor for a freight transfer service in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
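
The data-cleaning step can be illustrated with a small sketch of the two-sided Grubbs test at the 99% confidence level mentioned above; the fuel-consumption readings are invented, and only a single-outlier pass is shown.

```python
# Sketch of the two-sided Grubbs outlier test used for data cleaning in the
# methodology (confidence level 99%, i.e. alpha = 0.01). Data values are made up.
import numpy as np
from scipy import stats

def grubbs_outlier(values, alpha=0.01):
    """Return (index, value) of a single outlier if G exceeds the critical value, else None."""
    x = np.asarray(values, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    g = abs(x[idx] - mean) / sd
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)                 # critical t value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return (idx, x[idx]) if g > g_crit else None

fuel_l_per_100km = [31.2, 30.8, 29.9, 32.1, 30.5, 45.6, 31.0, 30.2]
print(grubbs_outlier(fuel_l_per_100km))   # flags the 45.6 reading as an outlier
```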

Keywords: multi-objective, analysis, data flow, freight delivery, methodology

355 Nonlinear Evolution of the Pulses of Elastic Waves in Geological Materials

Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas

Abstract:

The nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus ‘GEOSCAN-02M’. Ultrasonic pulses are excited by pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ. This energy can be reduced to 20 mJ by light filters. The laser beam radius did not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression and a rarefaction phase. The amplitude of the rarefaction phase is five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cube-shaped specimens of Karelian gabbro with an edge length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction one increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structural defects. Using computer processing of these signals, images are obtained of the cross-sections of the specimens with cracks. As the reference pulse amplitude is increased from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a decrease by a factor of 2.5 of the amplitude of the rarefaction phase and in an increase of its duration by a factor of 2.1. As the reference pulse amplitude is increased from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen; the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials.

Keywords: cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock

354 Long-Term Outcome of Emergency Response Team System in In-Hospital Cardiac Arrest

Authors: Jirapat Suriyachaisawat, Ekkit Surakarn

Abstract:

Introduction: To improve early detection and the mortality rate of in-hospital cardiac arrest, an Emergency Response Team (ERT) system was planned and implemented from June 2009 to detect pre-arrest conditions and to respond to any concerns. The ERT consisted of on-duty physicians and nurses from the emergency department. The ERT calling criteria consisted of acute change of HR < 40 or > 130 beats per minute, systolic blood pressure < 90 mmHg, respiratory rate < 8 or > 28 breaths per minute, O2 saturation < 90%, acute change in conscious state, acute chest pain, or worry about the patient. From the data on ERT system implementation in our hospital in the early phase (during June 2009-2011), there was no statistically significant difference in in-hospital cardiac arrest incidence or in the overall hospital mortality rate. Since the introduction of the ERT service in our hospital, we have conducted a continuous educational campaign to improve awareness in an attempt to increase use of the service. Methods: To investigate the outcome of the ERT system on in-hospital cardiac arrest and the overall hospital mortality rate, we conducted a prospective, controlled before-and-after examination of the long-term effect of an ERT system on the incidence of cardiac arrest. We performed chi-square analysis to assess statistical significance. Results: Of a total of 623 ERT cases from June 2009 until December 2012, there were 72 calls in 2009, 196 calls in 2010, 139 calls in 2011 and 245 calls in 2012. The number of ERT calls per 1000 admissions was 7.69 in 2009-10, 5.61 in 2011 and 9.38 in 2013. The number of code blue calls per 1000 admissions decreased significantly from 2.28 to 0.99 per 1000 admissions (P value < 0.001). The incidence of cardiac arrest decreased progressively from 1.19 to 0.34 per 1000 admissions, with a significant difference in 2012 (P value < 0.001). The overall hospital mortality rate decreased by 8%, from 15.43 to 14.43 per 1000 admissions (P value 0.095). Conclusions: ERT system implementation was associated with a progressive reduction in cardiac arrests over a three-year period, with a statistically significant difference in the 4th year after implementation. We also found an inverse association between the number of ERT uses and the risk of occurrence of cardiac arrests, but we did not find a difference in the overall hospital mortality rate.

Keywords: cardiac arrest, outcome, in-hospital, ERT

353 Non-intrusive Hand Control of Drone Using an Inexpensive and Streamlined Convolutional Neural Network Approach

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

The purpose of this work is to develop a method for classifying hand signals and using the output in a drone control algorithm. To achieve this, methods based on Convolutional Neural Networks (CNNs) were applied. CNNs are a subset of deep learning, which allows grid-like inputs to be processed and passed through a neural network to be trained for classification. This type of neural network allows for classification via imaging, which is less intrusive than previous methods using biosensors, such as EMG sensors. Classification CNNs operate purely on the pixel values in an image; therefore, they can be used without additional exteroceptive sensors. A development bench was constructed using a desktop computer connected to a high-definition webcam mounted on a scissor arm. This allowed the camera to be pointed downwards at the desk to provide a constant solid background for the dataset and a clear detection area for the user. A MATLAB script was created to automate dataset image capture at the development bench and save the images to the desktop. This allowed the user to create their own dataset of 12,000 images within three hours. These images were evenly distributed among seven classes. The defined classes include forward, backward, left, right, idle, and land. The drone has a popular flip function, which was also included as an additional class. To simplify control, the corresponding hand signals chosen were the numerical hand signs for one through five for movements, a fist for land, and the universal “ok” sign for the flip command. Transfer learning with PyTorch (Python) was performed using a pre-trained 18-layer residual learning network (ResNet-18) to retrain the network for custom classification. An algorithm was created to interpret the classification and send encoded messages to a Ryze Tello drone over its 2.4 GHz Wi-Fi connection. The drone’s movements were performed in half-meter distance increments at a constant speed. When combined with the drone control algorithm, the classification performed as desired with negligible latency when compared to the delay in the drone’s movement commands.
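
The transfer-learning step can be sketched as follows: a pretrained ResNet-18 has its final fully connected layer replaced with a seven-class head for the hand signs named in the abstract. PyTorch and ResNet-18 are the tools the paper states; the dataset directory, batch size, learning rate and epoch count are assumptions for illustration.

```python
# Sketch of the transfer-learning step described in the abstract: a pretrained
# ResNet-18 with its final layer replaced for the seven hand-sign classes.
# The dataset path and hyperparameters are assumptions, not values from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

classes = ["forward", "backward", "left", "right", "idle", "land", "flip"]

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("hand_signs/train", transform=transform)  # hypothetical path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(classes))   # new 7-class head
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # retrain only the head here
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```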

Keywords: classification, computer vision, convolutional neural networks, drone control

352 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of the Cognitive Impairment: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

The goal of this paper is to present the diagnostic contribution that the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) brings to detecting cognitive impairment and to monitoring the progress of degenerative disorders. The diagnostic significance is underlined by the interpretation of the MMSE-2:EV scores obtained from applying the test to patients with mild and major neurocognitive disorders. The original MMSE is one of the most widely used screening tools for detecting cognitive impairment, in clinical settings but also in the field of neurocognitive research. Now, practitioners and researchers are turning their attention to the MMSE-2. To enhance its clinical utility, the new instrument was enriched and reorganized into three versions (MMSE-2:BV, MMSE-2:SV and MMSE-2:EV), each with two forms: blue and red. The MMSE-2 has been adapted and used successfully in Romania since 2013. The cases were selected from current practice in order to cover a broad and significant range of neurocognitive pathology: mild cognitive impairment, Alzheimer’s disease, vascular dementia, mixed dementia, Parkinson’s disease, and conversion of mild cognitive impairment into Alzheimer’s disease. The MMSE-2:EV version was used: it was applied one month after the initial assessment, three months after the first reevaluation and then every six months, alternating the blue and red forms. Adjusted for age and educational level, the raw scores were converted into T scores and then, with the mean and the standard deviation, the z scores were calculated. The differences in raw scores between evaluations were analyzed for statistical significance in order to establish the progression of the disease over time. The results indicated that the psycho-diagnostic approach for the evaluation of cognitive impairment with the MMSE-2:EV is safe and that the application interval is optimal. The alternation of the forms prevents the learning phenomenon. Diagnostic accuracy and efficient therapeutic conduct derive from the use of the national test norms. In clinical settings with a large flux of patients, the application of the MMSE-2:EV is a safe and fast psycho-diagnostic solution. Clinicians can draw objective conclusions, and for the patients the procedure does not take too much time and energy, does not bother them, and does not force them to travel frequently.
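
The score standardization mentioned above can be sketched as the usual conversion from a raw score to a z score (using the normative mean and standard deviation for the examinee's age and education stratum) and to a T score (mean 50, SD 10). The normative values in the example are illustrative, not the Romanian MMSE-2 norms.

```python
# Sketch of the standardization described in the abstract: raw score -> z score
# (relative to the age/education norm group) -> T score with mean 50 and SD 10.
# The normative mean and SD below are illustrative stand-ins for the test norms.
def standardize(raw_score, norm_mean, norm_sd):
    z = (raw_score - norm_mean) / norm_sd
    t = 50 + 10 * z
    return z, t

# Example: raw score 71 in a stratum with a hypothetical normative mean 78 and SD 6
z, t = standardize(71, norm_mean=78, norm_sd=6)
print(f"z = {z:.2f}, T = {t:.1f}")   # z = -1.17, T = 38.3
```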

Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology

351 Factors Associated with Seroconversion of Oral Polio Vaccine among Children under 5 Years in District Mirpurkhas, Pakistan, 2015

Authors: Muhammad Asif Syed, Mirza Amir Baig

Abstract:

Background: Pakistan is one of the two remaining polio-endemic countries, posing a significant public health challenge for global polio eradication due to the failure to interrupt polio transmission. Country-specific seroprevalence studies help in evaluating immunization program performance and the susceptibility of the population to poliovirus, and in identifying the existing level of immunity together with the factors that affect seroconversion of the oral polio vaccine (OPV). The objective of the study was to find out the factors associated with seroconversion of the OPV among children aged 6-59 months in Pakistan. Methods: A hospital-based cross-sectional serosurvey was undertaken in May-June 2015 in District Mirpurkhas, Sindh, Pakistan. A total of 180 children aged 6–59 months were selected using systematic random sampling from Muhammad Medical College Hospital, Mirpurkhas. Demographic, vaccination history and risk factor information was collected from the parents/guardians. Blood samples were collected and tested for the detection of poliovirus IgG antibodies using an ELISA kit. IgG titers of <10 IU/ml, 50 to <150 IU/ml and >150 IU/ml were defined as negative, weakly positive and positive immunity, respectively. The Pearson chi-square test was used to determine differences in seroprevalence in univariate analysis. Results: A total of 180 subjects were enrolled; the mean age was 23 months (range 7-59 months). Of these, 160 (89%) children were well protected and 18 (10%) partially protected against poliovirus. Two (1.1%) children had no protection against poliovirus, as their poliovirus IgG antibody titer was <10 IU/ml. Both negative cases were female, in the 12-23 month age group, from an urban area, and had a BMI below the 50th percentile. There was a difference between normal and wasted children, and it attained statistical significance (χ2=35.5, p=0.00). Differences in seroconversion were also observed in relation to gender (χ2=6.23, p=0.04), duration of breastfeeding (χ2=18.6, p=0.04), history of diarrheal disease before polio vaccine administration (χ2=7.7, p=0.02), and stunting (χ2=114, p=0.00). Conclusion: This study demonstrated that nearly 90% of children achieved seroconversion of the OPV and were well protected against poliovirus. There is an urgent need to focus on factors such as the duration of breastfeeding, diarrheal diseases and malnutrition (acute and chronic) among children as part of the immunization strategy.

Keywords: seroconversion, oral polio vaccine, Polio, Pakistan

350 Multi-Dimensional Experience of Processing Textual and Visual Information: Case Study of Allocations to Places in the Mind’s Eye Based on Individual’s Semantic Knowledge Base

Authors: Joanna Wielochowska, Aneta Wielochowska

Abstract:

Whilst the relationship between scientific areas such as cognitive psychology, neurobiology and the philosophy of mind has been emphasized in recent decades of scientific research, concepts and discoveries made in these fields overlap and complement each other in their quest for answers to similar questions. The object of the following case study is to describe, analyze and illustrate the nature and characteristics of a certain cognitive experience which appears to display features of synaesthesia, or rather high-level synaesthesia (ideasthesia). The research was conducted on the two authors, monozygotic twins (both polysynaesthetes) experiencing involuntary associations of an identical nature. The authors made attempts to identify which cognitive and conceptual dependencies may guide this experience. Operating on self-introduced nomenclature, the described phenomenon, multi-dimensional processing of textual and visual information, aims to define a relationship that involuntarily and immediately couples the content introduced by means of text or image with a sensation of appearing in a certain place in the mind’s eye. More precisely: (I) defining a concept introduced by means of textual content during the activity of reading or writing, or (II) defining a concept introduced by means of visual content during the activity of looking at image(s), is accompanied by a simultaneous sensation of being allocated to a given place in the mind’s eye. A place can then be defined as a cognitive representation of a certain concept. During the activity of processing information, a person has an immediate and involuntary feeling of appearing in a certain place themselves, just like a character in a story, ‘observing’ a venue or scenery from one or more perspectives and angles. That forms a unique and unified experience, constituting a background mental landscape of the text or image being looked at. We came to the conclusion that semantic allocations to a given place can be divided and classified into categories and subcategories and are naturally linked with an individual’s semantic knowledge base. A place can be defined as a representation of one’s unique idea of a given concept that has been established in their semantic knowledge base. The multi-level structure of the selectivity of places in the mind’s eye, as a reaction to given information (one stimulus), draws comparisons to structures and patterns found in botany. Double-flowered varieties of flowers and the whorl arrangement that is characteristic of the components of some flower species were given as illustrative examples. A composition of petals that fan out from one single point and wrap around a stem inspired the idea that, just as in nature, in the philosophy of mind there are patterns driven by a logic specific to a given phenomenon. The study intertwines terms perceived through the philosophical lens, such as the definition of meaning, the subjectivity of meaning, the mental atmosphere of places, and others. Analysis of this rare experience aims to contribute to the constantly developing theoretical framework of the philosophy of mind and to influence the way the human semantic knowledge base, and the processing of given content in terms of distinguishing between information and meaning, are researched.

Keywords: information and meaning, information processing, mental atmosphere of places, patterns in nature, philosophy of mind, selectivity, semantic knowledge base, senses, synaesthesia

349 Self-rated Health as a Predictor of Hospitalizations in Patients with Bipolar Disorder and Major Depression: A Prospective Cohort Study of the United Kingdom Biobank

Authors: Haoyu Zhao, Qianshu Ma, Min Xie, Yunqi Huang, Yunjia Liu, Huan Song, Hongsheng Gui, Mingli Li, Qiang Wang

Abstract:

Rationale: Bipolar disorder (BD) and major depressive disorder (MDD), as severe chronic illnesses that restrict patients’ psychosocial functioning and reduce their quality of life, are both categorized as mood disorders. Emerging evidence has suggested that the reliability of self-rated health (SRH) is well validated and that the risk of various health outcomes, including mortality and health care costs, can be predicted by SRH. Compared with other lengthy multi-item patient-reported outcome (PRO) measures, SRH has been proven to have a comparable ability to predict mortality and healthcare utilization. However, to our knowledge, no study has been conducted to assess the association between SRH and hospitalization among people with mental disorders. Therefore, our study aims to determine the association between SRH and subsequent all-cause hospitalizations in patients with BD and MDD. Methods: We conducted a prospective cohort study on people with BD or MDD in the UK from 2006 to 2010 using UK Biobank touchscreen questionnaire data and linked administrative health databases. The association between SRH and 2-year all-cause hospitalizations was assessed using proportional hazards regression after adjustment for sociodemographics, lifestyle behaviors, previous hospitalization use, the Elixhauser comorbidity index, and environmental factors. Results: A total of 29,966 participants were identified, experiencing 10,279 hospitalization events. Among the cohort, the average age was 55.88 (SD 8.01) years, 64.02% were female, and 3,029 (10.11%), 15,972 (53.30%), 8,313 (27.74%), and 2,652 (8.85%) reported excellent, good, fair, and poor SRH, respectively. Among patients reporting poor SRH, 54.19% had a hospitalization event within 2 years, compared with 22.65% of those reporting excellent SRH. In the adjusted analysis, patients with good, fair, and poor SRH had 1.31 (95% CI 1.21-1.42), 1.82 (95% CI 1.68-1.98), and 2.45 (95% CI 2.22-2.70) times the hazard of hospitalization, respectively, compared with those with excellent SRH. Conclusion: SRH was independently associated with subsequent all-cause hospitalizations in patients with BD or MDD. This large study facilitates rapid interpretation of SRH values and underscores the need for proactive SRH screening in this population, which might inform resource allocation and enhance high-risk population detection.
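
A minimal sketch of the proportional hazards analysis follows, using the lifelines package on synthetic data (the abstract does not name the software, so the package choice, column names, covariates and event rates are all assumptions). Events are censored at the 2-year follow-up window described above.

```python
# Sketch of a Cox proportional hazards analysis like the one described in the
# abstract, fitted with the lifelines package on synthetic data. Column names,
# covariates and rates are simplified stand-ins for the UK Biobank variables.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 5000
srh = rng.integers(0, 4, n)                        # 0=excellent, 1=good, 2=fair, 3=poor
base_rate = 0.15 * 1.35 ** srh                     # worse SRH -> higher hazard (assumed)
time_to_event = rng.exponential(2.0 / base_rate, n)
df = pd.DataFrame({
    "srh": srh,
    "age": rng.normal(56, 8, n),
    "duration": np.minimum(time_to_event, 2.0),     # censor at 2 years of follow-up
    "hospitalized": (time_to_event <= 2.0).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="hospitalized")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```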

Keywords: severe mental illnesses, hospitalization, risk prediction, patient-reported outcomes

Procedia PDF Downloads 141
348 Determination of Cyclic Citrullinated Peptide Antibodies on Quartz Crystal Microbalance Based Nanosensors

Authors: Y. Saylan, F. Yılmaz, A. Denizli

Abstract:

Rheumatoid arthritis (RA) is the most common autoimmune disorder, in which the body's own immune system attacks healthy cells. RA has both articular and systemic effects. Until now, the rheumatoid factor (RF) assay has been the test most commonly used to diagnose RA, but it is not specific. Anti-cyclic citrullinated peptide (anti-CCP) antibodies are IgG autoantibodies that recognize citrullinated peptides and offer improved specificity in the early diagnosis of RA compared to RF. Anti-CCP antibodies have a specificity for the diagnosis of RA of 91-98% and a sensitivity of 41-68%. Molecularly imprinted polymers (MIP) are materials that are easy to prepare, inexpensive, and stable, have a talent for molecular recognition, and can be manufactured in large quantities with good reproducibility. Molecular recognition-based adsorption techniques have received much attention in several fields because of their high selectivity for target molecules. The quartz crystal microbalance (QCM) is an effective, simple, and inexpensive approach in which mass changes are converted into an electrical signal. For the specific determination of chemical substances or biomolecules, the crystal electrodes are covered with thin films that bind or adsorb the target molecules. In this study, we have focused our attention on combining molecular imprinting in nanofilms with the QCM nanosensor approach, producing a QCM nanosensor for anti-CCP, chosen as a model protein, using anti-CCP imprinted nanofilms. To this end, the anti-CCP imprinted QCM nanosensor was characterized by Fourier transform infrared spectroscopy, atomic force microscopy, contact angle measurements and ellipsometry. A non-imprinted nanosensor was also prepared to evaluate the selectivity of the imprinted nanosensor. The anti-CCP imprinted QCM nanosensor was tested for real-time detection of anti-CCP from aqueous solution. Kinetic and affinity studies were performed using anti-CCP solutions of different concentrations. The responses related to mass shifts (Δm) and frequency shifts (Δf) were used to evaluate adsorption properties and to calculate binding (Ka) and dissociation (Kd) constants. To show the selectivity of the anti-CCP imprinted QCM nanosensor, competitive adsorption of anti-CCP and IgM was investigated. The results indicate that the anti-CCP imprinted QCM nanosensor has a higher adsorption capability for anti-CCP than for IgM, due to the selective cavities in the polymer structure.
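Binding and dissociation constants of the kind mentioned above are commonly estimated by fitting equilibrium sensor responses at several analyte concentrations to a one-site Langmuir isotherm. The sketch below, assuming Python with NumPy/SciPy and purely illustrative concentration and frequency-shift values, shows one way such a fit could be set up; it is not the authors' calculation.

```python
# Minimal sketch: fit equilibrium QCM frequency shifts to a Langmuir isotherm,
#   Δf = Δf_max * C / (K_d + C),  with K_a = 1 / K_d.
# Concentrations, frequency shifts and units are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])      # anti-CCP concentration (e.g. µg/mL)
dfreq = np.array([4.1, 9.0, 15.8, 24.5, 38.2, 45.9, 51.3])  # measured -Δf at equilibrium (Hz)

def langmuir(c, df_max, kd):
    """One-site Langmuir binding isotherm."""
    return df_max * c / (kd + c)

(df_max, kd), _ = curve_fit(langmuir, conc, dfreq, p0=[max(dfreq), 1.0])
ka = 1.0 / kd
print(f"Δf_max ≈ {df_max:.1f} Hz, K_d ≈ {kd:.2f} µg/mL, K_a ≈ {ka:.2f} mL/µg")
```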

Keywords: anti-CCP, molecular imprinting, nanosensor, rheumatoid arthritis, QCM

Procedia PDF Downloads 345
347 Inversion of PROSPECT+SAIL Model for Estimating Vegetation Parameters from Hyperspectral Measurements with Application to Drought-Induced Impacts Detection

Authors: Bagher Bayat, Wouter Verhoef, Behnaz Arabi, Christiaan Van der Tol

Abstract:

The aim of this study was to follow canopy reflectance patterns in response to soil water deficit and to detect trends of change in the biophysical and biochemical parameters of grass (Poa pratensis species). We used visual interpretation, imaging spectroscopy and radiative transfer model inversion to monitor the gradual manifestation of water stress effects in a laboratory setting. Plots of 21 cm x 14.5 cm surface area with Poa pratensis plants that formed a closed canopy were subjected to water stress for 50 days. Canopy reflectance was measured on a regular weekly schedule. In addition, Leaf Area Index (LAI), Chlorophyll (a+b) content (Cab) and Leaf Water Content (Cw) were measured at regular time intervals. The 1-D bidirectional canopy reflectance model SAIL, coupled with the leaf optical properties model PROSPECT, was inverted using the hyperspectral measurements by means of an iterative optimization method to retrieve the vegetation biophysical and biochemical parameters. The relationships between retrieved LAI, Cab, Cw, and Cs (senescent material) and soil moisture content were established in two separate groups: stressed and non-stressed. To differentiate the water-stressed condition from the non-stressed condition, a threshold was defined based on the laboratory-produced Soil Water Characteristic (SWC) curve. All parameters retrieved by model inversion using canopy spectral data showed good correlation with soil water content in the water-stressed condition. These parameters co-varied with soil moisture content under the stress condition (Cab: R2 = 0.91, Cw: R2 = 0.97, Cs: R2 = 0.88 and LAI: R2 = 0.48) at the canopy level. To validate the results, the relationship between the vegetation parameters measured in the laboratory and soil moisture content was established. The results were fully in agreement with the modeling outputs and confirmed the results produced by radiative transfer model inversion and spectroscopy. Since water stress changes all parts of the spectrum, we conclude that analysis of the reflectance spectrum in the VIS-NIR-MIR region is a promising tool for monitoring water stress impacts on vegetation.
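The iterative-optimization inversion described above can be illustrated schematically: a forward canopy reflectance model is evaluated repeatedly while its input parameters are adjusted to minimize the mismatch with the measured spectrum. In the Python sketch below, a simple toy reflectance function stands in for the PROSPECT+SAIL forward model so that the example is self-contained; the function, parameter ranges and "measured" data are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch: retrieve LAI, Cab and Cw by least-squares inversion of a forward
# reflectance model (a toy stand-in for PROSPECT+SAIL) against a measured spectrum.
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(400, 2400, 201)  # wavelengths (nm)

def toy_canopy_reflectance(params, wl):
    """Stand-in for a PROSPECT+SAIL run: reflectance as a crude function of LAI, Cab, Cw."""
    lai, cab, cw = params
    green_peak = 0.08 * np.exp(-((wl - 550) / 40.0) ** 2) * np.exp(-0.02 * cab)
    nir_plateau = 0.45 * (1 - np.exp(-0.6 * lai)) * (wl > 750)
    water_abs = 1 - 8.0 * cw * np.exp(-((wl - 1450) / 80.0) ** 2)
    return (0.03 + green_peak + nir_plateau) * water_abs

# "Measured" spectrum: toy model plus noise, generated from known true parameters.
true_params = np.array([3.0, 40.0, 0.02])
measured = toy_canopy_reflectance(true_params, wl) + np.random.default_rng(1).normal(0, 0.005, wl.size)

def residuals(p):
    return toy_canopy_reflectance(p, wl) - measured

fit = least_squares(residuals, x0=[1.0, 20.0, 0.01],
                    bounds=([0.1, 5.0, 0.001], [8.0, 90.0, 0.08]))
lai, cab, cw = fit.x
print(f"Retrieved LAI={lai:.2f}, Cab={cab:.1f} µg/cm², Cw={cw:.3f} cm")
```

In an actual retrieval, the toy function would be replaced by a full PROSPECT+SAIL run and the cost function would typically include prior information or spectral weighting.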

Keywords: hyperspectral remote sensing, model inversion, vegetation responses, water stress

Procedia PDF Downloads 194
346 Human Lens Metabolome: A Combined LC-MS and NMR Study

Authors: Vadim V. Yanshole, Lyudmila V. Yanshole, Alexey S. Kiryutin, Timofey D. Verkhovod, Yuri P. Tsentalovich

Abstract:

Cataract, or clouding of the eye lens, is the leading cause of vision impairment in the world. The lens tissue has a very specific structure: it does not have a vascular system, and the lens proteins (the crystallins) do not turn over throughout the lifespan. Protection of the lens proteins is provided by metabolites which diffuse into the lens from the aqueous humor or are synthesized in the lens epithelial layer. Therefore, the study of changes in the metabolite composition of a cataractous lens as compared to a normal lens may elucidate possible mechanisms of cataract formation. Quantitative metabolomic profiles of normal and cataractous human lenses were obtained with the combined use of high-frequency nuclear magnetic resonance (NMR) and ion-pairing high-performance liquid chromatography with high-resolution mass-spectrometric detection (LC-MS). The quantitative content of more than fifty metabolites has been determined in this work for normal aged and cataractous human lenses. The most abundant metabolites in the normal lens are myo-inositol, lactate, creatine, glutathione, glutamate, and glucose. For the majority of metabolites, the levels in the lens cortex and nucleus are similar, with a few exceptions including antioxidants and UV filters: the concentrations of glutathione, ascorbate and NAD in the lens nucleus are lower than in the cortex, while the levels of the secondary UV filters formed from primary UV filters in redox processes are higher. This confirms that the lens core is metabolically inert and that the metabolic activity in the lens nucleus is mostly restricted to protection against the oxidative stress caused by UV irradiation, spontaneous UV filter decomposition, or other factors. It was found that the metabolomic compositions of normal and age-matched cataractous human lenses differ significantly. The content of the most important metabolites (antioxidants, UV filters, and osmolytes) in the cataractous nucleus is at least tenfold lower than in the normal nucleus. One may suppose that the majority of these metabolites are synthesized in the lens epithelial layer and that age-related cataractogenesis might originate from dysfunction of the lens epithelial cells. Comprehensive quantitative metabolic profiles of the human eye lens have been acquired for the first time. The obtained data can be used for the analysis of changes in lens chemical composition occurring with age and with cataract development.

Keywords: cataract, lens, NMR, LC-MS, metabolome

Procedia PDF Downloads 291
345 A Bioinspired Anti-Fouling Coating for Implantable Medical Devices

Authors: Natalie Riley, Anita Quigley, Robert M. I. Kapsa, George W. Greene

Abstract:

As the fields of medicine and bionics advance rapidly, their future success depends on the ability to interface effectively between the artificial and the biological worlds. The biggest obstacle for implantable electronic medical devices is maintaining a ‘clean’, low-noise electrical connection that allows efficient sharing of electrical information between the artificial and biological systems. Implant fouling occurs through the adhesion and accumulation of proteins and various cell types as the immune system acts to protect the body from the foreign object, essentially forming an electrically insulating barrier that often leads to implant failure over time. Lubricin (LUB), a unique glycoprotein with impressive anti-adhesive properties, functions as a major boundary lubricant in articular joints and self-assembles on virtually any substrate to form a highly ordered, ‘telechelic’ polymer brush. LUB does not passivate electroactive surfaces, which makes it ideal, along with its innate biocompatibility, as a coating for implantable bionic electrodes. The aim of the study is to investigate LUB’s anti-fouling properties and its potential as a safe, bioinspired material for coating applications to enhance the performance and longevity of implantable medical devices, as well as to reduce the frequency of implant replacement surgeries. Native, bovine-derived LUB (N-LUB) and recombinant LUB (R-LUB) were applied to gold-coated mylar surfaces. Fibroblast, chondrocyte and neural cell types were cultured and grown on the coatings under both passive and electrically stimulated conditions to test the stability and anti-adhesive properties of the LUB coating in the presence of an electric field. Lactate dehydrogenase (LDH) assays were conducted as a directly proportional measure of the cell population on each surface, along with immunofluorescence microscopy to visualize cells. One-way analysis of variance (ANOVA) with a post-hoc Tukey’s test was used to test for statistical significance. Under both passive and electrically stimulated conditions, LUB significantly reduced cell attachment compared to bare gold. Comparing the two coating types, R-LUB reduced cell attachment significantly more than its native counterpart. Immunofluorescence micrographs visually confirmed LUB’s anti-adhesive properties, with R-LUB consistently showing significantly fewer attached cells for both fibroblasts and chondrocytes. Preliminary results on neural cells have so far indicated that R-LUB has little effect on reducing neural cell attachment; this part of the study is ongoing. Recombinant LUB coatings demonstrated impressive anti-adhesive properties, reducing cell attachment of fibroblasts and chondrocytes. These findings, and the availability of recombinant LUB, call into question the results of previous experiments conducted using native-derived LUB, whose potential may not have been adequately represented or realized due to unknown factors and impurities that warrant further study. R-LUB is stable and maintains its anti-fouling properties under electrical stimulation, making it suitable for electroactive surfaces.
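As an illustration of the statistical comparison described above (one-way ANOVA followed by Tukey's post-hoc test on cell-attachment measurements), the following minimal Python sketch uses synthetic numbers and the SciPy and statsmodels packages; the group sizes and values are assumptions and do not reproduce the study's data.

```python
# Minimal sketch: one-way ANOVA on LDH-derived cell counts across coatings,
# followed by Tukey's HSD post-hoc test (synthetic data, assumed libraries).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
bare_gold = rng.normal(100, 10, 8)   # attached-cell measure (arbitrary units), n = 8 wells
n_lub     = rng.normal(60, 10, 8)    # native lubricin coating
r_lub     = rng.normal(30, 10, 8)    # recombinant lubricin coating

f_stat, p_value = f_oneway(bare_gold, n_lub, r_lub)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.2e}")

counts = np.concatenate([bare_gold, n_lub, r_lub])
groups = ["bare gold"] * 8 + ["N-LUB"] * 8 + ["R-LUB"] * 8
print(pairwise_tukeyhsd(counts, groups, alpha=0.05))  # pairwise group comparisons
```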

Keywords: anti-fouling, bioinspired, cell attachment, lubricin

Procedia PDF Downloads 103
344 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley

Authors: Bijit Kalita, K. V. N. Surendra

Abstract:

The pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt-pulley assembly presents a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism was performed. Finite element analysis (FEA) was conducted for a pulley to investigate the stresses experienced on its inner and outer periphery. In heavy-duty applications such as automotive engines and industrial machines, the belt drive is one of the most frequently used mechanisms to transmit power, and very heavy circular disks are usually used as pulleys. A pulley can be regarded as a drum and may have a groove between two flanges around the circumference. A rope, belt, cable or chain can be the driving element of a pulley system that runs over the pulley inside the groove. A pulley experiences normal and shear tractions on its contact region in the process of motion transmission. The region may be the belt-pulley contact surface or the pulley-shaft contact surface. In 1882, Hertz solved the elastic contact problem for point contact and line contact of ideally smooth objects, and this theory is still generally utilized for computing the actual contact zone. Detailed stress analysis in the contact region of such pulleys is necessary to prevent early failure. In this paper, the results of finite element analyses carried out on the compressed disk of a belt-pulley arrangement using fracture mechanics concepts are shown. Based on the literature on contact stress problems in a wide field of applications, the stress distribution generated on the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated in this study using FEA concepts. Finally, the results obtained from ANSYS (APDL) were compared with Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part, as a component of an engine assembly, using the well-known Paris equation. Digital Image Correlation (DIC) analyses were performed using open-source software. From the displacements computed using the images acquired at the minimum and maximum force, the displacement field amplitude is computed. From these fields, the crack path is defined, and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used to estimate fatigue crack growth. Further study will extend this work to various rotating machinery applications, such as flywheel disks, jet engines, compressor disks and roller disk cutters, where Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, the study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed-mode fracture.
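The fatigue-life estimation with the Paris equation mentioned above can be sketched as a simple numerical integration of da/dN = C(ΔK)^m with ΔK = YΔσ√(πa). The Python example below uses assumed material constants, geometry factor, stress range and crack lengths purely for illustration; it is not the pulley analysis itself.

```python
# Minimal sketch: fatigue life by integrating the Paris equation
#   da/dN = C * (ΔK)^m,  ΔK = Y * Δσ * sqrt(π a)
# All numerical values below are illustrative assumptions.
import math

C, m = 3.0e-12, 3.0            # Paris constants (m/cycle with ΔK in MPa·√m)
Y = 1.12                       # geometry factor for an edge crack (assumed constant)
delta_sigma = 120.0            # cyclic stress range (MPa)
a, a_final = 1.0e-3, 10.0e-3   # initial and final crack lengths (m)
da = 1.0e-5                    # crack-length increment for the integration (m)

cycles = 0.0
while a < a_final:
    delta_K = Y * delta_sigma * math.sqrt(math.pi * a)   # stress intensity factor range
    dadN = C * delta_K ** m                              # crack growth per cycle
    cycles += da / dadN                                  # cycles needed to grow by da
    a += da

print(f"Estimated fatigue life: {cycles:.3e} cycles")
```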

Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor

Procedia PDF Downloads 104
343 Librarian Liaisons: Facilitating Multi-Disciplinary Research for Academic Advancement

Authors: Tracey Woods

Abstract:

In the ever-evolving landscape of academia, the traditional role of the librarian has undergone a remarkable transformation. Once considered custodians of books and gatekeepers of information, librarians now have the potential to take on the vital role of facilitators of cross- and inter-disciplinary projects. This shift is driven by the growing recognition of the value of interdisciplinary collaboration in addressing complex research questions in pursuit of novel solutions to real-world problems. This paper explores the potential of the academic librarian’s role in facilitating innovative, multi-disciplinary projects, both recognising and validating the vital role that the librarian plays in a somewhat underplayed profession. Academic libraries support teaching, the strengthening of knowledge discourse, and, potentially, the development of innovative practices. As the role of the library gradually morphs from a quiet repository of books into a community-based information hub, an opportunity arises. The academic librarian’s role is to build knowledge across a wide span of topics, from the advancement of AI to subject-specific information, and, whilst librarians are generally not offered the research opportunities and funding that the traditional academic disciplines enjoy, they are often invited to help build research in support of academics. This suggests that one of the primary skills of any 21st-century librarian must be the ability to collaborate on and facilitate multi-disciplinary projects. In universities seeking to develop research diversity and academic performance, there is an increasing awareness of the need for collaboration between faculties to enable novel directions and advancements. This idea has been documented and discussed by several researchers; however, there is not a great deal of literature available from recent studies. Having a team based in the library that is adept at creating effective collaborative partnerships is valuable for any academic institution. This paper outlines the development of such a project, initiated within and around an identified library-specific need: the replication of fragile special collections for object-based learning. The research was developed as a multi-disciplinary project involving the faculties of engineering (digital twins lab), architecture, design, and education. Centred on methods for developing a fragile archive into a series of tactile objects, the project furthers knowledge and understanding of the role of the library as a facilitator of projects, chairing and supporting them, alongside contributing to the research process and generating ideas through the bank of knowledge found amongst library staff and their liaising capabilities. This paper presents the method of project development, from the initiation of ideas to the development of prototypes and the dissemination of the objects to teaching departments for analysis. The exact replication of artefacts is also balanced against the adaptations and evolutionary speculations initiated by the design team when the process was adopted as a teaching studio method. The dynamic response required from the library to generate and facilitate these multi-disciplinary projects highlights the information expertise and liaison skills that the librarian possesses. As academia embraces this evolution, the potential for groundbreaking discoveries and innovative solutions across disciplines becomes increasingly attainable.

Keywords: liaison librarian, multi-disciplinary collaborations, library innovations, librarian stakeholders

Procedia PDF Downloads 41
342 Isolation and Identification of Salmonella spp and Salmonella enteritidis, from Distributed Chicken Samples in the Tehran Province using Culture and PCR Techniques

Authors: Seyedeh Banafsheh Bagheri Marzouni, Sona Rostampour Yasouri

Abstract:

Salmonella is one of the most important pathogens common to humans and animals worldwide. Globally, the prevalence of the disease in humans is due to the consumption of food contaminated with animal-derived Salmonella. These foods include eggs, red meat, chicken, and milk. Contamination of chicken and its products with Salmonella may occur at any stage of the chicken processing chain. Salmonella infection is usually not fatal; however, it is dangerous in some individuals, such as infants, children, the elderly, pregnant women, and individuals with weakened immune systems. If Salmonella enters the bloodstream, tissues throughout the body may become infected. Therefore, determining the potential risk of Salmonella at various stages is essential from the perspective of consumers and public health. The aim of this study is to isolate and identify Salmonella from chicken samples distributed in the Tehran market using the gold-standard culture method and a PCR technique based on the specific genes invA and ent. During the years 2022-2023, sampling was performed using swabs from the liver and intestinal contents of chickens distributed in the Tehran province, with a total of 120 samples taken under aseptic conditions. The samples were first pre-enriched overnight in buffered peptone water (BPW). They were then incubated in selective enrichment media, TT broth and RVS medium, at 37°C and 42°C, respectively, for 18 to 24 hours. Organisms that grew in the liquid medium and produced turbidity were transferred to selective media (XLD and BGA) and incubated overnight at 37°C for isolation. Suspect Salmonella colonies were selected for DNA extraction, and PCR was performed using specific primers targeting the invA and ent genes of Salmonella. The results indicated that 94 samples were positive for Salmonella by PCR; of these, 71 samples were positive based on the invA gene and 23 samples based on the ent gene. Although the culture technique is the gold standard, PCR is a faster and more accurate method. Rapid detection by PCR can enable the identification of Salmonella contamination in food items and the implementation of the measures necessary for disease control and prevention.

Keywords: culture, PCR, salmonella spp, salmonella enteritidis

Procedia PDF Downloads 42
341 Glyco-Biosensing as a Novel Tool for Prostate Cancer Early-Stage Diagnosis

Authors: Pavel Damborsky, Martina Zamorova, Jaroslav Katrlik

Abstract:

Prostate cancer is annually the most common newly diagnosed cancer among men. An extensive body of evidence suggests that the traditional serum prostate-specific antigen (PSA) assay still suffers from a lack of sufficient specificity and sensitivity, resulting in vast over-diagnosis and overtreatment. Thus, early-stage detection of prostate cancer (PCa) undisputedly plays a critical role in successful treatment and improved quality of life. Over the last decade, particular altered glycans have been described that are associated with a range of chronic diseases, including cancer and inflammation. These glycan differences enable a distinction to be made between physiological and pathological states and suggest a valuable biosensing tool for diagnosis and follow-up purposes. Aberrant glycosylation is one of the major characteristics of disease progression. Consequently, the aim of this study was to develop a more reliable tool for early-stage PCa diagnosis employing lectins as glyco-recognition elements. Biosensor and biochip technology employing lectin-based glyco-profiling is one of the most promising strategies aimed at providing fast and efficient analysis of glycoproteins. Proof-of-concept experiments were performed based on a sandwich assay employing an anti-PSA antibody and an aptamer as capture molecules, followed by lectin glycoprofiling. We present a lectin-based biosensing assay for glycoprofiling of the serum biomarker PSA using different biosensor and biochip platforms, such as label-free surface plasmon resonance (SPR) and a microarray with a fluorescent label. The results suggest significant differences in the interaction of particular lectins with PSA. Antibody-based assays are frequently associated with sensitivity, reproducibility, and cross-reactivity issues. Aptamers provide remarkable advantages over antibodies due to their nucleic acid origin, stability, and lack of glycosylation. All these data are a further step toward the construction of highly selective, sensitive and reliable sensors for early-stage diagnosis. The experimental set-up also holds promise for the development of comparable assays with other glycosylated disease biomarkers.

Keywords: biomarker, glycosylation, lectin, prostate cancer

Procedia PDF Downloads 383
340 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. In addition, investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, whose underlying parameters are estimated in the EM step, improved the convergence rate and system performance. The system also uses a relative index as a confidence measure in cases where the GMM and VQ identification results contradict each other. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
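Although the authors report MATLAB simulations, the general MFCC + GMM modelling pipeline described above can be sketched in a few lines of Python. In the sketch below, the librosa and scikit-learn packages, the file paths, the model sizes and the k-means initialization (standing in for LBG) are all assumptions of this illustration, not the proposed system.

```python
# Minimal sketch: one GMM is trained per enrolled speaker on MFCC frames, and a test
# utterance is assigned to the speaker whose model gives the highest log-likelihood.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(wav_path, n_mfcc=13):
    """Load audio and return an (n_frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T

# Enrollment: train one GMM per speaker (hypothetical file names).
enrollment = {"speaker_A": ["A_train.wav"], "speaker_B": ["B_train.wav"]}
models = {}
for speaker, files in enrollment.items():
    feats = np.vstack([mfcc_frames(f) for f in files])
    gmm = GaussianMixture(n_components=16, covariance_type="diag",
                          init_params="kmeans", max_iter=200)
    models[speaker] = gmm.fit(feats)

# Identification: pick the speaker model with the highest average log-likelihood.
def identify(wav_path):
    feats = mfcc_frames(wav_path)
    scores = {spk: gmm.score(feats) for spk, gmm in models.items()}
    return max(scores, key=scores.get), scores

best, scores = identify("unknown_utterance.wav")
print(best, scores)
```

A VQ codebook score and the VAD pre-processing described in the abstract would sit in front of this pipeline; they are omitted here to keep the sketch short.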

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 284