Search results for: leukocyte detection
201 The Effects of Qigong Exercise Intervention on the Cognitive Function in Aging Adults
Authors: D. Y. Fong, C. Y. Kuo, Y. T. Chiang, W. C. Lin
Abstract:
Objectives: Qigong is an ancient Chinese practice pursued for a healthier body and a more peaceful mindset. It emphasizes the restoration of vital energy (Qi) in body, mind, and spirit. The practice combines gentle movements with mild breathing, helping practitioners reach a state of tranquility. On account of these features, we first used a cross-sectional methodology to compare practitioners at varied levels of Qigong experience on cognitive function using event-related potentials (ERP) and electroencephalography (EEG). Second, we used a longitudinal methodology to explore pretest-posttest effects of Qigong training on ERP and EEG. The current study adopted the Attentional Network Test (ANT) to examine participants' cognitive function; aging research has demonstrated a declining trend in cognition among older adults, and exercise may ameliorate this deterioration. Qigong integrates physical posture (muscle strength), breathing technique (aerobic ability), and focused intention (attention), so we hypothesized that it might improve cognitive function in aging adults. Method: Sixty participants were involved in this study: 20 young adults (21.65±2.41 y) with normal physical activity (YA), 20 Qigong experts (60.69±12.42 y) with over 7 years of practice experience (QE), and 20 healthy adults (52.90±12.37 y) with no Qigong experience serving as the experimental group (EG). The EG participants took Qigong classes twice a week, 2 hours per session, for 24 weeks in order to examine the effect of the intervention on cognitive function. ANT tasks (alerting, orienting, and executive control networks) were adopted to evaluate cognitive function via the ERP P300 component and P300 amplitude topography.
Results: Behavioral data: 1. The reaction time (RT) of YA was faster than that of the other two groups, and EG was faster than QE in the cue and flanker conditions of the ANT task. 2. In EG, posttest RT was faster than pretest RT in the cue and flanker conditions. 3. There was no difference among the three groups on the orienting, alerting, and executive control networks. ERP data: 1. P300 amplitude in QE was larger than in EG at the Fz electrode in the orienting, alerting, and executive control networks. 2. P300 amplitude in EG was larger at pretest than at posttest on the orienting network. 3. P300 latency revealed no difference among the three groups in the three networks. Conclusion: Taken together, these findings provide neuro-electrical evidence that older adults who practice Qigong may develop a broader compensatory mechanism that also benefits behavioral performance.
Keywords: Qigong, cognitive function, aging, event-related potential (ERP)
Procedia PDF Downloads 393
200 Light-Controlled Gene Expression in Yeast
Authors: Peter M. Kusen, Georg Wandrey, Christopher Probst, Dietrich Kohlheyer, Jochen Büchs, Jörg Pietruszka
Abstract:
Light as a stimulus provides the capability to develop regulation techniques for customizable gene expression. A great advantage is the extremely flexible and accurate dosing, which can be performed in a non-invasive and sterile manner, even in high-throughput technologies. Therefore, light regulation was realized in a multiwell microbioreactor system, providing the opportunity to control gene expression with outstanding complexity. A light-regulated gene expression system in Saccharomyces cerevisiae was designed applying the strategy of caged compounds. These are regulator molecules carrying a photo-labile protecting group that renders them biologically inactive; they can be reactivated by irradiation under certain light conditions. The "caging" of a repressor molecule that is consumed after deprotection was essential to create a flexible expression system. Thereby, gene expression could be temporarily repressed by irradiation and the subsequent release of the active repressor molecule. Afterwards, the repressor molecule is consumed by the yeast cells, leading to reactivation of gene expression. A yeast strain harboring a construct with the corresponding repressible promoter, in combination with a fluorescent marker protein, was applied in a Photo-BioLector platform that allows individual irradiation as well as online fluorescence and growth detection. This device was used to precisely control the repression duration by adjusting the amount of released repressor via different irradiation times. With the presented screening platform, the regulation of complex expression procedures was achieved by combining several repression/derepression intervals. In particular, a stepwise increase of temporally constant expression levels was demonstrated, which could be used to study concentration-dependent effects on cell functions.
Linear expression rates with variable slopes could also be shown, representing a possible solution for challenging protein productions in which excessive production rates lead to misfolding or intoxication. Finally, the very flexible regulation enabled accurate control over expression induction, although a repressible promoter was used. In summary, continuous online regulation of gene expression has the potential to tune expression levels to optimize metabolic flux, artificial enzyme cascades, growth rates for co-cultivations, and many other applications that depend on complex expression regulation. The developed light-regulated expression platform represents an innovative screening approach for finding optimization potential in production processes.
Keywords: caged compounds, gene expression regulation, optogenetics, photo-labile protecting group
Procedia PDF Downloads 326
199 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes
Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil
Abstract:
The Raman Tweezers technique has become prevalent in single-cell studies. It combines Raman spectroscopy, which gives information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam to trap single cells. Raman Tweezers thus enables researchers to analyze single cells and explore different applications, including studying blood cells, monitoring blood-related disorders, and probing silver nanoparticle-induced stress. Interest in the toxic effects of nanoparticles has grown alongside their expanding applications, and the interaction of nanoparticles with cells may vary with particle size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman Tweezers technique, aiming to investigate the size dependence of the nanoparticle effect on RBCs. A 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) was used for both trapping and Raman spectroscopic studies. A 100x oil-immersion objective with high numerical aperture (NA 1.3) focused the laser beam into the sample cell. The back-scattered light was collected by the same objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers in vacutainer tubes and centrifuged to separate the blood components. 1.5 ml of silver nanoparticle suspension was washed twice with distilled water, leaving 0.1 ml of concentrated nanoparticles at the bottom of the vial. Since the stock concentration was 0.02 mg/ml, this 0.1 ml concentrate contained 0.03 mg of nanoparticles. 25 µl of RBCs were diluted in 2 ml of PBS solution, treated with 50 µl (0.015 mg) of nanoparticles, and incubated in a CO2 incubator.
Raman spectroscopic measurements were made after 24 hours and 48 hours of incubation. All spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s accumulation time, and 2 accumulations. Major changes were observed in the peaks at 565 cm⁻¹, 1211 cm⁻¹, 1224 cm⁻¹, 1371 cm⁻¹, and 1638 cm⁻¹. A decrease in intensity at 565 cm⁻¹, an increase at 1211 cm⁻¹ with a reduction at 1224 cm⁻¹, an increase at 1371 cm⁻¹, and the disappearance of the peak at 1638 cm⁻¹ indicate deoxygenation of hemoglobin. Larger nanoparticles showed the greatest spectral changes, while smaller changes were observed in the spectra of erythrocytes treated with 10 nm nanoparticles.
Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles
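The nanoparticle dose arithmetic described above can be verified in a few lines; a minimal sketch (all values from the abstract, variable names ours):

```python
# Sketch of the silver-nanoparticle dose arithmetic reported in the abstract.
stock_conc_mg_per_ml = 0.02   # stock suspension concentration
stock_vol_ml = 1.5            # volume washed and concentrated
concentrate_vol_ml = 0.1      # residual volume after washing

total_mass_mg = stock_conc_mg_per_ml * stock_vol_ml            # 0.03 mg in the vial
treated_vol_ml = 0.05                                          # 50 ul added to the RBCs
dose_mg = total_mass_mg * treated_vol_ml / concentrate_vol_ml  # 0.015 mg delivered

print(total_mass_mg, dose_mg)
```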
Procedia PDF Downloads 291
198 Geospatial Analysis of Spatio-Temporal Dynamic and Environmental Impact of Informal Settlement: A Case of Adama City, Ethiopia
Authors: Zenebu Adere Tola
Abstract:
Informal settlements behave dynamically over space and time, and the number of people living in such housing areas is growing worldwide. In the cities of developing countries, especially in sub-Saharan Africa, poverty, unemployment, poor living conditions, lack of transparency and accountability, and weak governance are the major factors that lead people to hold land informally and build houses for residential or other purposes. In most Ethiopian cities, informal settlement is concentrated in peripheral areas because people can easily acquire land for housing from local farmers, brokers, and speculators without permission from the concerned authorities. In Adama, informal settlement has created risky living conditions and led to environmental problems in natural areas, largely because of insufficient knowledge about informal settlement development. At the same time, there is a strong need to transform informal settlements into formal ones and to gain more control over their actual spatial development. Tackling the issue requires first understanding the scale of the problem, and that in turn requires up-to-date technology; high-resolution imagery is therefore well suited to detecting informal settlement in Adama city. The main objective of this study is to assess the spatio-temporal dynamics and environmental impacts of informal settlement using object-based image analysis (OBIA). Specifically, the objectives are to identify informal settlement in the study area, determine changes in the extent and pattern of informal settlement, and assess the environmental and social impacts of informal settlement in the study area.
Consequently, reliable procedures for detecting the spatial behavior of informal settlements are required in order to react at an early stage to changing housing situations. Thus, obtaining up-to-date spatial information about informal settlement areas is vital for any enhancement actions in urban or regional planning. This study uses aerial photography to analyze the growth and change of informal settlements in Adama city, with eCognition software used to classify built-up and non-built-up areas.
Keywords: informal settlement, change detection, environmental impact, object-based analysis
Procedia PDF Downloads 83
197 Effect of Human Use, Season and Habitat on Ungulate Densities in Kanha Tiger Reserve
Authors: Neha Awasthi, Ujjwal Kumar
Abstract:
The density of large carnivores is primarily dictated by the density of their prey; therefore, optimal management of ungulate populations permits protected areas to harbour viable large carnivore populations. Ungulate density is likely to respond to protection regimes and vegetation types, which has generated the need among conservation practitioners for stratum-specific seasonal species densities to guide habitat management. Kanha Tiger Reserve (KTR, 2074 km²) comprises two distinct management strata: the core (940 km²), devoid of human settlements, and the buffer (1134 km²), a multiple-use area. Four habitat strata are present in the reserve: grassland, sal forest, bamboo-mixed forest, and miscellaneous forest. A stratified sampling approach was used to assess (a) the impact of human use and (b) the effects of habitat and season on ungulate densities. From 2013 to 2016, ungulates were surveyed in the winter and summer of each year, with an effort of 1200 km walked along 200 spatial transects distributed throughout the reserve. We fitted a single detection function for each species within each habitat stratum and season to estimate species-specific seasonal densities using program DISTANCE. Our key results show that the core area held 4.8 times higher wild ungulate biomass than the buffer zone, highlighting the importance of undisturbed areas. Chital was the most abundant species, with a density of 30.1 (SE 4.34)/km², contributing 33% of the biomass and showing a habitat preference for grassland. Unlike the other ungulates, gaur, a mega-herbivore, showed a major seasonal shift in density from bamboo-mixed and sal forest in summer to miscellaneous forest in winter. Grassland supported the greatest diversity and ungulate biomass, followed by bamboo-mixed habitat. Our study stresses the importance of inviolate core areas for achieving high wild ungulate densities and for maintaining populations of endangered and rare species.
Grasslands account for 9% of the core area of KTR and are maintained in an arrested stage of succession; enhancing this habitat would therefore maintain ungulate diversity and density and cater to the needs of the only surviving population of the endangered barasingha and of the grassland specialist, the blackbuck. We show the relevance of different habitat types for differential seasonal use by ungulates and attempt to interpret this in the context of the nutrition and cover needs of wild ungulates. Management for an optimal habitat mosaic that maintains ungulate diversity and maximizes ungulate biomass is recommended.
Keywords: distance sampling, habitat management, ungulate biomass, diversity
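The density estimation step referred to above can be sketched in outline. This is an illustrative half-normal line-transect calculation in the style of program DISTANCE, not the study's actual fit; the detection count, scale parameter, and truncation distance below are made up.

```python
import math

# Illustrative line-transect density estimate with a half-normal detection
# function g(x) = exp(-x^2 / (2 sigma^2)). All numbers are hypothetical.
def half_normal_esw(sigma, w, steps=10_000):
    """Effective strip half-width: integral of g(x) over [0, w] (midpoint rule)."""
    dx = w / steps
    return sum(math.exp(-((i + 0.5) * dx) ** 2 / (2 * sigma ** 2)) * dx
               for i in range(steps))

def density(n_detections, total_line_km, sigma_km, trunc_km):
    """Estimated animals per km^2: D = n / (2 * ESW * L)."""
    esw = half_normal_esw(sigma_km, trunc_km)
    return n_detections / (2 * esw * total_line_km)

# e.g. 300 detections over 1200 km of transect, sigma = 50 m, truncation 150 m
print(round(density(300, 1200, 0.05, 0.15), 2))
```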
Procedia PDF Downloads 303
196 Customized Temperature Sensors for Sustainable Home Appliances
Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy
Abstract:
Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data such as the frequency of use of the machine and user preferences, and they supply critical data for diagnostic fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications: their operating range is -70°C to 850°C, while home appliance applications require only 23°C to 500°C. To ensure operation over that wide range, a platinum coating of approximately 1 micron thickness is usually applied to the wafer. However, the use of platinum and the high coating thickness extend the sensor production process and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. A single-side-polished sapphire wafer was used to develop the thin-film resistive temperature sensors. To enhance adhesion and insulation, 100 nm of silicon dioxide was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed with a direct laser writer.
The lift-off process was performed after e-beam evaporation of a 10 nm titanium and a 280 nm platinum layer. Standard four-point-probe sheet resistance measurements were made at room temperature. Annealing at 600°C was then performed in a rapid thermal processing machine, and resistivity was measured with a probe station before and after annealing. Temperature dependence between 25 and 300°C was also tested. As a result of this study, a temperature sensor was developed that has a lower coating thickness than commercial sensors yet produces reliable data over the white goods application temperature range. A relatively simple but optimized production method was also developed to produce this sensor.
Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency
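The sheet-resistance measurement mentioned above follows the standard collinear four-point-probe relation for a thin film. A small sketch, with illustrative readings rather than the study's data:

```python
import math

# Four-point-probe conversion for a thin film much wider than the probe spacing.
def sheet_resistance(voltage_v, current_a):
    """R_s = (pi / ln 2) * V / I, in ohm per square."""
    return (math.pi / math.log(2)) * voltage_v / current_a

def resistivity(r_sheet_ohm_sq, thickness_m):
    """rho = R_s * t, in ohm * m."""
    return r_sheet_ohm_sq * thickness_m

rs = sheet_resistance(1.0e-3, 1.0e-3)  # 1 mV at 1 mA -> ~4.53 ohm/sq
rho = resistivity(rs, 280e-9)          # 280 nm Pt layer thickness from the text
print(round(rs, 3), rho)
```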
Procedia PDF Downloads 73
195 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research
Authors: Christoph Opperer
Abstract:
Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, and texture is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience of physical space; light and color, for example, play a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as a visual representation of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view.
This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.
Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images
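The weighting step described above can be sketched minimally: once a saliency value is attached to each vertex, the uniform vertex count used in conventional visibility analysis becomes a weighted sum. The saliency values and visibility sets below are toy data, not output of the study.

```python
# Minimal sketch of saliency-weighted visibility: vertices visible from a
# viewpoint are weighted by saliency instead of being counted uniformly.
def weighted_visibility(visible, saliency):
    """Sum of saliency weights over visible vertices; uniform weights reduce to a count."""
    return sum(saliency[v] for v in visible)

saliency = {"a": 0.9, "b": 0.2, "c": 0.6, "d": 0.1}  # hypothetical per-vertex saliency
visible_from_p = {"a", "c", "d"}                      # hypothetical isovist from point p

print(weighted_visibility(visible_from_p, saliency))                    # weighted measure
print(weighted_visibility(visible_from_p, dict.fromkeys(saliency, 1)))  # plain count: 3
```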
Procedia PDF Downloads 74
194 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images
Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod
Abstract:
The major challenges in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images with predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, global gamma analysis is not sensitive to changes in some critical organs, because the entire treatment field is compared at once. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN intensity-modulated radiation therapy (IMRT) using a novel comparison method, organ-of-interest gamma analysis, which is more sensitive to changes in specific organs. Five replanned HN IMRT patients, whose tumour shrinkage and weight loss critically affected parotid size, were randomly selected and their transit dosimetry evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between transit images generated from the original CT and the replan CT was quantified using gamma analysis with 3%, 3 mm criteria, with the gamma pass-rate calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively; the pass-rates for the other projected organs were greater than 80%.
Additionally, the results of organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and with the radiation oncologists' rationale for replanning. This comparison showed that registration of 3D-CBCT to the original CT alone does not capture the dosimetric impact of anatomical changes, whereas transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
Keywords: re-plan, anatomical change, transit electronic portal imaging device (EPID), head and neck
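The organ-of-interest step can be sketched as follows, assuming a per-pixel gamma map (3%/3 mm criteria) has already been computed; the arrays below are toy values, and computing the gamma map itself is omitted.

```python
# Sketch: evaluate the gamma pass-rate only inside a projected organ mask,
# rather than over the whole treatment field.
def organ_pass_rate(gamma_map, mask):
    """Fraction of in-mask pixels with gamma <= 1 (i.e. passing the criteria)."""
    in_organ = [g for g, m in zip(gamma_map, mask) if m]
    return sum(g <= 1.0 for g in in_organ) / len(in_organ)

gamma = [0.4, 1.3, 0.9, 2.1, 0.7, 1.0]              # hypothetical gamma values
parotid_mask = [True, True, False, True, False, True]  # hypothetical projected parotid

print(organ_pass_rate(gamma, parotid_mask))  # 0.5
```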
Procedia PDF Downloads 216
193 Reliability of Clinical Coding in Accurately Estimating the Actual Prevalence of Adverse Drug Event Admissions
Authors: Nisa Mohan
Abstract:
Adverse drug event (ADE) related hospital admissions are common among older people, and the first step in prevention is accurately estimating their prevalence. Clinical coding is an efficient method for doing so. The objectives of this study are to estimate the rate of under-coding of ADE admissions in older people in New Zealand and to explore how clinical coders decide whether or not to code an admission as an ADE; no previous New Zealand research has explored these areas. The study uses a mixed-methods approach, focusing on two common and serious ADEs in older people, bleeding and hypoglycaemia. In study 1, eight hundred medical records of people aged 65 years and above admitted to hospital due to bleeding or hypoglycaemia during 2015-2016 were selected for a quantitative retrospective medical records review, in order to estimate the proportion of ADE-related bleeding and hypoglycaemia admissions not coded as ADEs. The files were reviewed and each admission recorded as ADE-caused or not; the hospital discharge data were then checked to determine whether all ADE admissions identified in the records review had been coded as ADEs, yielding an estimate of the proportion under-coded. In study 2, thirteen clinical coders were selected for qualitative semi-structured interviews using a general inductive approach. Participants were selected purposively based on their experience in clinical coding, and interview questions were designed to investigate the reasons for the under-coding of ADE admissions. The records review showed that 35% (CI 28%-44%) of the ADE-related bleeding admissions and 22% of the ADE-related hypoglycaemia admissions were not coded as ADEs. Although the quality of clinical coding is high across New Zealand, a substantial proportion of ADE admissions were under-coded.
This suggests that clinical coding may under-estimate the actual prevalence of ADE-related hospital admissions in New Zealand. The interviews with the clinical coders indicated that lack of time to search for information confirming an ADE admission, inadequate communication with clinicians, and coders' belief that an ADE is a minor matter may be the reasons for under-coding. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and shortcuts of clinical coders. The results highlight that further work is needed on interventions to improve the clinical coding of ADE admissions, such as educating coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, making pharmacist services available to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary for external causes of disease.
Keywords: adverse drug events, bleeding, clinical coders, clinical coding, hypoglycaemia
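An interval of the shape reported above (35%, CI 28%-44%) can be sketched with a Wilson score interval for a proportion. The abstract does not state which interval method was used, and the counts below are illustrative, chosen only so the point estimate is 35%.

```python
import math

# Wilson 95% score interval for a binomial proportion (illustrative counts).
def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# e.g. 63 of 180 reviewed ADE admissions not coded (~35%) -- hypothetical counts
lo, hi = wilson_ci(63, 180)
print(round(lo, 2), round(hi, 2))
```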
Procedia PDF Downloads 130
192 Cardiac Arrest after Cardiac Surgery
Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov
Abstract:
Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. We present the experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external chest compression, which may damage the sternotomy. Careful use of adrenaline is emphasized because of the risk of rebound hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Of 32 patients, 16 (50%) had cardiac arrest in the form of asystole, with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The patients' ages ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales and fell into the moderate-risk group (3-5 points). CPR was conducted according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of the patients' condition after CPR, and the Glasgow Coma Scale was used to evaluate consciousness after the restoration of cardiac activity and withdrawal of sedation. Results. In all patients, chest compressions of the necessary depth (4-5 cm) at a rate of 100-120 compressions per minute were initiated immediately upon detection of cardiac arrest.
Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed after 3-5 minutes, and adrenaline was administered in doses of 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). Where necessary, infusion of inotropes and vasopressors was used, and for the prevention of brain edema and the restoration of adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper allocation of roles within the team were essential for effective CPR. Together, these measures improved CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions requires interdisciplinary collaboration; applying an optimized CPR standard leads to reduced mortality and favorable neurological outcomes.
Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients
Procedia PDF Downloads 53
191 Analytical Tools for Multi-Residue Analysis of Some Oxygenated Metabolites of PAHs (Hydroxylated, Quinones) in Sediments
Authors: I. Berger, N. Machour, F. Portet-Koltalo
Abstract:
Polycyclic aromatic hydrocarbons (PAHs) are toxic and carcinogenic pollutants produced mainly by incomplete combustion processes in industrialized and urbanized areas. After being emitted into the atmosphere, these persistent contaminants are deposited onto soils or sediments. Although persistent, some can be partially degraded (photodegradation, biodegradation, chemical oxidation), leading to oxygenated metabolites (oxy-PAHs) that can be more toxic than their parent PAH. Oxy-PAHs are measured in sediments far less often than PAHs, and this study compares different analytical tools for extracting and quantifying a mixture of four hydroxylated PAHs (OH-PAHs) and four carbonyl PAHs (quinones) in sediments. Methodologies: Two analytical systems, HPLC with on-line UV and fluorescence detectors (HPLC-UV-FLD) and GC coupled to a mass spectrometer (GC-MS), were compared for separating and quantifying the oxy-PAHs, and microwave-assisted extraction (MAE) was optimized to extract them from sediments. Results: First, the OH-PAHs and quinones were analyzed by HPLC with on-line UV and fluorimetric detection: the OH-PAHs were detected with the sensitive FLD, while the non-fluorescent quinones were detected by UV. The limits of detection (LODs) obtained were in the range (2-3)×10⁻⁴ mg/L for OH-PAHs and (2-3)×10⁻³ mg/L for quinones. Second, although GC-MS is not well suited to analyzing the thermodegradable OH-PAHs and quinones without a derivatization step, it was used because of the advantages of the detector for identification and of GC for efficiency. Without derivatization, only two of the four quinones were detected in the range 1-10 mg/L (LODs = 0.3-1.2 mg/L), and the LODs for the four OH-PAHs were also unsatisfactory (0.18-0.6 mg/L). Two derivatization processes were therefore optimized against the literature: silylation for the OH-PAHs and acetylation for the quinones.
Silylation using BSTFA/TMCS 99:1 was enhanced using a mixture of catalyst solvents (pyridine/ethyl acetate) and by finding the appropriate reaction duration (5-60 minutes). Acetylation was optimized at several steps of the process, including the initial volume of compounds to derivatize, the added amount of Zn (0.1-0.25 g), the nature of the derivatization reagent (acetic anhydride, heptafluorobutyric acid…), and the liquid/liquid extraction at the end of the process. After derivatization, LODs decreased by a factor of 3 for OH-PAHs and by a factor of 4 for quinones, with all four quinones now detected. Thereafter, quinones and OH-PAHs were extracted from spiked sediments using microwave-assisted extraction (MAE) followed by GC-MS analysis. Several solvent mixtures of different volumes (10-25 mL) and different extraction temperatures (80-120°C) were tested to obtain the best recovery yields. Satisfactory recoveries were obtained for quinones (70-96%) and for OH-PAHs (70-104%); temperature was a critical factor that had to be controlled to avoid oxy-PAH degradation during the MAE extraction process. Conclusion: Although MAE-GC-MS was satisfactory for analyzing these oxy-PAHs, MAE optimization must continue toward an extraction solvent mixture that allows direct injection into the HPLC-UV-FLD system, which is more sensitive than GC-MS and does not require a long prior derivatization step.
Keywords: derivatizations for GC-MS, microwave assisted extraction, on-line HPLC-UV-FLD, oxygenated PAHs, polluted sediments
Procedia PDF Downloads 287
190 Genetic Variations of Two Casein Genes among Maghrabi Camels Reared in Egypt
Authors: Othman E. Othman, Amira M. Nowier, Medhat El-Denary
Abstract:
Camels play an important socio-economic role within the pastoral and agricultural systems in the dry and semi-dry zones of Asia and Africa. Camels are economically important animals in Egypt, where they are dual-purpose animals (meat and milk). The analysis of the chemical composition of camel milk showed that the total protein content ranges from 2.4% to 5.3% and is divided into casein and whey proteins. The casein fraction constitutes 52% to 89% of total camel milk protein and is divided into 4 fractions, namely αs1-, αs2-, β- and κ-caseins, which are encoded by four tightly linked genes. In spite of the important role of casein genes and the effects of their genetic polymorphisms on quantitative traits and technological properties of milk, studies on the detection of genetic polymorphism of camel milk genes are still limited. Due to this fact, this work focused - using PCR-RFLP and sequencing analysis - on the identification of genetic polymorphisms and SNPs of two casein genes in the Maghrabi camel breed, which is a dual-purpose camel breed in Egypt. The amplified fragments at 488-bp of the camel κ-CN gene were digested with AluI endonuclease. The results showed three different genotypes in the tested animals: CC with three digested fragments at 203-, 127- and 120-bp; TT with three digested fragments at 203-, 158- and 127-bp; and CT with four digested fragments at 203-, 158-, 127- and 120-bp. The frequencies of the three detected genotypes were 11.0% for CC, 48.0% for TT and 41.0% for CT. The sequencing analysis of the two different alleles revealed a single nucleotide polymorphism (C→T) at position 121 in the amplified fragments, which destroys a restriction site (AG/CT) in allele T and results in the two different alleles C and T in the tested animals. The nucleotide sequences of κ-CN alleles C and T were submitted to GenBank with the accession numbers KU055605 and KU055606, respectively.
The primers used in this study amplified 942-bp fragments spanning from exon 4 to exon 6 of the camel αS1-casein gene. The amplified fragments were digested with two different restriction enzymes, SmlI and AluI. The SmlI digestion did not show any restriction site, whereas the digestion with AluI endonuclease revealed the presence of two restriction sites, AG^CT, at positions 68^69 and 631^632, yielding three digested fragments with sizes 68-, 563- and 293-bp. The nucleotide sequence of this fragment from the camel αS1-casein gene was submitted to GenBank with the accession number KU145820. In conclusion, the genetic characterization of quantitative trait genes associated with production traits such as milk yield and composition is considered an important step towards the genetic improvement of livestock species through the selection of superior animals based on favorable alleles and genotypes, i.e., marker-assisted selection (MAS).
Keywords: genetic polymorphism, SNP polymorphism, Maghrabi camels, κ-casein gene, αS1-casein gene
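The genotype calls in a PCR-RFLP assay follow directly from where the restriction enzyme cuts the amplicon. A small sketch of that fragment-size arithmetic; the cut positions below are inferred from the reported T-allele fragment sizes of the 488-bp κ-CN amplicon (203, 158 and 127 bp), they are not stated in the abstract:

```python
def rflp_fragments(amplicon_len, cut_positions):
    """Fragment sizes after complete digestion, cutting after each listed base (1-based)."""
    bounds = [0] + sorted(cut_positions) + [amplicon_len]
    return [b - a for a, b in zip(bounds, bounds[1:])]

# Hypothetical AluI cut positions for the T allele of the 488-bp kappa-CN amplicon
print(sorted(rflp_fragments(488, [203, 361]), reverse=True))  # [203, 158, 127]
```

Loss or gain of a single cut site (as with the C→T SNP at position 121) changes this fragment pattern, which is what the gel genotyping reads out.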
Procedia PDF Downloads 613
189 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications, such as healthcare, where patients' emotional behavior is gathered. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data. The existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue of feature selection by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem of subjectivity in stress labels, we use Lovheim's cube, which is a 3-dimensional projection of emotions.
Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
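The mapping onto Lovheim's cube reduces the learnt Emo-CNN embeddings to three principal components, one coordinate per cube axis. A minimal numpy sketch of that projection step, using random stand-in embeddings (in the paper these would come from the trained network):

```python
import numpy as np

def pca_project(X, n_components=3):
    """Centre the data and project it onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 64))  # stand-in for learnt Emo-CNN representations

coords = pca_project(embeddings)  # one (x, y, z) point per utterance
print(coords.shape)               # (200, 3)
```

Each 3D point can then be placed relative to the cube's neurotransmitter axes to derive a stress estimate without explicit stress labels.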
Procedia PDF Downloads 154
188 Intelligent Campus Monitoring: YOLOv8-Based High-Accuracy Activity Recognition
Authors: A. Degale Desta, Tamirat Kebamo
Abstract:
Background: Recent advances in computer vision and pattern recognition have significantly improved activity recognition through video analysis, particularly with the application of Deep Convolutional Neural Networks (CNNs). One-stage detectors now enable efficient video-based recognition by simultaneously predicting object categories and locations. Such advancements are highly relevant in educational settings, where CCTV surveillance could automatically monitor academic activities, enhancing security and classroom management. However, current datasets and recognition systems lack the specific focus on campus environments necessary for practical application in these settings. Objective: This study aims to address this gap by developing a dataset and testing an automated activity recognition system specifically tailored for educational campuses. The EthioCAD dataset was created to capture various classroom activities and teacher-student interactions, facilitating reliable recognition of academic activities using deep learning models. Method: EthioCAD, a novel video-based dataset, was created with a design science research approach to encompass teacher-student interactions across three domains and 18 distinct classroom activities. Using the Roboflow AI framework, the data was processed, with 4,224 frames and 33,485 images managed for frame extraction, labeling, and organization. The Ultralytics YOLOv8 model was then implemented within Google Colab to evaluate the dataset's effectiveness, achieving high mean Average Precision (mAP) scores. Results: The YOLOv8 model demonstrated robust activity recognition within campus-like settings, achieving an mAP50 of 90.2% and an mAP50-95 of 78.6%. These results highlight the potential of EthioCAD, combined with YOLOv8, to provide reliable detection and classification of classroom activities, supporting automated surveillance needs on educational campuses.
Discussion: The high performance of YOLOv8 on the EthioCAD dataset suggests that automated activity recognition for surveillance is feasible within educational environments. This system addresses current limitations in campus-specific data and tools, offering a tailored solution for academic monitoring that could enhance the effectiveness of CCTV systems in these settings. Conclusion: The EthioCAD dataset, alongside the YOLOv8 model, provides a promising framework for automated campus activity recognition. This approach lays the groundwork for future advancements in CCTV-based educational surveillance systems, enabling more refined and reliable monitoring of classroom activities.
Keywords: deep CNN, EthioCAD, deep learning, YOLOv8, activity recognition
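The mAP50 and mAP50-95 figures quoted above hinge on intersection-over-union: under mAP50 a predicted box counts as a true positive when its IoU with a ground-truth box is at least 0.5, while mAP50-95 averages over thresholds from 0.5 to 0.95. A self-contained sketch of the IoU computation itself:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half: intersection 50, union 150
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333333333333333
```

At IoU 1/3 this detection would be rejected under mAP50, which is why loose boxes depress the score even when the class label is right.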
Procedia PDF Downloads 10
187 Lying in a Sender-Receiver Deception Game: Effects of Gender and Motivation to Deceive
Authors: Eitan Elaad, Yeela Gal-Gonen
Abstract:
Two studies examined gender differences in lying, first when the truth-telling bias prevailed and then when lying and distrust were encouraged. The first study used 156 participants from the community (78 pairs). First, participants completed the Narcissistic Personality Inventory, the Lie- and Truth Ability Assessment Scale (LTAAS), and the Rational-Experiential Inventory. Then, they participated in a deception game where they performed as senders and receivers of true and false communications. Their goal was to retain as many points as possible according to a payoff matrix that specified the reward they would gain for any possible outcome. Results indicated that males in the sender position lied more and were more successful tellers of lies and truths than females. On the other hand, males, as receivers, trusted less than females but were not better at detecting lies and truths. We explained the results by (a) males' high perceived lie-telling ability: we observed that confidence in telling lies guided participants to increase their use of lies, and males' lie-telling confidence corresponded to earlier accounts that showed a consistent association between high self-assessed lying ability, reports of frequent lying, and predictions of actual lying in experimental settings; (b) males' narcissistic features: earlier accounts described positive relations between narcissism and reported lying or unethical behavior in everyday life situations, predictions about the association between narcissism and frequent lying received support in the present study, and males scored higher than females on the narcissism scale; and (c) males' experiential thinking style: we observed that males scored higher than females on the experiential thinking style scale, and we further hypothesized that the experiential thinking style predicts frequent lying in the deception game; the results confirmed the hypothesis. The second study used one hundred volunteers (40 females) who underwent the same procedure.
However, the payoff matrix encouraged lying and distrust. Results showed that male participants lied more than females. We found no gender differences in trust, and males and females did not differ in their success at telling and detecting lies and truths. Participants also completed the LTAAS questionnaire. Males assessed their lie-telling ability higher than females, but the ability assessment did not predict lying frequency. A final note: the present design is limited to low stakes. Participants knew that they were participating in a game and that they would not experience any consequences from their deception in the game. Therefore, we advise caution when applying the present results to lying under high stakes.
Keywords: gender, lying, detection of deception, information processing style, self-assessed lying ability
Procedia PDF Downloads 148
186 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and can run complicated algorithms like localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning algorithm, until the emotional state was identified. In EEG signal processing, after each EEG signal has been received in real time and transformed from the time domain to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed.
The next stage is to use the selected features to predict emotion in EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. EEG-based emotion identification on edge AI can be employed in applications that will rapidly expand its use in research and industry.
Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
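The FFT step described above (computing the power in each EEG frequency band from a windowed signal) can be sketched as follows; the sampling rate, band edges, and synthetic 10 Hz test signal are illustrative assumptions, not values from the study:

```python
import numpy as np

FS = 250  # hypothetical sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Mean FFT power within classic EEG frequency bands."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

t = np.arange(0, 2, 1 / FS)
alpha_wave = np.sin(2 * np.pi * 10 * t)  # synthetic 10 Hz "alpha" oscillation
powers = band_powers(alpha_wave)
print(max(powers, key=powers.get))       # alpha
```

Per-band power, together with its mean and standard deviation over windows, forms the feature vector that the KNN classifier then compares against labelled arousal/valence examples.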
Procedia PDF Downloads 105
185 Kidnapping of Migrants by Drug Cartels in Mexico as a New Trend in Contemporary Slavery
Authors: Itze Coronel Salomon
Abstract:
The rise of organized crime and violence related to drug cartels in Mexico has created serious challenges for the authorities to provide security to those who live within its borders. However, a significant improvement in security requires absolute respect for fundamental human rights by the authorities. Irregular migrants in Mexico are at serious risk of abuse. Research by Amnesty International, as well as reports of the NHRC (National Human Rights Commission) in Mexico, has indicated the major humanitarian crisis faced by thousands of migrants traveling in the shadows. However, the true extent of the problem remains invisible to the general population. The fact that federal and state governments keep no proper record of abuse and do not publish reliable data contributes to ignorance and misinformation, often spread by media that portray migrants as the source of crime rather than its victims. Discrimination and intolerance against irregular migrants can generate greater hostility and exclusion. According to the modus operandi that has been recorded, criminal organizations and criminal groups linked to drug-trafficking structures deprive migrants of their liberty for forced labor and illegal activities related to drug trafficking; some have even been kidnapped to be trained as murderers. If the victims or their families cannot pay the ransom, the kidnapped persons may suffer torture, mutilation and amputation of limbs, or death. Migrant women are also victims of sexual abuse during their abduction. In 2011, at least 177 bodies were identified in the largest mass grave found in Mexico, located in the town of San Fernando, in the border state of Tamaulipas; most of the victims were killed by blunt instruments, and most seemed to be immigrants and travelers passing through the country.
With dozens of small graves discovered in northern Mexico, this may suggest a shift in tactics among organized crime groups toward different means of obtaining revenue and lower-profile methods of killing. Competition and conflict over territorial control of drug trafficking can provide strong incentives for organized crime groups to send signals of violence to the authorities and rival groups. However, as some Mexican organized crime groups increasingly look to extract income from vulnerable groups, such as Central American migrants, they seem less interested in advertising their work to authorities and others, and more interested in evading detection and confrontation. This paper aims to analyze the introduction of this new trend of kidnapping migrants for forced labor by drug cartels in Mexico into the forms of contemporary slavery, and its implications.
Keywords: international law, migration, transnational organized crime
Procedia PDF Downloads 416
184 Direct Assessment of Cellular Immune Responses to Ovalbumin with a Secreted Luciferase Transgenic Reporter Mouse Strain IFNγ-Lucia
Authors: Martyna Chotomska, Aleksandra Studzinska, Marta Lisowska, Justyna Szubert, Aleksandra Tabis, Jacek Bania, Arkadiusz Miazek
Abstract:
Objectives: Assessing antigen-specific T cell responses is of utmost importance for the pre-clinical testing of prototype vaccines against intracellular pathogens and tumor antigens. Two types of in vitro assays are mainly used for this purpose: 1) enzyme-linked immunospot (ELISpot) and 2) intracellular cytokine staining (ICS). Both are time-consuming, relatively expensive, and require manual dexterity. Here, we assess whether straightforward detection of luciferase activity in blood samples of transgenic reporter mice expressing a secreted Lucia luciferase under the transcriptional control of the IFN-γ promoter parallels the sensitivity of the IFN-γ ELISpot assay. Methods: The IFN-γ-LUCIA mouse strain, carrying multiple copies of the Lucia luciferase transgene under the transcriptional control of the IFN-γ minimal promoter, was generated by pronuclear injection of linear DNA. The specificity of transgene expression and mobilization was assessed in vitro using transgenic splenocytes exposed to various mitogens. The IFN-γ-LUCIA mice were immunized with 50 mg of ovalbumin (OVA) emulsified in incomplete Freund's adjuvant three times, every two weeks, by subcutaneous injections. Blood samples were collected before and five days after each immunization. Luciferase activity was assessed in blood serum. Peripheral blood mononuclear cells were separated and assessed for frequencies of OVA-specific IFN-γ-secreting T cells. Results: We show that in vitro cultured splenocytes of IFN-γ-LUCIA mice respond with 2- and 3-fold increases in secreted luciferase activity to the T cell mitogens concanavalin A and phorbol myristate acetate, respectively, but fail to respond to B cell-stimulating E. coli lipopolysaccharide. Immunization of IFN-γ-LUCIA mice with OVA leads to an over 4-fold increase in luciferase activity in blood serum five days post-immunization, with a barely detectable increase in OVA-specific, IFN-γ-secreting T cells by ELISpot.
The second and third immunizations further increased the luciferase activity and coincidentally also increased the frequencies of OVA-specific T cells by ELISpot. Conclusions: We conclude that minimally invasive monitoring of luciferase secretion in the blood serum of IFN-γ-LUCIA mice constitutes a sensitive method for evaluating primary and memory Th1 responses to protein antigens. As such, this method may complement existing methods for rapid immunogenicity assessment of prototype vaccines.
Keywords: ELISpot, immunogenicity, interferon-gamma, reporter mice, vaccines
Procedia PDF Downloads 170
183 Characterization of Fine Particles Emitted by the Inland and Maritime Shipping
Authors: Malika Souada, Juanita Rausch, Benjamin Guinot, Christine Bugajny
Abstract:
The growth of global commerce and tourism makes the shipping sector an important contributor to atmospheric pollution. Both airborne particles and gaseous pollutants have a negative impact on health and climate. This is especially the case in port cities, due to the proximity of the exposed population to the shipping emissions, in addition to multiple other sources of pollution linked to the surrounding urban activity. The objective of this study is to determine the concentrations of fine particles (immission), specifically PM2.5, PM1, PM0.3, BC and sulphates, in a context where maritime passenger traffic plays an important role (port area of Bordeaux centre). The methodology is based on high temporal resolution measurements of pollutants, correlated with meteorological and ship movement data. Particles and gaseous pollutants from seven maritime passenger ships were sampled and analysed during the docking, manoeuvring and berthing phases. The particle mass measurements were supplemented by measurements of the number concentration of ultrafine particles (<300 nm diameter). The different measurement points were chosen by taking into account the local meteorological conditions and by pre-modelling the dispersion of the smoke plumes. The results of the measurement campaign carried out during the summer of 2021 in the port of Bordeaux show that detectable particle concentrations emitted by ships proved to be sporadic and short-lived. Occasional peaks of ultrafine particle number concentration (P#/m³) and BC (ng/m³) were measured during the docking phases of the ships, but the concentrations returned to their background level within minutes. However, it appears that the influence of the docking phases does not significantly affect the air quality of Bordeaux centre in terms of mass concentration. Additionally, no clear differences in PM2.5 concentrations between the periods with and without ships at berth were observed.
The urban background pollution seems to be mainly dominated by exhaust and non-exhaust road traffic emissions. However, high temporal resolution measurements suggest a probable emission of gaseous precursors, related to ship activities, responsible for the formation of secondary aerosols. This was evidenced by the high values of the PM1/BC and PN/BC ratios, tracers of non-primary particle formation, during periods of ship berthing versus periods without ships at berth. The research findings from this study provide robust support for port area air quality assessment and source apportionment.
Keywords: characterization, fine particulate matter, harbour air quality, shipping impacts
Procedia PDF Downloads 104
182 Energy Atlas: Geographic Information Systems-Based Energy Analysis and Planning Tool
Authors: Katarina Pogacnik, Ursa Zakrajsek, Nejc Sirk, Ziga Lampret
Abstract:
Due to an increase in living standards along with global population growth and a trend of urbanization, municipalities and regions are faced with an ever-rising energy demand. A challenge has arisen for cities around the world to modify the energy supply chain in order to reduce its consumption and CO₂ emissions. The aim of our work is the development of a computational-analytical platform for dynamic support in decision-making and the determination of economic and technical indicators of energy efficiency in a smart city, named Energy Atlas. Similar products in this field focus on a narrower approach, whereas, in order to achieve its aim, this platform encompasses a wider spectrum of beneficial and important information for energy planning on a local or regional scale. GIS-based interactive maps provide an extensive database on the potential, use and supply of energy and renewable energy sources, along with climate, transport and spatial data of the selected municipality. Beneficiaries of Energy Atlas are local communities, companies, investors, contractors, as well as residents. The Energy Atlas platform consists of three modules named E-Planning, E-Indicators and E-Cooperation. The E-Planning module is a comprehensive data service which supports optimal decision-making and offers a set of solutions, together with the feasibility of measures and their effects, in the area of efficient use of energy and renewable energy sources. The E-Indicators module identifies, collects and develops optimal data and key performance indicators and develops an analytical application service for dynamic support in managing a smart city with regard to energy use and a sustainable environment. In order to support cooperation and direct involvement of the citizens of the smart city, the E-Cooperation module is developed with the purpose of integrating the interdisciplinary and sociological aspects of energy end-users.
The interaction of all the above-described modules contributes to regional development because it enables a precise assessment of the current situation, strategic planning, detection of potential future difficulties, and also the possibility of public involvement in decision-making. From the implementation of the technology in the Slovenian municipalities of Ljubljana, Piran, and Novo mesto, there is evidence to suggest that the set goals are being achieved to a great extent. Such a thorough urban energy planning tool is viewed as an important piece of the puzzle towards achieving a low-carbon society, a circular economy and, therefore, a sustainable society.
Keywords: circular economy, energy atlas, energy management, energy planning, low-carbon society
Procedia PDF Downloads 305
181 Generating a Multiplex Sensing Platform for the Accurate Diagnosis of Sepsis
Authors: N. Demertzis, J. L. Bowen
Abstract:
Sepsis is a complex and rapidly evolving condition, resulting from uncontrolled, prolonged activation of the host immune system due to pathogenic insult. The aim of this study is the development of a multiplex electrochemical sensing platform, capable of detecting both pathogen-associated and host immune markers, to enable the rapid and definitive diagnosis of sepsis. A combination of aptamer and molecular imprinting approaches has been employed to generate sensing systems for lipopolysaccharide (LPS), C-reactive protein (CRP) and procalcitonin (PCT). Gold working electrodes were mechanically polished and electrochemically cleaned with 0.1 M sulphuric acid using cyclic voltammetry (CV). Following activation, a self-assembled monolayer (SAM) was generated by incubating the electrodes with a thiolated anti-LPS aptamer / dithiodibutyric acid (DTBA) mixture (1:20). 3-aminophenylboronic acid (3-APBA) in combination with the anti-LPS aptamer was used for the development of the hybrid molecularly imprinted sensor (apta-MIP). Aptasensors targeting PCT and CRP were also fabricated, following the same approach as for LPS, with mercaptohexanol (MCH) replacing DTBA. In the case of the CRP aptasensor, the SAM was formed following incubation of a 1:1 aptamer:MCH mixture. However, in the case of PCT, the SAM was formed with the aptamer itself, with subsequent backfilling with 1 μM MCH. The binding performance of all systems has been evaluated using electrochemical impedance spectroscopy. The apta-MIP's polymer thickness is controlled by varying the number of electropolymerisation cycles. At the ideal number of polymerisation cycles, the polymer must cover the electrode surface and create a binding pocket around LPS and its aptamer binding site. Fewer polymerisation cycles will create a hybrid system that resembles an aptasensor, while more cycles will cover the complex and lead to bulk polymer-like behaviour.
Both the aptasensor and the apta-MIP were challenged with LPS and compared to conventional imprinted polymers (aptamer absent from the binding site, polymer formed in the presence of LPS) and non-imprinted polymers (NIPs, LPS absent whilst the hybrid polymer is formed). A stable LPS aptasensor, capable of detecting down to 5 pg/ml of LPS, was generated. The apparent Kd of the system was estimated at 17 pM, with a Bmax of approximately 50 pM. The aptasensor demonstrated high specificity to LPS. The apta-MIP demonstrated superior recognition properties, with a limit of detection of 1 fg/ml and a Bmax of 100 pg/ml. The CRP and PCT aptasensors were both able to detect down to 5 pg/ml. Whilst full binding performance is still being evaluated, none of the sensors demonstrates cross-reactivity towards LPS, CRP or PCT. In conclusion, stable aptasensors capable of detecting LPS, PCT and CRP at low concentrations have been generated. The realisation of a multiplex panel such as described herein will effectively contribute to the rapid, personalised diagnosis of sepsis.
Keywords: aptamer, electrochemical impedance spectroscopy, molecularly imprinted polymers, sepsis
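Apparent Kd and Bmax values like those quoted for the LPS aptasensor are typically interpreted through a single-site Langmuir binding isotherm, under which the sensor response reaches half of Bmax when the analyte concentration equals Kd. A sketch, assuming that model (the abstract does not state the fitting equation explicitly):

```python
def langmuir_response(conc, kd, bmax):
    """Single-site Langmuir binding isotherm: response = Bmax * C / (Kd + C)."""
    return bmax * conc / (kd + conc)

KD, BMAX = 17.0, 50.0  # apparent Kd (pM) and Bmax (pM) reported for the LPS aptasensor

# Half-saturation occurs exactly at C = Kd
print(langmuir_response(KD, KD, BMAX))  # 25.0
```

Fitting this curve to impedance changes at a series of LPS concentrations is what yields the Kd and Bmax estimates.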
Procedia PDF Downloads 125
180 Basal Cell Carcinoma: Epidemiological Analysis of a 5-Year Period in a Brazilian City with a High Level of Solar Radiation
Authors: Maria E. V. Amarante, Carolina L. Cerdeira, Julia V. Cortes, Fiorita G. L. Mundim
Abstract:
Basal cell carcinoma (BCC) is the most prevalent type of skin cancer in humans. It arises from the basal cells of the epidermis and cutaneous appendages. The role of sunlight exposure as a risk factor for BCC is very well defined due to its power to induce genetic mutations, in addition to having a suppressor effect on the skin immune system. Despite showing low metastasis and mortality rates, the tumor is locally infiltrative, aggressive, and destructive. Considering the high prevalence rate of this carcinoma and the importance of early detection, a retrospective study was carried out in order to correlate the clinical data available on BCC, characterize it epidemiologically, and thus enable effective prevention measures for the population. Data on the period from January 2015 to December 2019 were collected from the medical records of patients registered at a pathology service located in the southeast region of Brazil, known as SVO, which delivers skin biopsy results. The study was aimed at correlating the variables sex, age, and subtypes found. Data analysis was performed using the chi-square test at a nominal significance level of 5% in order to verify the independence between the variables of interest. Fisher's exact test was applied in cases where the absolute frequency in the cells of the contingency table was less than or equal to five. The statistical analysis was performed using the R® software. Ninety-three basal cell carcinomas were analyzed; their frequency in the 31- to 45-year-old age group was 5.8 times higher in men than in women, whereas, from 46 to 59 years, the frequency was 2.4 times higher in women than in men. Between the ages of 46 and 59 years, it should be noted that the sclerodermiform subtype appears more often than the solid one, with a difference of 7.26 percentage points. Conversely, the solid form appears more frequently in individuals aged 60 years or more, with a difference of 8.57 percentage points.
Among women, the frequency of the solid subtype was 9.93 percentage points higher than that of the sclerodermiform subtype. In males, the same percentage difference is observed, but with sclerodermiform as the most prevalent subtype. This study concludes that, in general, basal cell carcinoma predominates in females and in individuals aged 60 years and over, which reflects the typical profile of this tumor. In the rare cases found in younger individuals, however, males predominated. The most prevalent subtype overall was the solid one. It is worth mentioning that the sclerodermiform subtype, which is more aggressive, was seen more frequently in males and in the 46- to 59-year-old range.
Keywords: basal cell carcinoma, epidemiology, sclerodermiform basal cell carcinoma, skin cancer, solar radiation, solid basal cell carcinoma
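The test-selection rule described above (chi-square at the 5% level, falling back to Fisher's exact test when any cell count is at most five) can be sketched for a 2x2 table as follows; this is an illustrative dependency-free sketch, and the table values and helper names are invented, not taken from the study.

```python
from math import comb

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Expected counts under independence of the two variables.
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    obs = [[a, b], [c, d]]
    return sum((obs[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

def fisher_exact_2x2(table):
    """Two-sided Fisher's exact p-value for a 2x2 table (hypergeometric sum)."""
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d
    def p(k):  # probability of observing k in the top-left cell
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    p_obs = p(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs + 1e-12)

def choose_test(table):
    """Fisher's exact test when any cell count is <= 5, else chi-square."""
    cells = [x for row in table for x in row]
    return "fisher" if min(cells) <= 5 else "chi-square"

# Hypothetical sex-by-subtype counts: rows men/women, columns solid/sclerodermiform.
print(choose_test([[3, 10], [12, 20]]))   # small cell count -> fisher
```

In practice `scipy.stats.chi2_contingency` and `scipy.stats.fisher_exact` (or R's `chisq.test` and `fisher.test`, as in the study) would replace these hand-rolled functions.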
Procedia PDF Downloads 139
179 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that suffer from limited inter-rater reliability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. 
We have shown that the level of engagement of students with learning disabilities can be predicted accurately in real time, without depending on human observation and its inter-rater reliability problems, or on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
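Leave-one-out cross-validation, as used above, holds each session out in turn and trains on the rest. A minimal dependency-free sketch using the k-nearest-neighbour baseline (one of the compared models; random forest performed best in the study), with invented toy feature values:

```python
from math import dist

def loocv_accuracy_1nn(features, labels):
    """Leave-one-out cross-validation with a 1-nearest-neighbour classifier:
    each sample is held out in turn and classified by its closest neighbour."""
    correct = 0
    for i, held_out in enumerate(features):
        # "Train" on every sample except the held-out one.
        neighbours = [(dist(held_out, f), labels[j])
                      for j, f in enumerate(features) if j != i]
        _, predicted = min(neighbours)
        correct += predicted == labels[i]
    return correct / len(features)

# Toy 2-D feature vectors (e.g. gaze dispersion, EEG band power); values invented.
engaged = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85)]
disengaged = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15)]
X = engaged + disengaged
y = ["engaged"] * 3 + ["disengaged"] * 3
print(loocv_accuracy_1nn(X, y))  # well-separated toy classes -> 1.0
```

With a library such as scikit-learn, `LeaveOneOut` and `cross_val_score` express the same loop for any of the compared classifiers.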
Procedia PDF Downloads 94
178 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery
Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats
Abstract:
Geoinformation technologies for space agromonitoring support operative decision making in managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum. Time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology was created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information about the agrometeorological conditions of the growing season for individual monitoring years. The technology performs crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) vegetation periods, monitors the dynamics of seasonal changes in crop state, and forecasts crop yield. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and agricultural land state characteristics (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops (Ukraine) was created, and crop spectral signatures are calculated, with preliminary removal of row spacing, cloud cover, and cloud shadows, in order to construct time series of crop growth characteristics. The obtained data are used in grain crop growth tracking and in the timely detection of deviations of growth trends from reference samples of a given crop for a selected date. Statistical models of crop yield forecasting are created in the form of linear and nonlinear relationships between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are evaluated with an accuracy of up to 95%. 
The developed technology was used for agricultural area monitoring in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring Platform (https://crop-monitoring.eos.com). The obtained results support the conclusion that the joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December). The technology also successfully separates the soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform
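Supervised classification against a catalog of crop spectral signatures, as described above, can be illustrated with a minimal nearest-centroid sketch: each pixel is assigned the crop whose catalog signature is closest. The catalog values, sampling dates, and crop list below are invented for illustration and are not taken from the study's Ukrainian catalog.

```python
from math import dist

# Hypothetical catalog of mean spectral signatures: a vegetation index (e.g.
# NDVI) sampled at four dates through the season, one entry per crop.
catalog = {
    "winter wheat": (0.35, 0.55, 0.75, 0.40),
    "rapeseed":     (0.45, 0.70, 0.60, 0.30),
    "sunflower":    (0.15, 0.30, 0.65, 0.55),
}

def classify_pixel(signature, catalog):
    """Minimum-distance (nearest-centroid) supervised classification:
    return the crop whose catalog signature is closest to the observed one."""
    return min(catalog, key=lambda crop: dist(signature, catalog[crop]))

print(classify_pixel((0.33, 0.52, 0.78, 0.42), catalog))  # winter wheat
```

A production classifier would use a trained supervised model over many samples per crop, but the signature-matching idea is the same.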
Procedia PDF Downloads 456
177 Lamivudine Continuation/Tenofovir Add-on Adversely Affects Treatment Response among Lamivudine Non-Responder HIV-HBV Co-Infected Patients from Eastern India
Authors: Ananya Pal, Neelakshi Sarkar, Debraj Saha, Dipanwita Das, Subhashish Kamal Guha, Bibhuti Saha, Runu Chakravarty
Abstract:
Presently, tenofovir disoproxil fumarate (TDF) is the most effective antiviral agent for the treatment of hepatitis B virus (HBV) in individuals co-infected with HIV and HBV, since TDF suppresses both wild-type and lamivudine (3TC)-resistant HBV. However, suboptimal responses to TDF have recently been reported from different countries in HIV-HBV co-infected individuals with prior 3TC therapy. The incidence of 3TC-resistant HBV strains is quite high in HIV-HBV co-infected patients on long-term anti-retroviral therapy (ART) in eastern India. In spite of this risk, most patients on long-term 3TC treatment are continued on the same antiviral agent in this country. Only a few have received TDF in addition to 3TC in the ART regimen, since TDF became available in India for the treatment of HIV-infected patients in 2012. In this preliminary study, we investigated the virologic and biochemical parameters of HIV-HBV co-infected patients who are non-responders to 3TC treatment, during either the continuation of 3TC or the addition of TDF to 3TC in their ART regimen. Fifteen HIV-HBV co-infected patients with long-term 3TC experience (mean duration 36.87 ± 24.08 months) were identified with high HBV viremia (> 20,000 IU/ml) or harbouring 3TC-resistant HBV. These patients, receiving ART at the School of Tropical Medicine Kolkata, the main ART centre in eastern India, were followed up semi-annually for the next three visits. Different virologic parameters were studied, including quantification of the plasma HBV load by real-time PCR, detection of hepatitis B e antigen (HBeAg) by commercial ELISA, and detection of anti-viral resistance mutations by sequencing. During the three follow-ups of the study subjects, 86%, 47%, and 43% were on 3TC monotherapy (mean treatment duration 41.54±18.84, 49.67±11.67, and 54.17±12.37 months, respectively), whereas 14%, 53%, and 57% received TDF in addition to 3TC (mean treatment duration 4.5±2.12, 16.56±11.06, and 23±4.07 months, respectively). 
The mean CD4 cell count in patients receiving 3TC tended to be lower at the third follow-up than at the first and second [520.67±380.30 (1st), 454.8±196.90 (2nd), and 397.5±189.24 (3rd) cells/mm3], and a similar trend was seen in patients receiving TDF in addition to 3TC [334.5±330.218 (1st), 476.5±194.25 (2nd), and 461.17±269.89 (3rd) cells/mm3]. Serum HBV load increased over successive follow-ups of patients on 3TC monotherapy. Initiation of TDF lowered the serum HBV load among 3TC non-responders at the time of the second visit (< 2,000 IU/ml); interestingly, at the third follow-up, mean HBV viraemia had increased by more than 1 log IU/ml (mean 3.56±2.84 log IU/ml). Persistence of 3TC-resistant double and triple mutations was also observed under both treatment regimens. Mean serum alanine aminotransferase remained elevated in these patients during this follow-up study. Persistence of high HBV viraemia and of 3TC-resistant HBV mutations during the continuation of 3TC might become a major public health threat in India. The inclusion of TDF in the ART regimen of 3TC non-responder HIV-HBV co-infected patients showed an adverse treatment response in terms of virologic and biochemical parameters. Therefore, serious attention is necessary for the proper management of long-term 3TC-experienced HIV-HBV co-infected patients with high HBV viraemia or 3TC-resistant HBV mutants in India.
Keywords: HBV, HIV, TDF, 3TC-resistant
Procedia PDF Downloads 374
176 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology
Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao
Abstract:
With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and exhibit large vibrations with long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. Sensor and actuator size and location are important research topics, since they affect the level of vibration detection and reduction and the amount of energy required by the controller. Several methodologies have been presented to determine the optimal locations of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring a fitness function based on eigenvalues and eigenvectors obtained for numerous combinations of sensor/actuator (s/a) pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for small- and large-scale structures when a number of s/a pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of s/a pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton's principle. 
The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum sensor voltage for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. The resulting optimal sensor locations agree well with published optimal locations, but are obtained with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
Keywords: optimisation, plate, sensor effectiveness, vibration control
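The effectiveness measure described above (each sensor's voltage divided by the per-mode maximum, averaged over the modes of interest) can be sketched as follows; the voltage values are invented for illustration.

```python
# Hypothetical sensor output voltages: rows are candidate sensor locations,
# columns are the vibration modes of interest (values invented).
voltages = [
    [0.9, 0.1, 0.7],   # location 0
    [0.3, 0.8, 0.2],   # location 1
    [0.1, 0.2, 0.9],   # location 2
    [0.5, 0.4, 0.3],   # location 3
]

def average_effectiveness(voltages):
    """Percentage effectiveness of each location: its output voltage divided
    by the maximum voltage for each mode, averaged over all modes."""
    n_modes = len(voltages[0])
    mode_max = [max(row[m] for row in voltages) for m in range(n_modes)]
    return [100 * sum(row[m] / mode_max[m] for m in range(n_modes)) / n_modes
            for row in voltages]

scores = average_effectiveness(voltages)
best = max(range(len(scores)), key=scores.__getitem__)
print(best, scores)  # location 0 dominates two of the three modes
```

In the study, the voltages come from a finite element model with sensors at all candidate locations; the top-ranked locations then receive the collocated s/a pairs.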
Procedia PDF Downloads 231
175 Innovative Technologies Functional Methods of Dental Research
Authors: Sergey N. Ermoliev, Margarita A. Belousova, Aida D. Goncharenko
Abstract:
Application of a diagnostic complex of highly informative functional methods (electromyography, reodentography, laser Doppler flowmetry, reoperiodontography, vital computer capillaroscopy, optical tissue oximetry, laser fluorescence diagnosis) makes it possible to perform a multifactorial analysis of the dental status and to prescribe complex etiopathogenetic treatment. Introduction. It is necessary to create a complex of innovative, highly informative, and safe functional diagnostic methods to improve the quality of patient treatment through the early detection of stomatologic diseases. The purpose of the present study was to investigate the etiology and pathogenesis of functional disorders identified in pathology of the hard tissues, dental pulp, periodontium, oral mucosa, and chewing function, and to create new approaches to the diagnosis of dental diseases. Material and methods. 172 patients were examined. The density of the hard tissues of the teeth and jaw bone was studied by intraoral ultrasonic densitometry (USD). The electromyographic activity of the masticatory muscles was assessed by electromyography (EMG). The functional state of the dental pulp vessels was assessed by reodentography (RDG) and laser Doppler flowmetry (LDF). Regional blood flow in the periodontal tissues was studied by reoperiodontography (RPG). The periodontal microvasculature was studied by vital computer capillaroscopy (VCC) and laser Doppler flowmetry (LDF). The metabolic level of the mucous membrane was determined by optical tissue oximetry (OTO) and laser fluorescence diagnosis (LFD). Results and discussion. The results obtained revealed changes in the mineral density of the hard tissues of the teeth and jaw bone, the bioelectric activity of the masticatory muscles, and regional blood flow and microcirculation in the dental pulp and periodontal tissues. The LDF and OTO methods estimated fluctuations in the saturation level and oxygen transport in the microvasculature of the periodontal tissues. 
LFD identified changes in the concentration of fluorophores (nicotinamide, flavins, lipofuscin, porphyrins) involved in metabolic processes. Our preliminary results also confirmed the feasibility and safety of the intraoral ultrasonic densitometry technique for assessing the density of periodontal bone tissue. Conclusion. Application of the diagnostic complex of the above-mentioned highly informative functional methods makes it possible to perform a multifactorial analysis of the dental status and to prescribe complex etiopathogenetic treatment.
Keywords: electromyography (EMG), reodentography (RDG), laser Doppler flowmetry (LDF), reoperiodontography (RPG), vital computer capillaroscopy (VCC), optical tissue oximetry (OTO), laser fluorescence diagnosis (LFD)
Procedia PDF Downloads 280
174 An Observation Approach of Reading Order for Single Column and Two Column Layout Template
Authors: In-Tsang Lin, Chiching Wei
Abstract:
Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. Our literature survey finds that state-of-the-art algorithms fail to produce an accurate reading order for portable document format (PDF) files with rich formatting and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to detecting reading-order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has refined its layout recognition (LR) engine, demonstrated in revision 20601, in pursuit of an accurate reading order. The bounding box of each paragraph is obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layouts because of tables, formulas, footers, and multiple small separated bounding boxes. Thus, an algorithm is developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a novel observation method (here called the MESH method) is proposed, opening a new direction in reading-order research. Two important parameters are introduced: the number of bounding boxes to the right of the present bounding box (NRight) and the number of bounding boxes below the present bounding box (Nunder). The normalized x-value (x divided by the page width), the normalized y-value (y divided by the page height), and the x- and y-positions of each bounding box were also taken into consideration. 
Initial experimental results for the single-column layout demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) for our proposed method based on the LR structure, compared to the baseline method using the LR structure of revision 20601, whose reading-order accuracy is 72%. For the two-column layout, preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) for our proposed method, compared to the baseline method, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple small separated bounding box issue can be solved with the MESH method. However, three issues remain unsolved: tables, formulas, and randomly placed multiple small separated bounding boxes. The detection of table positions and the recognition of table structure are out of scope for this paper and require separate research. Future work will address how to detect the table position on the page and how to extract the content of the table.
Keywords: document processing, reading order, observation method, layout recognition
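The two MESH parameters can be sketched for normalized paragraph bounding boxes as follows. The abstract does not specify exactly when a box counts as "to the right of" or "under" another; the vertical/horizontal overlap criterion used here is our assumption, and the toy page layout is invented.

```python
from typing import NamedTuple

class Box(NamedTuple):
    x: float  # left edge, normalized by page width
    y: float  # top edge, normalized by page height
    w: float  # width
    h: float  # height

def n_right(box, boxes):
    """NRight: boxes strictly to the right of `box` that overlap it vertically
    (the overlap criterion is our assumption, not stated in the abstract)."""
    return sum(1 for b in boxes if b is not box
               and b.x >= box.x + box.w
               and b.y < box.y + box.h and b.y + b.h > box.y)

def n_under(box, boxes):
    """Nunder: boxes strictly below `box` that overlap it horizontally."""
    return sum(1 for b in boxes if b is not box
               and b.y >= box.y + box.h
               and b.x < box.x + box.w and b.x + b.w > box.x)

# A toy two-column page: two paragraphs per column.
left_top, left_bottom = Box(0.05, 0.05, 0.4, 0.2), Box(0.05, 0.3, 0.4, 0.2)
right_top, right_bottom = Box(0.55, 0.05, 0.4, 0.2), Box(0.55, 0.3, 0.4, 0.2)
page = [left_top, left_bottom, right_top, right_bottom]
print(n_right(left_top, page), n_under(left_top, page))  # 1 1
```

A left-column paragraph has NRight >= 1 while a right-column paragraph has NRight = 0, which is the kind of signal that lets the method distinguish column order from simple top-to-bottom order.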
Procedia PDF Downloads 181
173 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem for the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a consequence, perturbations from the perfect solution, due to round-off or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 which, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J with respect to u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. 
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
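The AGM loop (forward integration, cost evaluation, adjoint transport of the residual, gradient update of u0) can be illustrated on a much simpler irreversible system than the NSE: the 1D heat equation. This is only an analogue sketch with illustrative parameters, not the study's 2D turbulence setup; here the explicit diffusion operator is symmetric, so the adjoint run happens to be another forward diffusion of the residual.

```python
import numpy as np

# 1D heat equation u_t = nu * u_xx on a periodic grid; recovering u0 from a
# diffused target v1 by gradient descent stands in for backward integration.
N, nu, dt, steps = 64, 0.01, 4e-3, 50
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]                         # nu*dt/dx**2 ~ 0.16, explicitly stable

def forward(u):
    """Explicit-Euler forward integration of the diffusion equation."""
    for _ in range(steps):
        u = u + nu * dt / dx**2 * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))
    return u

v1 = forward(np.sin(2.0 * np.pi * x))    # target field at the final time
u0 = np.zeros(N)                         # initial guess for the initial field
for _ in range(50):
    residual = forward(u0) - v1          # u1 - v1
    # The gradient of J = ||u1 - v1||^2 w.r.t. u0 is the residual transported
    # back through the adjoint model; the diffusion operator is symmetric, so
    # the adjoint run is itself a forward diffusion of the residual.
    u0 = u0 - forward(residual)

J = float(np.sum((forward(u0) - v1) ** 2))
```

Note the stabilizing mechanism: strongly damped high-wavenumber modes barely move under the gradient updates, so they are never amplified the way a naive negative-viscosity backward march would amplify them.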
Procedia PDF Downloads 224
172 Finding the Association Rule between Nursing Interventions and Early Evaluation Results of In-Hospital Cardiac Arrest to Improve Patient Safety
Authors: Wei-Chih Huang, Pei-Lung Chung, Ching-Heng Lin, Hsuan-Chia Yang, Der-Ming Liou
Abstract:
Background: In-hospital cardiac arrest (IHCA) threatens the lives of inpatients and seriously affects patient safety, the quality of inpatient care, and hospital services. Health providers must identify the signs of IHCA early to prevent its occurrence. This study considers the potential association between early signs of IHCA and the patient care provided by nurses and other professionals before an IHCA occurs. The aim of this study is to identify significant associations between nursing interventions and abnormal early evaluation results of IHCA that can assist health care providers in monitoring inpatients at risk of IHCA, to increase the opportunities for early detection and prevention of IHCA. Materials and Methods: This study used a data mining technique, association rule mining, to compute associations between nursing interventions and abnormal early evaluation results of IHCA. A nursing intervention and an abnormal early evaluation result of IHCA were considered to be co-occurring if the nursing intervention was provided within 24 hours of the abnormal early evaluation result last being observed. The rule-based methods were applied to 23.6 million electronic medical records (EMR) from a medical center in Taipei, Taiwan. This dataset includes 733 concepts of nursing interventions, coded with clinical care classification (CCC) codes, and 13 early evaluation results of IHCA with binary codes. The values of interestingness and lift were computed as Q values to measure the co-occurrence and strength of association between all in-hospital patient care measures and abnormal early evaluation results of IHCA. The associations were evaluated by comparing the Q values and were verified by medical experts. Results and Conclusions: The results show 4195 pairs of associations between nursing interventions and abnormal early evaluation results of IHCA, with their Q values. 
Positive associations are indicated for 203 pairs with Q values greater than 5. Inpatients with a high blood sugar level (hyperglycemia) are positively associated with a heart rate lower than 50 or higher than 120 beats per minute (Q value 6.636). Inpatients with a temporary pacemaker (TPM) are significantly associated with a high risk of IHCA (Q value 47.403). There is a significant positive correlation between hypovolemia and the occurrence of abnormal heart rhythms (arrhythmias) (Q value 127.49). The results of this study can help prevent IHCA by enabling health care providers to recognize inpatients at risk of IHCA early, assist with monitoring patients to provide quality care, and improve IHCA surveillance and the quality of in-hospital care.
Keywords: in-hospital cardiac arrest, patient safety, nursing intervention, association rule mining
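Lift, one of the measures computed above, compares the observed co-occurrence rate of an intervention and an early sign with the rate expected if they were independent; values above 1 indicate positive association. A minimal sketch over toy records follows; the intervention and early-sign code names are invented, and the study's exact Q-value definition (combining interestingness and lift) may differ.

```python
def lift(records, antecedent, consequent):
    """Lift of the rule antecedent -> consequent over a set of records:
    P(A and B) / (P(A) * P(B)); values > 1 indicate positive association."""
    n = len(records)
    n_a = sum(antecedent <= r for r in records)        # records containing A
    n_b = sum(consequent <= r for r in records)        # records containing B
    n_ab = sum((antecedent | consequent) <= r for r in records)  # both
    return (n_ab / n) / ((n_a / n) * (n_b / n))

# Hypothetical toy records: each is the set of codes observed within one
# patient's 24-hour window (intervention and early-sign names invented).
records = [
    {"pacing_care", "abnormal_heart_rate"},
    {"pacing_care", "abnormal_heart_rate"},
    {"hygiene_care"},
    {"hygiene_care", "abnormal_heart_rate"},
    {"hygiene_care"},
    {"pacing_care"},
]
print(lift(records, {"pacing_care"}, {"abnormal_heart_rate"}))
```

At the study's scale, a library such as mlxtend or Spark MLlib would mine frequent itemsets first and then score the candidate rules with measures like this.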
Procedia PDF Downloads 271