Search results for: genetic variation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3792

222 Placement Characteristics of Major Stream Vehicular Traffic at Median Openings

Authors: Tathagatha Khan, Smruti Sourava Mohapatra

Abstract:

Median openings are provided in the raised median of multilane roads to facilitate U-turn movement. The U-turn movement is a highly complex and risky maneuver because the U-turning vehicle (minor stream) makes a 180° turn at the median opening and merges with the approaching through traffic (major stream). A U-turning vehicle requires a suitable gap in the major stream to merge, and during this process the possibility of merging conflict develops. These median openings are therefore potential hot spots of conflict and pose safety concerns. Traffic at median openings can be managed efficiently and with enhanced safety only when the capacity of the facility has been estimated correctly. The capacity of U-turns at median openings is estimated by Harder’s formula, which requires three basic parameters, namely the critical gap, the follow-up time and the conflicting flow rate. The estimation of conflicting flow rate under mixed traffic conditions is complicated by the absence of lane discipline and the discourteous behavior of drivers. Understanding the placement of major stream vehicles at median openings is therefore essential for estimating the conflicting traffic faced by the U-turning movement. Placement data of major stream vehicles at different sections of 4-lane and 6-lane divided multilane roads were collected. All the test sections were free from the effects of intersections, bus stops, parked vehicles, curvature, pedestrian movements or any other side friction. For the purpose of analysis, all vehicles were divided into six categories: motorized two-wheeler, auto-rickshaw (three-wheeler), small car, big car, light commercial vehicle, and heavy vehicle. For the collection of placement data, the entire road width was divided into strips of 25 cm each, numbered seriatim from the pavement edge (curbside) to the end of the road. The placement of each major stream vehicle crossing the reference line was recorded by videographic technique on various weekdays. The collected data for each category of vehicle at all test sections were converted into a frequency table with a class interval of 25 cm, and placement frequency curves were developed. Separate distribution fittings were tried for 4-lane and 6-lane divided roads. The effect of variation in major stream traffic volume on the placement characteristics of major stream vehicles has also been explored. The findings of this study will be helpful in determining the conflict volume at median openings. The present work therefore holds significance for traffic planning, operation and design aimed at alleviating bottlenecks, collision risk and delay at median openings in general, and at median openings in developing countries in particular.
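
The capacity relation referred to above is not written out in the abstract. As a rough illustration only, the following minimal sketch implements the commonly cited Harders-type gap-acceptance capacity formula, C = q_c·e^(-q_c·t_c) / (1 - e^(-q_c·t_f)), with the conflicting flow q_c, critical gap t_c and follow-up time t_f as inputs; the function name and the example values are hypothetical and not taken from the study.

```python
import math

def uturn_capacity_harders(q_c_vph, t_c, t_f):
    """Potential capacity of a minor (U-turn) stream by a Harders-type
    gap-acceptance formula.

    q_c_vph : conflicting (major stream) flow rate in veh/h
    t_c     : critical gap in seconds
    t_f     : follow-up time in seconds
    Returns the capacity in veh/h.
    """
    q_c = q_c_vph / 3600.0                      # convert to veh/s
    return 3600.0 * q_c * math.exp(-q_c * t_c) / (1.0 - math.exp(-q_c * t_f))

# Hypothetical example values, for illustration only
print(round(uturn_capacity_harders(q_c_vph=900, t_c=5.5, t_f=2.5), 1), "veh/h")
```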

Keywords: median opening, U-turn, conflicting traffic, placement, mixed traffic

Procedia PDF Downloads 115
221 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Control Release of Doxorubicin

Authors: Parisa Shirzadeh

Abstract:

Drug delivery systems in which drugs are administered in the traditional way, in multiple stages and at specified intervals by patients, do not meet the needs of modern drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of body hormones, most of which are made with genetic engineering techniques. Most of these drugs are used to treat critical diseases such as cancer. Due to the limitations of the traditional method, researchers have sought ways to solve its problems to a large extent. Following these efforts, controlled drug release systems were introduced, which have many advantages. Using controlled release, the concentration of the drug in the body is kept at a certain level rather than being delivered at a high rate within a short time. Compared to carbon nanotubes, graphene is a biodegradable, non-toxic and natural material; its price is lower than that of carbon nanotubes and it is cost-effective for industrialization. On the other hand, the large and highly reactive surface of graphene sheets makes graphene more effective to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. In comparison with the initial graphene, the resulting graphene oxide is heavier and carries carboxyl, hydroxyl, and epoxy groups. Therefore, graphene oxide is very hydrophilic, dissolves easily in water and forms a stable solution. Because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can react with other functional groups such as amines, esters and polymers, connecting them to the surface and bringing new features to graphene. In fact, it can be concluded that the creation of hydroxyl, carboxyl and epoxy groups, i.e. graphene oxidation, is the first step in creating other functional groups on the surface of graphene. Chitosan is a natural polymer and does not cause toxicity in the body. Due to its chemical structure, with OH and NH groups, it is suitable for binding to graphene oxide and increasing its solubility in aqueous solutions. In this work, graphene oxide (GO) was covalently modified by chitosan (CS) and developed for controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions. It was then chlorinated with oxalyl chloride to increase its reactivity towards amines. After that, in the presence of chitosan, the amidation reaction was performed to form an amide graft, and doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman, TGA, and SEM. The loading and release capacity was determined by UV-Visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and pH-dependent release from the GO-CS nanosheet was identified at pH 5.3 and 7.4, with a faster release rate under acidic conditions.
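
As a minimal, hypothetical sketch of how drug loading and release are typically quantified from UV-Visible absorbance readings (the abstract does not give its calculation), the snippet below converts absorbance to concentration through an assumed linear (Beer-Lambert) calibration and computes the loading efficiency; all names and numbers are illustrative, not values from the study.

```python
def concentration(absorbance, slope, intercept=0.0):
    """Convert a UV-Vis absorbance reading to concentration via a linear calibration."""
    return (absorbance - intercept) / slope

def loading_efficiency(c_initial, c_free):
    """Percentage of drug adsorbed onto the carrier (loading efficiency)."""
    return 100.0 * (c_initial - c_free) / c_initial

def cumulative_release(released_mg, loaded_mg):
    """Percentage of the loaded drug released into the buffer."""
    return 100.0 * released_mg / loaded_mg

# Hypothetical calibration slope and absorbance readings, for illustration only
c0 = concentration(absorbance=1.200, slope=0.025)   # initial DOX in solution
cf = concentration(absorbance=0.012, slope=0.025)   # free DOX left after loading
print(f"loading efficiency ≈ {loading_efficiency(c0, cf):.1f} %")
```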

Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin

Procedia PDF Downloads 98
220 A Model for Language Intervention: Toys & Picture-Books as Early Pedagogical Props for the Transmission of Lazuri

Authors: Peri Ozlem Yuksel-Sokmen, Irfan Cagtay

Abstract:

Oral languages are destined to disappear rapidly in the absence of interventions aimed at encouraging their usage by young children. The seminal language preservation model proposed by Fishman (1991) stresses the importance of multiple generations using the endangered L1 while engaged in daily routines with younger children. Over the last two decades, Fishman (2001) has used his intergenerational transmission model in documenting the revitalization of the Basque language, providing evidence that families are successfully transmitting Euskara as a first language to their children. In our study, to motivate usage of Lazuri, we asked caregivers to speak the language while engaged with their toddlers (12 to 48 months) in semi-structured play, and included both parents (N=32) and grandparents (N=30) as play partners. This unnatural prompting to speak only in Lazuri was greeted with reluctance, as 90% of our families indicated that they had stopped using Lazuri with their children. Nevertheless, caregivers followed instructions and produced 67% of their utterances in Lazuri, with another 14% of utterances using a combination of Lazuri and Turkish (codeswitch). Although children spoke mostly in Turkish (83% of utterances), the frequency of caregiver utterances in Lazuri or codeswitch predicted the extent to which their children used the minority language in return. This trend suggests that home interventions aimed at encouraging dyads to communicate in a non-preferred, endangered language can effectively increase children’s usage of the language. Alternatively, this result suggests that any use of the minority language on the part of the children will promote its further usage by caregivers. For researchers examining links between play, culture, and child development, structured play has emerged as a critical methodology (e.g., Frost, Wortham, & Reifel, 2007; Lillard et al., 2012; Sutton-Smith, 1986; Gaskins & Miller, 2009), allowing investigation of cultural and individual variation in parenting styles, as well as the role of culture in constraining the affordances of toys. Toy props, as well as picture-books in native languages, can be used as tools in the transmission and preservation of endangered languages by allowing children to explore adult roles through enactment of social routines and conversational patterns modeled by caregivers. Through adult-guided play, children not only acquire scripts for culturally significant activities, but also develop skills in expressing themselves in culturally relevant ways that may continue to develop over their lives through community engagement. Further pedagogical tools, such as language games and e-learning, will be discussed in this proposed oral talk.

Keywords: language intervention, pedagogical tools, endangered languages, Lazuri

Procedia PDF Downloads 305
219 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses account for almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce secondary flow loss. In this paper, the non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created by using a perturbation law based on Bezier curves. Each cut, with multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated arbitrarily based on values automatically chosen for the control points defined during parameterization. The optimization was carried out with two algorithms, i.e. a stochastic algorithm and a gradient-based algorithm. For the stochastic algorithm, a genetic algorithm based on an artificial neural network was used as the optimization method in order to reach the global optimum. The evaluation of the successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming as it uses derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant, and the performance was quantified by a multi-objective function. Beyond these two classes of optimization methods, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and a sequential case in which the shroud endwall was optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization resulted in an increase in efficiency, while total pressure loss and entropy were reduced. The simultaneous combination of hub and shroud did not match the results achieved for the individual hub and shroud cases, possibly because there were too many control variables. The fourth case showed the best result because the optimized hub was used as the initial geometry to optimize the shroud; the efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
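
The perturbation law itself belongs to the NUMECA tool chain and is not reproduced in the abstract; as an illustrative sketch of the general idea only, the code below evaluates Bezier curves from control-point heights along a few circumferential cuts and stacks them into a non-axisymmetric endwall height map. The control-point values, number of cuts and array shapes are assumptions.

```python
import numpy as np
from math import comb

def bezier(control_points, n=50):
    """Evaluate a 1-D Bezier curve (Bernstein form) from scalar control points."""
    cp = np.asarray(control_points, dtype=float)
    d = len(cp) - 1
    t = np.linspace(0.0, 1.0, n)
    basis = np.array([comb(d, i) * t**i * (1 - t)**(d - i) for i in range(d + 1)])
    return basis.T @ cp            # (n,) perturbation heights along one cut

# Hypothetical control points for three cuts placed along virtual streamlines;
# end points are pinned to zero so the perturbation blends into the annulus.
cuts = [
    [0.0, 0.8, -0.5, 0.0],         # near leading edge
    [0.0, 1.2, -0.9, 0.0],         # mid passage
    [0.0, 0.4, -0.2, 0.0],         # near trailing edge
]
endwall = np.stack([bezier(c) for c in cuts])   # (n_cuts, n_points) height map
print(endwall.shape, endwall.max(), endwall.min())
```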

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 209
218 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution

Authors: Peter G. Hollis, Kim G. Clarke

Abstract:

The overall volumetric oxygen transfer coefficient (KLa) is ubiquitously quantified in bioprocesses by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure in the sparge gas using a DO probe. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa, failing which a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as hydrocarbon-based systems. Here, significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first order model (without incorporation of τ) and a second order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen saturated vessel to an oxygen saturated bioreactor, and is represented as the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane and 450-1000 rpm. A response surface was fitted to the empirical data, while ANOVA was used to determine the significance of the effects with a 95% confidence interval. KP varied with changes in the system parameters, with the impact of solids loading statistically significant at the 95% confidence level. Increased solids loading reduced KP consistently, an effect which was magnified at high alkane concentrations, with a minimum KP of 0.024 s-1 observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s-1 recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of differing process conditions. Importantly, exclusion of KP from the calculation of KLa was shown to under-predict KLa for all process conditions, with an error of up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore KP, has far-reaching impact on industrial bioprocesses to ensure these systems are not transport limited during scale-up and operation. This study has shown the incorporation of τ to be essential to ensure KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it is essential that τ be determined individually for each set of process conditions.
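
The first- and second-order response models are not written out in the abstract; the sketch below uses the commonly applied forms, in which the second-order model adds a first-order probe lag through the probe constant KP = 1/τ, and fits both to a synthetic response to show how ignoring the lag under-predicts KLa. All parameter values and the noise level are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, kla):
    """Normalised DO response ignoring probe lag."""
    return 1.0 - np.exp(-kla * t)

def second_order(t, kla, kp):
    """Normalised DO response including a first-order probe lag (KP = 1/tau)."""
    return 1.0 - (kp * np.exp(-kla * t) - kla * np.exp(-kp * t)) / (kp - kla)

# Synthetic response for illustration: "true" KLa = 0.02 1/s, KP = 0.04 1/s
t = np.linspace(0, 300, 120)
y = second_order(t, 0.02, 0.04) + np.random.default_rng(1).normal(0, 0.005, t.size)

(kla1,), _ = curve_fit(first_order, t, y, p0=[0.01])
(kla2, kp2), _ = curve_fit(second_order, t, y, p0=[0.01, 0.05])
print(f"first-order KLa  = {kla1:.4f} 1/s (under-predicts)")
print(f"second-order KLa = {kla2:.4f} 1/s, KP = {kp2:.4f} 1/s")
```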

Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag

Procedia PDF Downloads 246
217 Tool Development for Assessing Antineoplastic Drugs Surface Contamination in Healthcare Services and Other Workplaces

Authors: Benoit Atge, Alice Dhersin, Oscar Da Silva Cacao, Beatrice Martinez, Dominique Ducint, Catherine Verdun-Esquer, Isabelle Baldi, Mathieu Molimard, Antoine Villa, Mireille Canal-Raffin

Abstract:

Introduction: Healthcare workers' exposure to antineoplastic drugs (AD) is a burning issue for occupational medicine practitioners. Biological monitoring of occupational exposure (BMOE) is an essential tool for assessing the AD contamination of healthcare workers. In addition to BMOE, surface sampling is a useful tool to understand how workers get contaminated, to identify sources of environmental contamination, to verify the effectiveness of surface decontamination procedures and to ensure monitoring of these surfaces. The objective of this work was to develop a complete tool including a kit for surface sampling and a quantification analytical method for the detection of AD traces. The development was guided by the three following criteria: the capacity of the kit to sample in every professional environment (healthcare services, veterinary clinics, etc.), the detection of very low AD traces with a validated analytical method, and the ease of use of the sampling kit regardless of the person in charge of sampling. Material and method: The AD most used in terms of quantity and frequency were identified by an analysis of the literature and of the consumption of different hospitals, veterinary services, and home care settings. The type of adsorbent device, the surface moistening solution and the mix of solvents for the extraction of AD from the adsorbent device were tested for maximal yield. AD quantification was achieved by an ultra-high-performance liquid chromatography method coupled with tandem mass spectrometry (UHPLC-MS/MS). Results: Based on their high frequency of use and their good representation of the diverse activities across healthcare, 15 AD (cyclophosphamide, ifosfamide, doxorubicin, daunorubicin, epirubicin, 5-FU, dacarbazine, etoposide, pemetrexed, vincristine, cytarabine, methotrexate, paclitaxel, gemcitabine, mitomycin C) were selected. The analytical method was optimized and adapted to obtain high sensitivity with very low limits of quantification (25 to 5000 ng/mL), equivalent to or lower than those previously published (for 13 of the 15 AD). The sampling kit is easy to use and is provided with didactic support (an online video and a written protocol). It demonstrated its effectiveness without inter-individual variation (n=5/person; n=5 persons; p=0.85; ANOVA) regardless of the person in charge of sampling. Conclusion: This validated tool (sampling kit + analytical method) is very sensitive, easy to use and very didactic for controlling the chemical risk brought by AD. Moreover, BMOE permits focused prevention. Used routinely, this tool is available for every occupational health intervention.

Keywords: surface contamination, sampling kit, analytical method, sensitivity

Procedia PDF Downloads 111
216 A Geochemical Perspective on A-Type Granites of Khanak and Devsar Areas, Haryana, India: Implications for Petrogenesis

Authors: Naresh Kumar, Radhika Sharma, A. K. Singh

Abstract:

Granites from the Khanak and Devsar areas, part of the Malani Igneous Suite (MIS), were investigated for their geochemical characteristics to understand the petrogenetic aspects of the research area. Neoproterozoic rocks of the MIS are well exposed in the Jhunjhunu, Jodhpur, Pali, Barmer, Jalor and Jaisalmer districts of Rajasthan and the Bhiwani district of Haryana, and also occur at the Kirana hills of Pakistan. The MIS predominantly consists of acidic volcanics with acidic plutonics (granites of various types), mafic volcanics, mafic intrusives and a minor amount of pyroclasts. Based on field and petrographical studies, 28 samples were selected and analyzed for major, trace and rare earth elements at the Wadia Institute of Himalayan Geology, Dehradun, by X-ray fluorescence spectrometry (XRF) and inductively coupled plasma mass spectrometry (ICP-MS). Granites from the studied areas are categorized as grey, green and pink. The Khanak granites consist of quartz, K-feldspar, plagioclase, and biotite as essential minerals, and hematite, zircon, annite, monazite and rutile as accessory minerals. In the Devsar granites, plagioclase is replaced by perthite, which occurs dominantly. Geochemically, the granites from the Khanak and Devsar areas exhibit typical A-type characteristics, with enrichment in SiO2, Na2O+K2O, Fe/Mg, Rb, Zr, Y, Th, U and REE (except Eu) and significant depletion in MgO, CaO, Sr, P, Ti, Ni, Cr, V and Eu, suggesting A-type affinities in northwestern Peninsular India. The heat production (HP) in the green and grey granites of the Devsar area reaches up to 9.68 and 11.70 μWm-3, with total heat generation units (HGU) of 23.04 and 27.86, respectively. The pink granites of the Khanak area display higher HP (16.53 μWm-3) and HGU (39.37) than the granites from the Devsar area. Overall, they have much higher values of HP and HGU than the average value of the continental crust (3.8 HGU), which implies a possible linear relationship between surface heat flow and crustal heat generation in the rocks of the MIS. Chondrite-normalized REE patterns show enriched LREE, moderate to strong negative Eu anomalies and more or less flat heavy REE. In primitive mantle-normalized multi-element variation diagrams, the granites show pronounced depletion in the high-field-strength elements (HFSE) Nb, Zr, Sr, P, and Ti. Geochemical characteristics (major, trace and REE), along with the use of various discrimination schemes, revealed their probable derivation from magma of crustal origin by differing degrees of partial melting.
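
The abstract does not state how HP and HGU were computed; a minimal sketch using the widely applied Rybach (1976) relation is given below, with an assumed rock density and a hypothetical composition. The conversion 1 HGU ≈ 0.4184 μW m-3 is consistent with the HP/HGU ratios quoted in the abstract.

```python
def heat_production(rho, u_ppm, th_ppm, k_wt):
    """Radiogenic heat production A in µW/m^3 (Rybach, 1976).

    rho    : rock density in kg/m^3
    u_ppm  : uranium concentration in ppm
    th_ppm : thorium concentration in ppm
    k_wt   : elemental potassium in wt%
    """
    return 1e-5 * rho * (9.52 * u_ppm + 2.56 * th_ppm + 3.48 * k_wt)

def to_hgu(a_uW_m3):
    """Convert µW/m^3 to heat generation units (1 HGU = 10^-13 cal cm^-3 s^-1)."""
    return a_uW_m3 / 0.4184

# Hypothetical granite composition; density assumed to be 2650 kg/m^3
a = heat_production(rho=2650, u_ppm=12.0, th_ppm=45.0, k_wt=4.0)
print(f"HP = {a:.2f} µW/m^3  ->  {to_hgu(a):.2f} HGU")
```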

Keywords: A-type granite, neoproterozoic, Malani igneous suite, Khanak, Devsar

Procedia PDF Downloads 252
215 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference

Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev

Abstract:

Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complexity of the dynamics of this disease and to study its drivers, a comprehensive model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate and carry insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and lack integration of all the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions in Sri Lanka at the weekly time scale. Ablation studies were conducted to determine the lag effects of time-varying climate factors, allowing delays of up to 12 weeks. The model demonstrates superior predictive performance over a pure machine learning approach when considering lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting and interpretable findings about drivers, while adjusting for the dynamics and influences of immunity and the introduction of a new strain. The study uncovers strong influences of socioeconomic variables: population density, mobility, household income and rural vs. urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicated sensitivity to the vegetation index, both maximum and average. Predictions on testing data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that use biologically informed model structures with flexible data-driven estimates of model parameters. The findings show the potential for both inference of drivers in situations of complex disease dynamics and robust forecasting models.
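
The hybrid model itself is data-driven; as a minimal mechanistic backbone only, the sketch below integrates a deterministic vector (SEI) to human (SEIR) system at a weekly output resolution. All parameter values and compartment sizes are hypothetical, and none of the machine-learned, climate- or mobility-dependent terms described above are included.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sei_seir(t, y, beta_hv, beta_vh, sigma_h, gamma_h, sigma_v, mu_v, Nh, Nv):
    Sh, Eh, Ih, Rh, Sv, Ev, Iv = y
    # human SEIR compartments
    dSh = -beta_hv * Sh * Iv / Nh
    dEh = beta_hv * Sh * Iv / Nh - sigma_h * Eh
    dIh = sigma_h * Eh - gamma_h * Ih
    dRh = gamma_h * Ih
    # vector SEI compartments with mosquito turnover mu_v
    dSv = mu_v * Nv - beta_vh * Sv * Ih / Nh - mu_v * Sv
    dEv = beta_vh * Sv * Ih / Nh - (sigma_v + mu_v) * Ev
    dIv = sigma_v * Ev - mu_v * Iv
    return [dSh, dEh, dIh, dRh, dSv, dEv, dIv]

Nh, Nv = 1e6, 2e6
y0 = [Nh - 10, 0, 10, 0, Nv, 0, 0]
params = dict(beta_hv=0.30, beta_vh=0.30, sigma_h=1/5.9, gamma_h=1/5.0,
              sigma_v=1/10.0, mu_v=1/14.0, Nh=Nh, Nv=Nv)   # hypothetical rates
sol = solve_ivp(sei_seir, (0, 364), y0, args=tuple(params.values()),
                t_eval=np.arange(0, 365, 7))                # weekly time scale
print(sol.y[2, :5])   # human infectious compartment over the first weeks
```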

Keywords: compartmental model, climate, dengue, machine learning, social-economic

Procedia PDF Downloads 49
214 Documentary Project as an Active Learning Strategy in a Developmental Psychology Course

Authors: Ozge Gurcanli

Abstract:

Recent studies in active learning focus on how student experience varies based on the content (e.g. STEM versus Humanities) and the medium (e.g. in-class exercises versus off-campus activities) of experiential learning. However, little is known about whether variation in classroom time and space within the same active learning context affects student experience. This study manipulated the use of classroom time for the active learning component of a developmental psychology course offered at a four-year university in the South-West region of the United States. The course uses a blended model: traditional and active learning. In the traditional component of the course, students do weekly readings, listen to lectures, and take midterms. In the active learning component, students make a documentary on a developmental topic as a final project. Students used classroom time and space for the documentary in two ways: regular classroom time slots dedicated to making the documentary outside, without the supervision of the professor (Classroom-time Outside), and lectures that offered basic instructions on how to make a documentary (Documentary Lectures). The study used the public teaching evaluations administered by the Office of the Registrar. A total of two hundred and seven student evaluations were available across six semesters. Because the Office of the Registrar presented the data separately without personal identifiers, a one-way ANOVA with four groups (Traditional; Experiential-Heavy: 19% Classroom-time Outside, 12% Documentary Lectures; Experiential-Moderate: 5-7% Classroom-time Outside, 16-19% Documentary Lectures; Experiential-Light: 4-7% Classroom-time Outside, 7% Documentary Lectures) was conducted on five key features (Organization, Quality, Assignments Contribution, Intellectual Curiosity, Teaching Effectiveness). Each measure used a five-point reverse-coded scale (1 = Outstanding, 5 = Poor). For all experiential conditions, the documentary counted towards 30% of the final grade. Organization (‘The instructor’s preparation for class was’), Quality (’Overall, I would rate the quality of this course as’) and Assignment Contribution (’The contribution of the graded work to the learning experience was’) did not yield any significant differences across the four course types (F(3, 202) = 1.72, p > .05; F(3, 200) = .32, p > .05; F(3, 203) = .43, p > .05, respectively). Intellectual Curiosity (’The instructor’s ability to stimulate intellectual curiosity was’) yielded a marginal effect (F(3, 201) = 2.61, p = .053). Tukey’s HSD (p < .05) indicated that the Experiential-Heavy condition (M = 1.94, SD = .82) was significantly different from the other three conditions (M = 1.57, 1.51, 1.58; SD = .68, .66, .77, respectively), showing that heavily active class time did not elicit intellectual curiosity as much as the others. Finally, Teaching Effectiveness (’Overall, I feel that the instructor’s effectiveness as a teacher was’) was significant (F(3, 198) = 3.32, p < .05). Tukey’s HSD (p < .05) showed that students found the courses with moderate (M = 1.49, SD = .62) to light (M = 1.52, SD = .70) active class time more effective than the course with heavily active class time (M = 1.93, SD = .69). Overall, the findings of this study suggest that, within the same active learning context, the time and space dedicated to active learning result in different outcomes for intellectual curiosity and teaching effectiveness.
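
A minimal sketch of the analysis pipeline described above (one-way ANOVA followed by Tukey's HSD) is shown below using standard SciPy/statsmodels calls; the ratings are synthetic placeholders generated only to make the example runnable, not the study's evaluation data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic 5-point reverse-coded ratings (1 = Outstanding), for illustration only
rng = np.random.default_rng(0)
groups = {
    "Traditional":           rng.normal(1.55, 0.7, 52).clip(1, 5),
    "Experiential-Heavy":    rng.normal(1.95, 0.7, 50).clip(1, 5),
    "Experiential-Moderate": rng.normal(1.50, 0.7, 53).clip(1, 5),
    "Experiential-Light":    rng.normal(1.52, 0.7, 52).clip(1, 5),
}

F, p = f_oneway(*groups.values())
print(f"One-way ANOVA: F = {F:.2f}, p = {p:.3f}")

scores = np.concatenate(list(groups.values()))
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))   # pairwise group comparisons
```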

Keywords: active learning, learning outcomes, student experience, learning context

Procedia PDF Downloads 160
213 Experimental and Numerical Investigation of Fracture Behavior of Foamed Concrete Based on Three-Point Bending Test of Beams with Initial Notch

Authors: M. Kozłowski, M. Kadela

Abstract:

Foamed concrete is known for its low self-weight and excellent thermal and acoustic properties. For many years, it has been used worldwide as insulation for foundations and roof tiles, as backfill to retaining walls, for sound insulation, etc. In recent years, however, it has also become a promising material for structural purposes, e.g. for the stabilization of weak soils. Due to the favorable properties of foamed concrete, many studies have analyzed its strength and its mechanical, thermal and acoustic properties. However, these studies do not cover the investigation of fracture energy, which is the core factor governing damage and fracture mechanisms; only a limited number of publications can be found in the literature. This paper presents the results of an experimental investigation and a numerical campaign on foamed concrete based on three-point bending tests of beams with an initial notch. The first part of the paper presents the results of a series of static loading tests performed to investigate the fracture properties of foamed concrete of varying density. Beam specimens with dimensions of 100×100×840 mm with a central notch were tested in three-point bending. Subsequently, the remaining halves of the specimens, with dimensions of 100×100×420 mm, were tested again as un-notched beams in the same set-up with a reduced distance between supports. The tests were performed in a hydraulic displacement-controlled testing machine with a load capacity of 5 kN. Apart from the load and mid-span displacement, the crack mouth opening displacement (CMOD) was monitored. Based on the load-displacement curves of the notched beams, the values of fracture energy and tensile stress at failure were calculated. The flexural tensile strength was obtained on un-notched beams with dimensions of 100×100×420 mm. Moreover, cube specimens of 150×150×150 mm were tested in compression to determine the compressive strength. The second part of the paper deals with the numerical investigation of the fracture behavior of the beams with an initial notch presented in the first part. The Extended Finite Element Method (XFEM) was used to simulate and analyze the damage and fracture process. The influence of meshing and of the variation of mechanical properties on the results was investigated. The numerical models correctly simulate the behavior of the beams observed during three-point bending. The numerical results show that XFEM can be used to simulate different fracture toughnesses of foamed concrete and different fracture types. Using XFEM and computer simulation technology allows for a reliable approximation of the load-bearing capacity and damage mechanisms of beams made of foamed concrete, which provides a foundation for realistic structural applications.
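
The abstract does not spell out how fracture energy was evaluated from the load-displacement curves; a minimal sketch following the usual RILEM-type work-of-fracture evaluation is given below. The ligament-area definition, the optional self-weight correction and the softening curve are assumptions made for illustration.

```python
import numpy as np

def fracture_energy(load_N, deflection_m, b, d, a0, m_beam=0.0, g=9.81):
    """Fracture energy G_F (N/m) by the work-of-fracture method for a
    notched three-point bending beam.

    load_N, deflection_m : measured load-deflection curve
    b, d   : beam width and depth (m)
    a0     : initial notch depth (m)
    m_beam : beam mass between supports (kg), optional self-weight correction
    """
    W0 = np.trapz(load_N, deflection_m)        # work done by the external load
    W_self = m_beam * g * deflection_m[-1]     # approximate self-weight contribution
    A_lig = b * (d - a0)                       # ligament area above the notch
    return (W0 + W_self) / A_lig

# Hypothetical softening curve for illustration (not measured data)
delta = np.linspace(0, 2e-3, 200)
P = 400 * np.exp(-delta / 4e-4) * (delta / 4e-4)   # rises to a peak, then softens
print(f"G_F ≈ {fracture_energy(P, delta, b=0.10, d=0.10, a0=0.03):.1f} N/m")
```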

Keywords: foamed concrete, fracture energy, three-point bending, XFEM

Procedia PDF Downloads 273
212 Determinants of Never Users of Contraception-Results from Pakistan Demographic and Health Survey 2012-13

Authors: Arsalan Jabbar, Wajiha Javed, Nelofer Mehboob, Zahid Memon

Abstract:

Introduction: There are multiple social, individual and cultural factors that influence an individual’s decision to adopt family planning methods, especially among non-users in patriarchal societies like Pakistan. Non-users, if targeted efficiently, can contribute significantly to the country’s contraceptive prevalence rate (CPR). A research study showed that non-users, if convinced to adopt the lactational amenorrhea method, can shift to long-term methods in the future. Research also shows that if non-users are targeted efficiently, a 59% reduction in unintended pregnancies in sub-Saharan Africa and South-Central and South-East Asia is anticipated. Methods: We did a secondary data analysis on the Pakistan Demographic and Health Survey (2012-13) dataset. Use of contraception (never-use/ever-use) was the outcome variable. At the univariate level, the Chi-square/Fisher exact test was used to assess the relationship of baseline covariates with contraception use. Variables to be incorporated in the model were then checked for multi-collinearity, confounding, and interaction. Binary logistic regression (with an urban-rural stratification) was then done to find the relationship between contraception use and baseline demographic and social variables. Results: The multivariate analyses showed that younger women (≤ 29 years) were more prone to be never users as compared to those who were > 30 years, and this trend was seen in urban areas (AOR 1.92, CI 1.453-2.536) as well as rural areas (AOR 1.809, CI 1.421-2.303). Looking at regional variation, women from urban Sindh (AOR 1.548, CI 1.142-2.099) and urban Balochistan (AOR 2.403, CI 1.504-3.839) had more never users as compared to other urban regions. Women in the rich wealth quintile were more often never users, and this was seen in both urban and rural localities (urban: AOR 1.106, CI .753-1.624; rural: AOR 1.162, CI .887-1.524), even though these results were not statistically significant. Women idealizing more children (> 4) were more often never users as compared to those idealizing fewer children in both urban (AOR 1.854, CI 1.275-2.697) and rural areas (AOR 2.101, CI 1.514-2.916). Women who never lost a pregnancy were more inclined to be non-users in rural areas (AOR 1.394, CI 1.127-1.723). Women familiar with only traditional methods or no method included more never users in rural areas (AOR 1.717, CI 1.127-1.723), but in urban areas the effect was not significant. Women unaware of a Lady Health Worker’s presence in their area were more often never users, especially in rural areas (AOR 1.276, CI 1.014-1.607). Women who did not visit any care provider were more often never users (urban: AOR 11.738, CI 9.112-15.121; rural: AOR 7.832, CI 6.243-9.826). Discussion/Conclusion: This study concluded that government, policy makers and private sector family planning programs should focus on the untapped pool of never users (younger women from underserved provinces, in higher wealth quintiles, who desire more children). We need to make sure to cover catchment areas where there are fewer LHWs and fewer providers, as ignorance of modern methods and never having been visited by an LHW are important determinants of never use. This is all in line with previous literature from similar developing countries.
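
A minimal sketch of the modelling step (binary logistic regression reported as adjusted odds ratios with 95% confidence intervals) is shown below; the predictors and the synthetic data are hypothetical stand-ins rather than DHS variables, and the urban-rural stratification would simply repeat the fit on each stratum.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration of the modelling step; variable names are hypothetical
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "age_le_29":  rng.integers(0, 2, n),   # 1 = woman aged <= 29 years
    "ideal_gt_4": rng.integers(0, 2, n),   # 1 = ideal family size > 4 children
    "knows_lhw":  rng.integers(0, 2, n),   # 1 = aware of a Lady Health Worker
})
logit = -0.5 + 0.6 * df.age_le_29 + 0.7 * df.ideal_gt_4 - 0.3 * df.knows_lhw
df["never_user"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age_le_29", "ideal_gt_4", "knows_lhw"]])
fit = sm.Logit(df["never_user"], X).fit(disp=False)

aor = np.exp(fit.params)        # adjusted odds ratios
ci = np.exp(fit.conf_int())     # 95% confidence intervals
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```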

Keywords: contraception, demographic and health survey, family planning, never users

Procedia PDF Downloads 384
211 Effects of Hydrogen Bonding and Vinylcarbazole Derivatives on 3-Cyanovinylcarbazole Mediated Photo-Cross-Linking Induced Cytosine Deamination

Authors: Siddhant Sethi, Yasuharu Takashima, Shigetaka Nakamura, Kenzo Fujimoto

Abstract:

Site-directed mutagenesis is a renowned technique for introducing specific mutations into the genome. To achieve site-directed mutagenesis, many chemical and enzymatic approaches have been reported in the past, such as bisulfite-induced genome editing, CRISPR-Cas9, TALEN, etc. The chemical methods are invasive, whereas the enzymatic approaches are time-consuming and expensive. Most of these techniques are unusable in cellular applications due to their toxicity and other limitations. Photochemical cytosine deamination, introduced in 2010, is one of the major techniques for enzyme-free single-point mutation of cytosine to uracil in DNA and RNA, wherein an oligodeoxyribonucleotide (ODN) containing 3-cyanovinylcarbazole nucleoside (CNVK) at the -1 position relative to the target cytosine is reversibly crosslinked to the target DNA strand using 366 nm irradiation and then incubated at 90ºC to accomplish deamination. This technique is superior to enzymatic methods of site-directed mutagenesis but has the disadvantage of requiring a high temperature for the deamination step, which restricts its applicability in vivo. This study has focused on improving the technique by reducing the temperature required for deamination. Firstly, the photo-cross-linker CNVK was modified by replacing the cyano group attached to the vinyl group with a methyl ester (OMeVK), amide (NH2VK), or carboxylic acid (OHVK) to observe the acceleration of the deamination of the target cytosine cross-linked to the vinylcarbazole derivative. Among the derivatives, OHVK showed a two-fold acceleration of the deamination reaction as compared to CNVK, while the other two derivatives showed deceleration of the deamination reaction. The trend in the rate of the deamination reaction follows the same order as the hydrophilicity of the vinylcarbazole derivatives. OHVK, being the most hydrophilic, showed the highest acceleration, while OMeVK, being the least hydrophilic, proved to be the least active towards deamination. Secondly, in a related study, the counter-base of the target cytosine, guanine, was replaced by inosine, 2-aminopurine, nebularine, or 5-nitroindole, which have distinct hydrogen bonding patterns with the target cytosine. Among the ODNs with these counter bases, the ODN with inosine showed a 12-fold acceleration of the deamination of cytosine cross-linked to CNVK under physiological conditions as compared to guanosine, whereas when 2-aminopurine, nebularine, or 5-nitroindole were used, no deamination reaction took place. It can be concluded that inosine has the potential to be used as the counter base of the target cytosine for CNVK-mediated photo-cross-linking-induced deamination of cytosine. The increase in the rate of the deamination reaction has been attributed to the pattern and number of hydrogen bonds between the cytosine and the counter base. One important factor is the presence of a hydrogen bond between the exocyclic amino group of cytosine and the counter base. These results will be useful for the development of a more efficient technique for site-directed C → U transformations in DNA/RNA, which might be used in living systems for the treatment of various genetic disorders and in genome engineering for making designer and non-native proteins.

Keywords: C to U transformation, DNA editing, genome engineering, ultra-fast photo-cross-linking

Procedia PDF Downloads 212
210 Vitamin B9 Separation by Synergic Pertraction

Authors: Blaga Alexandra Cristina, Kloetzer Lenuta, Bompa Amalia Stela, Galaction Anca Irina, Cascaval Dan

Abstract:

Vitamin B9 is an important member of the B group of vitamins, being a growth factor important for making genetic material such as DNA and RNA and red blood cells, and for building muscle tissue, especially during infancy, adolescence and pregnancy. Its production by biosynthesis is based on the high metabolic potential of mutant Bacillus subtilis, and offers superior bioavailability compared to that obtained by chemical pathways. Pertraction, defined as extraction and transport through liquid membranes, consists of the transfer of a solute between two aqueous phases of different pH values, phases that are separated by a solvent layer of various sizes. Pertraction efficiency and selectivity can be significantly enhanced by adding a carrier to the liquid membrane, such as organophosphoric compounds, long-chain amines or crown ethers, the separation process then being called facilitated pertraction. The aim of this work is to determine the impact of the presence of two extractants/carriers in the bulk liquid membrane, i.e. di(2-ethylhexyl) phosphoric acid (D2EHPA) and lauryltrialkylmethylamine (Amberlite LA-2), on the transport kinetics of vitamin B9. The experiments were carried out using two pertraction devices for a free or bulk liquid membrane. One pertraction cell consists of a U-shaped glass pipe (used for the dichloromethane membrane) and the second one is an H-shaped glass pipe (used for n-heptane), having a 45 mm inner diameter and a total volume of 450 mL, the volume of each compartment being 150 mL. The aqueous solutions are independently mixed by means of double-blade stirrers of 6 mm diameter and 3 mm height at a rotation speed of 500 rpm. In order to reach high diffusional rates through the solvent layer, the organic phase was mixed with a similar stirrer at a similar rotation speed (500 rpm). The mass transfer area, both for extraction and for reextraction, was 1.59x10-3 m2. The study of facilitated pertraction with the mixture of two carriers, namely D2EHPA and Amberlite LA-2, dissolved in two solvents of different polarities, n-heptane and dichloromethane, indicated the possibility of obtaining a synergic effect. The synergism has been analyzed by considering the vitamin initial and final mass flows, as well as the permeability factors through the liquid membrane. The synergic effect was observed at low D2EHPA concentrations and high Amberlite LA-2 concentrations, being more important for the low-polarity solvent (n-heptane). The results suggest that the mechanism of synergic pertraction consists of the reaction between the organophosphoric carrier and vitamin B9 at the interface between the feed and membrane phases, while the aminic carrier enhances the hydrophobicity of this compound by solvation. However, the formation of this complex reduces the reextraction rate and, consequently, affects the synergism related to the final mass flows and permeability factor. To describe the influence of the carrier concentrations on the synergistic coefficients, equations have been proposed that take into account the vitamin mass flows or permeability factors, with average deviations between 4.85% and 10.73%.

Keywords: pertraction, synergism, vitamin B9, Amberlite LA-2, di(2-ethylhexyl) phosphoric acid

Procedia PDF Downloads 246
209 Untangling the Greek Seafood Market: Authentication of Crustacean Products Using DNA-Barcoding Methodologies

Authors: Z. Giagkazoglou, D. Loukovitis, C. Gubili, A. Imsiridou

Abstract:

Along with the increase in the human population, the demand for seafood has increased. Despite the strict labeling regulations that exist for most marketed species in the European Union, seafood substitution remains a persistent global issue. Food fraud occurs when food products are traded in a false or misleading way. Mislabeling occurs when one species is substituted and traded under the name of another, and it can be intentional or unintentional. Crustaceans are among the most regularly consumed seafood in Greece. Shrimps, prawns, lobsters, crayfish, and crabs are considered a delicacy and can be encountered in a variety of market presentations (fresh, frozen, pre-cooked, peeled, etc.). With most of the external traits removed, such products are susceptible to species substitution. DNA barcoding has proven to be the most accurate method for the detection of fraudulent seafood products. To the best of our knowledge, DNA barcoding methodology is used here for the first time in Greece in order to investigate the labeling practices for crustacean products available on the market. A total of 100 tissue samples were collected from various retailers and markets across four Greek cities. In an effort to cover the widest range of products possible, different market presentations were targeted (fresh, frozen and cooked). Genomic DNA was extracted using the DNeasy Blood & Tissue Kit, according to the manufacturer's instructions. The mitochondrial gene selected as the target region of the analysis was cytochrome c oxidase subunit I (COI). PCR products were purified and sequenced using an ABI 3500 Genetic Analyzer. Sequences were manually checked and edited using BioEdit software and compared against those available in the GenBank and BOLD databases. Statistical analyses were conducted in R and PAST software. For most samples, COI amplification was successful, and species-level identification was possible. The preliminary results estimate moderate mislabeling rates (25%) in the identified samples. Mislabeling was most commonly detected in fresh products, with 50% of the samples in this category labeled incorrectly. Overall, the mislabeling rates detected by our study probably relate to some degree of unintentional misidentification and a lack of knowledge surrounding the legal designations by both retailers and consumers. For some species of crustaceans (i.e. Squilla mantis) the mislabeling appears to be also affected by local labeling practices. Across Greece, S. mantis is sold on the market under two common names, but only one is recognized by the country's legislation, and therefore any mislabeling is probably not profit motivated. However, the substitution of the speckled shrimp (Metapenaeus monoceros) for the distinct giant river prawn (Macrobrachium rosenbergii) is a clear example of deliberate fraudulent substitution aiming for profit. To the best of our knowledge, no scientific study investigating substitution and mislabeling rates in crustaceans has been conducted in Greece. For a better understanding of Greece's seafood market, similar DNA barcoding studies in other regions of touristic importance (e.g., the Greek islands) should be conducted. Regardless, the expansion of the list of species-specific designations for crustaceans in the country is advised.

Keywords: COI gene, food fraud, labelling control, molecular identification

Procedia PDF Downloads 40
208 Assessment of Cytogenetic Damage as a Function of Radiofrequency Electromagnetic Radiations Exposure Measured by Electric Field Strength: A Gender Based Study

Authors: Ramanpreet, Gursatej Gandhi

Abstract:

Background: Dependence on the electromagnetic radiation involved in communication and information technologies has increased incredibly in the personal and professional world. Among the numerous radiation sources are fixed-site transmitters, mobile phone base stations and power lines, besides indoor devices like cordless phones, WiFi, Bluetooth, TV, radio, microwave ovens, etc. Moreover, mobile phone base stations continuously emit radiofrequency radiation (RFR) even to those not using the devices. The consistent and widespread usage of wireless devices has built up electromagnetic fields everywhere. In fact, the radiofrequency electromagnetic field (RF-EMF) has insidiously become a part of the environment and, like any contaminant, may be health-hazardous, requiring assessment. Materials and Methods: In the present study, cytogenetic damage was assessed as a function of radiation exposure using the Buccal Micronucleus Cytome (BMCyt) assay, after Institutional Ethics Committee clearance of the study and written voluntary informed consent from the participants. General information, lifestyle patterns (diet, physical activity, smoking, drinking, use of mobile phones, internet, Wi-Fi usage, etc.) and genetic, reproductive (pedigrees) and medical histories were recorded on a pre-designed questionnaire. For this, 24-hour personal exposimeter measurements (PEM) were recorded for 60 unrelated healthy adults (40 cases residing in the vicinity of mobile phone base stations since their installation and 20 controls residing in areas with no base stations). The personal exposimeter collects information from all the sources generating EMF (TETRA, GSM, UMTS, DECT, and WLAN) as total RF-EMF uplink and downlink. Findings: The cases (n=40; 23-90 years) and the controls (n=20; 19-65 years) were matched for alcohol drinking, smoking habits, and mobile and cordless phone usage. The PEM in cases (149.28 ± 8.98 mV/m) revealed significantly higher (p=0.000) electric field strength compared to the value recorded in controls (80.40 ± 0.30 mV/m). The GSM 900 uplink (p=0.000), GSM 1800 downlink (p=0.000), UMTS (both uplink, p=0.013, and downlink, p=0.001) and DECT (p=0.000) electric field strengths were significantly elevated in the cases as compared to controls. The electric field strength in the cases came mainly from GSM 1800 (52.26 ± 4.49 mV/m), followed by GSM 900 (45.69 ± 4.98 mV/m), UMTS (25.03 ± 3.33 mV/m) and DECT (18.02 ± 2.14 mV/m), and was least from WLAN (8.26 ± 2.35 mV/m). The significantly (p=0.000) higher exposure of the cases was from GSM (97.96 ± 6.97 mV/m) in comparison to UMTS, DECT, and WLAN. The frequencies of micronuclei (1.86X, p=0.007), nuclear buds (2.95X, p=0.002) and the cell death parameter of condensed chromatin cells (1.75X, p=0.007) were significantly elevated in cases compared to controls, probably as a function of radiofrequency radiation exposure. Conclusion: In the absence of other exposure(s), any cytogenetic damage, if unrepaired, is a cause of concern as it can cause malignancy. A larger sample size with clinical assessment will provide more insight into such an effect.

Keywords: Buccal micronucleus cytome assay, cytogenetic damage, electric field strength, personal exposimeter

Procedia PDF Downloads 133
207 Densities and Volumetric Properties of {Difurylmethane + [(C5 – C8) N-Alkane or an Amide]} Binary Systems at 293.15, 298.15 and 303.15 K: Modelling Excess Molar Volumes by Prigogine-Flory-Patterson Theory

Authors: Belcher Fulele, W. A. A. Ddamba

Abstract:

The study of solvent systems contributes to the understanding of the intermolecular interactions that occur in binary mixtures. These interactions involve, among others, strong dipole-dipole interactions and weak van der Waals interactions, which are of significant relevance in pharmaceuticals, solvent extraction, reactor design and solvent handling and storage processes. Binary mixtures of solvents can thus be used as a model to interpret the thermodynamic behavior that occurs in real solution mixtures. Densities of pure DFM, of the n-alkanes (n-pentane, n-hexane, n-heptane and n-octane) and of the amides (N-methylformamide, N-ethylformamide, N,N-dimethylformamide and N,N-dimethylacetamide), as well as of their [DFM + ((C5-C8) n-alkane or amide)] binary mixtures over the entire composition range, have been reported at temperatures of 293.15, 298.15 and 303.15 K and atmospheric pressure. These data have been used to derive the thermodynamic properties: the excess molar volume of the solution, apparent molar volumes, excess partial molar volumes, limiting excess partial molar volumes and limiting partial molar volumes of each component of a binary mixture. The results are discussed in terms of possible intermolecular interactions and structural effects that occur in the binary mixtures. The variation of excess molar volume with DFM composition for the [DFM + (C5-C7) n-alkane] binary mixtures exhibits sigmoidal behavior, while for the [DFM + n-octane] binary system a positive deviation of the excess molar volume function was observed over the entire composition range. For each of the [DFM + (C5-C8) n-alkane] binary mixtures, the excess molar volume decreased with increasing temperature. The excess molar volume for each [DFM + (NMF or NEF or DMF or DMA)] binary system was negative over the entire DFM composition range at each of the three temperatures investigated. The negative deviations in excess molar volume follow the order: DMA > DMF > NEF > NMF. An increase in temperature has a greater effect on component self-association than on complex formation between the molecules of the components in the [DFM + (NMF or NEF or DMF or DMA)] binary mixtures, which shifts the complex-formation equilibrium towards the complex and gives a drop in excess molar volume with increasing temperature. The Prigogine-Flory-Patterson model has been applied at 298.15 K and reveals that the free volume term is the most important contribution to the experimental excess molar volume data for the [DFM + (n-pentane or n-octane)] binary systems. For the [DFM + (NMF or DMF or DMA)] binary mixtures, the interactional term and the characteristic pressure term are the most important contributions in describing the sign of the experimental excess molar volume. These mixture systems contribute to the understanding of the interactions of polar solvents that model proteins (amides) with non-polar solvents (alkanes) in biological systems.
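
The excess molar volume itself is obtained from the measured densities through the standard relation V_E = (x1·M1 + x2·M2)/ρ_mix − x1·M1/ρ1 − x2·M2/ρ2; the sketch below uses hypothetical molar masses and densities purely for illustration, not the reported data.

```python
def excess_molar_volume(x1, M1, M2, rho_mix, rho1, rho2):
    """Excess molar volume V_E (cm^3/mol) of a binary mixture.

    x1         : mole fraction of component 1
    M1, M2     : molar masses (g/mol)
    rho_mix    : density of the mixture (g/cm^3)
    rho1, rho2 : densities of the pure components (g/cm^3)
    """
    x2 = 1.0 - x1
    V_mix = (x1 * M1 + x2 * M2) / rho_mix        # real molar volume of the mixture
    V_ideal = x1 * M1 / rho1 + x2 * M2 / rho2    # ideal (additive) molar volume
    return V_mix - V_ideal

# Hypothetical values for illustration only (not the measured data)
print(f"V_E = {excess_molar_volume(0.4, 148.2, 73.1, 1.055, 1.130, 0.944):.3f} cm^3/mol")
```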

Keywords: alkanes, amides, excess thermodynamic parameters, Prigogine-Flory-Patterson model

Procedia PDF Downloads 333
206 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), additionally providing verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. In addition to the 'CelebA' training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground truth images of a criminal in order to calculate their similarity. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). A high score on these performance metrics demonstrates the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
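
A minimal sketch of the evaluation step is given below, computing PSNR and SSIM with scikit-image (a recent version with the channel_axis argument is assumed); the images are random placeholders standing in for a generated composite and its ground-truth photograph.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_composite(generated, ground_truth):
    """Compare a generated facial composite against the ground-truth photo."""
    psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=255)
    ssim = structural_similarity(ground_truth, generated,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim

# Synthetic placeholder images for illustration (real use: GAN output vs. mugshot)
rng = np.random.default_rng(0)
truth = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)
generated = np.clip(truth.astype(int) + rng.integers(-10, 10, truth.shape),
                    0, 255).astype(np.uint8)

psnr, ssim = evaluate_composite(generated, truth)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```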

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 139
205 Biochemical Effects of Low Dose Dimethyl Sulfoxide on HepG2 Liver Cancer Cell Line

Authors: Esra Sengul, R. G. Aktas, M. E. Sitar, H. Isan

Abstract:

Hepatocellular carcinoma (HCC) is a hepatocellular tumor commonly arising in the chronically diseased liver. HepG2 is the most commonly used cell type in HCC studies. The main proteins remaining in the blood serum after separation of plasma fibrinogen are albumin and globulin. The fact that albumin indicates hepatocellular damage and reflects the synthesis capacity of the liver was the main reason for our use of it. Alpha-fetoprotein (AFP) is an albumin-like embryonic globulin found in the embryonic cortex, cord blood, and fetal liver. It has been used as a marker in the follow-up of tumor growth in various malignant tumors and of the efficacy of surgical and medical treatments, so it is a good protein to examine alongside albumin. Having seen the morphological changes caused by dimethyl sulfoxide (DMSO) on HepG2 cells, we decided to investigate its biochemical effects. We examined the effects of low doses of DMSO, which is used in cell cultures, on albumin, AFP and total protein. Materials and Methods: Cell culture: Medium was prepared using Dulbecco's Modified Eagle Medium (DMEM), fetal bovine serum (FBS), phosphate-buffered saline and trypsin maintained at -20°C. Fixation of cells: HepG2 cells, appropriately developed at the end of the first week, were fixed with acetone. We stored our cells in PBS at +4°C until the fixation was completed. Area calculation: The areas of the cells were calculated in ImageJ (IJ). Microscopic examination: The examination was performed with a Zeiss inverted microscope. Photographs were taken at 40x, 100x, 200x and 400x. Biochemical tests: Total protein: the serum sample was analyzed by a spectrophotometric method in an autoanalyzer. Albumin: the serum sample was analyzed by a spectrophotometric method in an autoanalyzer. Alpha-fetoprotein: the serum sample was analyzed by the ECLIA method. Results: When liver cancer cells were cultured in medium with 1% DMSO for 4 weeks, a significant difference was observed compared with the control group. As a result, we have seen that DMSO can be used as an important agent in the treatment of liver cancer. Cell areas were reduced in the DMSO group compared to the control group, and the confluency ratio increased. The ability to form spheroids was also significantly higher in the DMSO group. Alpha-fetoprotein was lower than the values of a typical liver cancer patient, and the total protein amount increased to the reference range of a normal individual. Because the albumin level was below the detection limit, numerical results could not be obtained in the biochemical examinations. We interpret all these results as indicating that DMSO can act as a supportive agent. Since no single parameter was sufficient alone, we used three parameters, and the results were positive when compared in parallel with the values of a normal healthy individual. We hope to extend the study further by adding new parameters and genetic analyses, by increasing the number of samples, and by using DMSO as an adjunct agent in the treatment of liver cancer.

Keywords: hepatocellular carcinoma, HepG2, dimethyl sulfoxide, cell culture, ELISA

Procedia PDF Downloads 116
204 Solutions to Reduce CO2 Emissions in Autonomous Robotics

Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu

Abstract:

Mobile robots can be used in many different applications, including mapping, search and rescue, reconnaissance, hazard detection, carpet cleaning, and exploration. However, they are limited by their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity presents threats of pollution resulting from its production process, and oil poses a huge threat to the environment: not only does it cause harm through the toxic emissions (for instance, CO2) released by the combustion process needed to produce energy, but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help to mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. Those recharge docks are based on the use of renewable energy, namely solar energy (with photovoltaic panels), with the objective of reducing CO2 emissions. In this paper, a comparative study of CO2 emission production (from the use of different energy sources: natural gas, gas oil, fuel, and solar panels) in the charging process of the Segway PT batteries is carried out. To carry out the study with solar energy, a photovoltaic panel and a buck-boost DC/DC block have been used. Specifically, the STP005S-12/Db solar panel has been used in our experiments. This module is a 5 Wp photovoltaic (PV) module configured with 36 monocrystalline cells connected in series. With those elements, a battery recharge station is built to recharge the robot batteries. For the energy storage DC/DC block, a series of ultracapacitors has been used. Due to the variation of the PV panel output with temperature and irradiation, the non-integer (fractional-order) behavior of the ultracapacitors, and the non-linearities of the whole system, the authors have used a fractional control method to ensure that the solar panels supply the maximum allowed power and recharge the robots in the least time. Greenhouse gas emissions from the production of electricity vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most hazardous fossil energy source, producing 53% more gas emissions than natural gas and 30% more than fuel. Moreover, it is remarkable that existing fossil fuel technologies produce a high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation but may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.
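
The comparison described above amounts to multiplying the energy drawn to recharge a battery by the carbon emission intensity of each source. The sketch below illustrates this; the battery capacity, charger efficiency, and intensity values are assumptions (the intensities are only chosen to roughly match the relative figures quoted in the abstract, i.e., coal about 53% above natural gas and about 30% above fuel), not figures from the paper.

```python
# Minimal sketch: CO2 emitted per full battery recharge for different
# energy sources, computed as (energy drawn) x (carbon emission intensity).
# All values below are illustrative assumptions.
BATTERY_KWH = 0.4          # hypothetical usable capacity of the robot battery
CHARGER_EFFICIENCY = 0.85  # hypothetical source-to-battery efficiency

# Hypothetical carbon emission intensities in kg CO2 per kWh generated,
# chosen to mimic the relative ratios mentioned in the abstract.
emission_intensity = {
    "coal": 0.75,
    "fuel": 0.58,
    "natural_gas": 0.49,
    "solar_pv": 0.05,  # manufacturing-phase emissions spread over lifetime
}

energy_from_source = BATTERY_KWH / CHARGER_EFFICIENCY
for source, intensity in emission_intensity.items():
    co2_g = 1000 * energy_from_source * intensity
    print(f"{source:12s}: {co2_g:6.1f} g CO2 per full recharge")
```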

Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy

Procedia PDF Downloads 398
203 Clastic Sequence Stratigraphy of Late Jurassic to Early Cretaceous Formations of Jaisalmer Basin, Rajasthan

Authors: Himanshu Kumar Gupta

Abstract:

The Jaisalmer Basin is one part of the Rajasthan basin in northwestern India. The presence of five major unconformities/hiatuses of varying span, i.e., at the top of the Archean basement, Cambrian, Jurassic, Cretaceous, and Eocene, has created the foundation for constructing a sequence stratigraphic framework. Based on basin-formative tectonic events and their impact on sedimentation processes, three first-order sequences have been identified in the Rajasthan Basin. These are the Proterozoic-Early Cambrian rift sequence, the Permian to Middle-Late Eocene shelf sequence, and the Pleistocene-Recent sequence related to the Himalayan Orogeny. The Permian to Middle Eocene first-order sequence is further subdivided into three second-order sequences, i.e., the Permian to Late Jurassic second-order sequence, the Early to Late Cretaceous second-order sequence, and the Paleocene to Middle-Late Eocene second-order sequence. In this study, the Late Jurassic to Early Cretaceous sequence was identified, and log-based interpretation of smaller-order T-R cycles has been carried out. A log profile from the eastern margin to the western margin (up to the Shahgarh depression) has been taken. The depositional environments penetrated by the wells, interpreted from log signatures, yielded three major facies associations: the blocky and coarsening-upward (funnel-shaped), the blocky and fining-upward (bell-shaped), and the erratic (zig-zag) facies, representing distributary mouth bar, distributary channel, and marine mud facies, respectively. The Late Jurassic formations (Baisakhi-Bhadasar) and the Early Cretaceous formation (Pariwar) show fewer T-R cycles in shallower bathymetry and more T-R cycles in deeper bathymetry. The shallowest well has 3 T-R cycles in the Baisakhi-Bhadasar and 2 T-R cycles in the Pariwar, whereas the deepest well has 4 T-R cycles in the Baisakhi-Bhadasar and 8 T-R cycles in the Pariwar Formation. The maximum flooding surfaces observed from the stratigraphic analysis indicate major shale breaks (high shale content). The study area is dominated by an alternation of shale and sand lithologies, which occur in an approximate ratio of 70:30. A seismo-geological cross section has been prepared to understand the stratigraphic thickness variation and structural disposition of the strata. The formations are quite thick to the west and thin towards the east. The folded and faulted strata indicate compressional tectonics followed by extensional tectonics. Our interpretation, supported by seismic data up to the second-order sequence, indicates that the Late Jurassic sequence is a Highstand Systems Tract (Baisakhi-Bhadasar formations) and the Early Cretaceous sequence is a Regressive to Lowstand Systems Tract (Pariwar Formation).

Keywords: Jaisalmer Basin, sequence stratigraphy, system tract, T-R cycle

Procedia PDF Downloads 111
202 The Impact of Speech Style on the Production of Spanish Vowels by Spanish-English Bilinguals and Spanish Monolinguals

Authors: Vivian Franco

Abstract:

There has been a great deal of research on vowel production by second language learners of Spanish, on vowel variation across Spanish dialects, and, more recently, on Spanish heritage speakers' vowel production as a function of speech style. However, little research has been reported on Spanish heritage speakers' vowel production in regard to task modality that incorporates comparison groups of monolinguals and late bilinguals. Thus, the present study investigates the influence of speech style on Spanish heritage speakers' vowel production by comparing Spanish-English early and late bilinguals and Spanish monolinguals. The study was guided by the following research question: How do early bilinguals (heritage speakers) differ from or relate to advanced L2 speakers of Spanish (late bilinguals) and Spanish monolinguals in their vowel quality (acoustic distribution) and quantity (duration) based on speech style? The participants were a total of 11 speakers of Spanish: 7 early Spanish-English bilinguals with a similar linguistic background (simultaneous bilinguals of the second generation), 2 advanced L2 speakers of Spanish, and 2 Spanish monolinguals from Mexico. The study consisted of two tasks. The first adopted a semi-spontaneous style through a solicited narration of life experiences and a description of a favorite movie, with the purpose of collecting spontaneous speech. The second task was a reading activity in which the participants read two paragraphs of a Mexican literary essay, 'La nuez.' This task aimed to obtain a more controlled speech style. From this study, it can be concluded that the early bilinguals and the monolinguals show a smaller formant vowel space overall compared to the late bilinguals in both speech styles. In terms of formant values by stress, the early bilinguals and the late bilinguals resembled each other in the semi-spontaneous speech style, as their unstressed vowel space overlapped with their stressed vowel space, unlike the monolinguals, who displayed a slightly reduced unstressed vowel space. For the controlled data, the early bilinguals were similar to the monolinguals, as their stressed and unstressed vowel spaces overlapped, in comparison to the late bilinguals, who showed a clearer reduction of the unstressed vowel space. In regard to duration, the monolinguals showed longer vowel durations overall. However, the findings on duration by stress showed that the early bilinguals and the monolinguals remained stable, with shorter values for unstressed vowels in the semi-spontaneous data and longer durations in the controlled data, compared to the late bilinguals, who displayed the opposite results. These findings have implications for research on Spanish heritage speakers' and L2 Spanish vowels, as it has been frequently argued that Spanish bilinguals differ from Spanish monolinguals in their vowel reduction and centralized vowel space influenced by English. However, some Spanish varieties are characterized by vowel reduction, especially in certain phonetic contexts, so that some vowels present more weakening than others. Consequently, it would not be conclusive to affirm an English influence on the Spanish of these bilinguals.
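
Two of the measures mentioned above, the formant vowel space and vowel duration by stress, can be computed as in the sketch below. The formant and duration values are placeholders, not data from the study.

```python
# Minimal sketch: F1-F2 vowel space area (convex hull of vowel tokens)
# and mean vowel duration by stress. All values are placeholders.
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical (F1, F2) midpoint values in Hz for /a e i o u/ tokens.
tokens_f1_f2 = np.array([
    [700, 1400], [450, 1900], [320, 2300], [480, 950], [350, 800],
    [680, 1350], [430, 2000], [300, 2250], [500, 1000], [340, 850],
])
# For a 2-D point set, ConvexHull.volume is the enclosed area.
vowel_space_area = ConvexHull(tokens_f1_f2).volume

durations_ms = {"stressed": [82, 95, 78, 88], "unstressed": [61, 55, 66, 58]}
print(f"Vowel space area: {vowel_space_area:.0f} Hz^2")
for stress, values in durations_ms.items():
    print(f"Mean {stress} duration: {np.mean(values):.1f} ms")
```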

Keywords: Spanish-English bilinguals, Spanish monolinguals, spontaneous and controlled speech, vowel production

Procedia PDF Downloads 107
201 Identification and Characterization of Small Peptides Encoded by Small Open Reading Frames using Mass Spectrometry and Bioinformatics

Authors: Su Mon Saw, Joe Rothnagel

Abstract:

Short open reading frames (sORFs) located in the 5'UTR of mRNAs are known as uORFs. Characterization of uORF-encoded peptides (uPEPs), i.e., a subset of short open reading frame encoded peptides (sPEPs), and of their translational regulation leads to an understanding of the causes of genetic disease and of proteome complexity, and to the development of treatments. The existence of uPEPs within the cellular proteome can be detected by LC-MS/MS. Demonstrating that uORFs can be translated into uPEPs, and achieving uPEP identification, will allow the characterization of uPEPs: their structures, functions, subcellular localization, evolutionary maintenance (conservation in humans and other species), and abundance in cells. It is hypothesized that a subset of sORFs are translatable and that their encoded sPEPs are functional and endogenously expressed, contributing to the complexity of the eukaryotic cellular proteome. This project aimed to investigate whether sORFs encode functional peptides. Liquid chromatography-mass spectrometry (LC-MS) and bioinformatics were thus employed. Because sPEPs are probably of low abundance and small size, efficient peptide enrichment strategies for enriching small proteins and depleting the sub-proteome of large and abundant proteins are crucial for identifying sPEPs. Low molecular weight proteins were extracted from Human Embryonic Kidney (HEK293) cells using SDS-PAGE and from the secreted fraction of HEK293 cells using Strong Cation Exchange Chromatography (SCX). Extracted proteins were digested by trypsin into peptides, which were detected by LC-MS/MS. The MS/MS data obtained were searched against Swiss-Prot using MASCOT version 2.4 to filter out known proteins, and all unmatched spectra were re-searched against the human RefSeq database. ProteinPilot v5.0.1 was used to identify sPEPs by searching against the human RefSeq, Vanderperre and Human Alternative Open Reading Frame (HaltORF) databases. Potential sPEPs were analyzed by bioinformatics. Since SDS-PAGE could not resolve proteins <20 kDa, it could not be used to identify sPEPs. All MASCOT-identified peptide fragments were parts of main open reading frames (mORFs) according to ORF Finder and blastp searches. No sPEP was detected, and the existence of sPEPs could therefore not be confirmed in this study. Thirteen sORFs shown in previous studies to be translated in HEK293 cells by mass spectrometry were characterized by bioinformatics. The sPEPs identified in previous studies were <100 amino acids and <15 kDa. The bioinformatics results showed that sORFs are translated into sPEPs and contribute to proteome complexity. The uPEP translated from the uORF of SLC35A4 was strongly conserved between human and mouse, while the uPEP translated from the uORF of MKKS was strongly conserved between human and Rhesus monkey. Cross-species conserved uORFs associated with protein translation strongly suggest evolutionary maintenance of the coding sequence and indicate probable functional expression of the peptides encoded within these uORFs. Translation of sORFs was thus supported by mass spectrometry from previous studies, and sPEPs were characterized with bioinformatics.
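
As an illustration of the uORF concept used above, the sketch below scans a 5'UTR sequence for short ORFs (ATG to in-frame stop) and translates them into candidate uPEPs. Biopython is assumed to be available, and the example sequence and length cut-off are illustrative, not taken from the study.

```python
# Minimal sketch of a uORF scan: find short open reading frames
# (ATG ... stop) in a 5'UTR and translate them into candidate uPEPs.
# The example sequence and the <100-aa cut-off are illustrative.
from Bio.Seq import Seq

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_uorfs(utr5: str, max_peptide_len: int = 100):
    """Return (start_position, peptide) for each uORF found in the 5'UTR."""
    utr5 = utr5.upper()
    uorfs = []
    for start in range(len(utr5) - 2):
        if utr5[start:start + 3] != "ATG":
            continue
        # Walk codon by codon until an in-frame stop codon is reached.
        for pos in range(start + 3, len(utr5) - 2, 3):
            if utr5[pos:pos + 3] in STOP_CODONS:
                orf_nt = utr5[start:pos + 3]
                peptide = str(Seq(orf_nt).translate(to_stop=True))
                if len(peptide) <= max_peptide_len:
                    uorfs.append((start, peptide))
                break
    return uorfs

# Hypothetical 5'UTR fragment containing one short uORF.
example_utr = "GCCGCCATGGCTTCTAAACCGTGAGGCGGC"
print(find_uorfs(example_utr))  # [(6, 'MASKP')]
```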

Keywords: bioinformatics, HEK293 cells, liquid chromatography-mass spectrometry, ProteinPilot, Strong Cation Exchange Chromatography, SDS-PAGE, sPEPs

Procedia PDF Downloads 164
200 Educational Infrastructure a Barrier for Teaching and Learning Architecture

Authors: Alejandra Torres-Landa López

Abstract:

Introduction: Can architecture students be creative in spaces shaped by an educational infrastructure built on paradigms of the past? This question and related ones are answered in this paper, which presents the PhD research 'An anthropic conflict in Mexican Higher Education Institutes: problems and challenges of the educational infrastructure in teaching and learning History of Architecture'. This research was finished in 2013 and is one of the first studies conducted nationwide in Mexico that analyzes the impact of educational infrastructure on learning architecture; its objective was to identify which elements of the educational infrastructure of the Mexican Higher Education Institutes where architects are trained hinder or contribute to the teaching and learning of History of Architecture, and how and why this happens. The methodology: A mixed methodology was used, combining quantitative and qualitative analysis. Different resources and strategies for data collection were used, such as questionnaires for students and teachers, interviews with architecture research experts, and direct observations in architecture classes, among others; the data collected were analyzed using SPSS and MAXQDA. The reliability of the quantitative data was supported by a Cronbach's alpha coefficient of 0.86, a figure that gives the data sufficient support. All the above made it possible to confirm the anthropic conflict in which Mexican universities find themselves. Major findings of the study: Although some of the findings were probably not unknown, they had not been systematized and analyzed with the depth achieved in this research. It can thus be said that the educational infrastructure of most of the Higher Education Institutes studied is a barrier to the educational process. Some of the reasons are the little morphological variation of space; the inadequate control of lighting, noise, temperature, equipment, and furniture; the poor or nonexistent accessibility for disabled people; and the absence, obsolescence, and/or insufficiency of information technologies. These issues generate an anthropic conflict, understood as the difficulty teachers and students have in relating to each other in order to achieve significant learning. It is clear that most of the educational infrastructure of Mexican Higher Education Institutes is anchored to paradigms of the past; it seems to respond to the previous era of industrialization. The results confirm that the educational infrastructure of the Mexican Higher Education Institutes where architects are trained is perceived as a 'closed container' of people and data, an infrastructure that becomes a barrier to the teaching and learning process. Conclusion: The research results show that it is time to change the paradigm in which we conceive educational infrastructure. It is time to stop seeing it only as classrooms, workshops, laboratories, and libraries; it must be seen from a constructive, urban, architectural, and human point of view, taking into account its different dimensions: physical, technological, documental, and social, among others. In this way, educational infrastructure can become a set of elements that organize and create spaces where ideas and thoughts can be shared, and act as a social catalyst where people can interact with each other and with the space itself.
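
The reliability check mentioned above is Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below computes it from a questionnaire response matrix; the responses shown are placeholders, not data from the study.

```python
# Minimal sketch: Cronbach's alpha from questionnaire item scores
# (rows = respondents, columns = items). The response matrix is a placeholder.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                       # number of items
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical Likert-scale responses (5 respondents x 4 items).
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```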

Keywords: educational infrastructure, impact of space in learning architecture outcomes, learning environments, teaching architecture, learning architecture

Procedia PDF Downloads 382
199 Life-Cycle Assessment of Residential Buildings: Addressing the Influence of Commuting

Authors: J. Bastos, P. Marques, S. Batterman, F. Freire

Abstract:

Due to the demands of a growing urban population, it is crucial to manage urban development and its associated environmental impacts. While most environmental analyses have addressed buildings and transportation separately, both the design and the location of a building affect environmental performance, and focusing on one or the other can shift impacts and overlook improvement opportunities for more sustainable urban development. Recently, several life-cycle (LC) studies of residential buildings have integrated user transportation, focusing exclusively on primary energy demand and/or greenhouse gas emissions. Additionally, most papers considered only private transportation (mainly the car). Although it is likely to have the largest share both in terms of use and associated impacts, exploring the variability associated with mode choice is relevant for comprehensive assessments and, eventually, for supporting decision-makers. This paper presents a life-cycle assessment (LCA) of a residential building in Lisbon (Portugal), addressing building construction, use, and user transportation (commuting with private and public transportation). Five environmental indicators or categories are considered: (i) non-renewable primary energy (NRE), (ii) greenhouse gas intensity (GHG), (iii) eutrophication (EUT), (iv) acidification (ACID), and (v) ozone layer depletion (OLD). In a first stage, the analysis addresses the overall life-cycle considering the statistical modal mix for commuting in the residence location. Then, a comparative analysis of the different available transportation modes addresses the influence that mode-choice variability has on the results. The results highlight the large contribution of transportation to the overall LC results in all categories. NRE and GHG show strong correlation, as the three LC phases contribute with similar shares to both of them: building construction accounts for 6-9%, building use for 44-45%, and user transportation for 48% of the overall results. However, for the other impact categories there is a large variation in the relative contribution of each phase. Transport is the most significant phase in OLD (60%); however, in EUT and ACID building use has the largest contribution to the overall LC (55% and 64%, respectively). In these categories, transportation accounts for 31-38%. A comparative analysis was also performed for four alternative transport modes for household commuting: car, bus, motorcycle, and company/school collective transport. The car shows the largest results in all impact categories. When compared to the overall LC with commuting by car, mode choice accounts for a variability of about 35% in NRE, GHG, and OLD (the categories where transportation accounted for the largest share of the LC), 24% in EUT, and 16% in ACID. NRE and GHG show a strong correlation because all modes have internal combustion engines. The second largest results for NRE, GHG, and OLD are associated with commuting by motorcycle; however, for ACID and EUT this mode performs better than the bus and company/school transport. No single transportation mode performed best in all impact categories. Integrated assessments of buildings are needed to avoid shifting impacts between life-cycle phases and environmental categories, and ultimately to support decision-makers.
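
The phase shares reported above are simply each phase's impact divided by the category total. The sketch below illustrates that aggregation; the impact values are placeholders chosen only to mimic the reported shares, not results from the study.

```python
# Minimal sketch: relative contribution of each life-cycle phase per impact
# category. The values are placeholders roughly mimicking the reported shares.
lc_results = {
    # category: {phase: impact in arbitrary units}
    "NRE":  {"construction": 8,  "use": 44, "transport": 48},
    "GHG":  {"construction": 7,  "use": 45, "transport": 48},
    "OLD":  {"construction": 5,  "use": 35, "transport": 60},
    "EUT":  {"construction": 14, "use": 55, "transport": 31},
    "ACID": {"construction": 5,  "use": 64, "transport": 31},
}

for category, phases in lc_results.items():
    total = sum(phases.values())
    shares = ", ".join(f"{phase}: {100 * value / total:.0f}%"
                       for phase, value in phases.items())
    print(f"{category}: {shares}")
```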

Keywords: environmental impacts, LCA, Lisbon, transport

Procedia PDF Downloads 334
198 Techno-Economic Analysis of 1,3-Butadiene and ε-Caprolactam Production from C6 Sugars

Authors: Iris Vural Gursel, Jonathan Moncada, Ernst Worrell, Andrea Ramirez

Abstract:

In order to achieve the transition from a fossil-based to a bio-based economy, biomass needs to replace fossil resources in meeting the world's energy and chemical needs. This calls for the development of biorefinery systems allowing cost-efficient conversion of biomass to chemicals. In biorefinery systems, the feedstock is converted to key intermediates called platforms, which are converted to a wide range of marketable products. The C6 sugars platform stands out due to its unique versatility as a precursor for multiple valuable products. Among the different potential routes from C6 sugars to bio-based chemicals, 1,3-butadiene and ε-caprolactam appear to be of great interest. Butadiene is an important chemical for the production of synthetic rubbers, while caprolactam is used in the production of nylon-6. In this study, the ex-ante techno-economic performance of the 1,3-butadiene and ε-caprolactam routes from C6 sugars was assessed. The aim is to provide insight, from an early stage of development, into the potential of these new technologies and their bottlenecks and key cost-drivers. Two cases for each product line were analyzed to take into consideration the effect of possible changes on the overall performance of both butadiene and caprolactam production. A conceptual process design for the processes was developed using Aspen Plus based on currently available data from laboratory experiments. Then, operating and capital costs were estimated, and an economic assessment was carried out using Net Present Value (NPV) as the indicator. Finally, sensitivity analyses on processing capacity and prices were carried out to take into account possible variations. The results indicate that both processes perform similarly from an energy intensity point of view, ranging between 34 and 50 MJ per kg of main product. However, in terms of processing yield (kg of product per kg of C6 sugar), caprolactam shows a higher yield by a factor of 1.6-3.6 compared to butadiene. For butadiene production, with the economic parameters used in this study, a negative NPV (-642 and -647 M€) was obtained for both cases, indicating economic infeasibility. For caprolactam production, one of the cases also showed economic infeasibility (-229 M€), but the case with the higher caprolactam yield resulted in a positive NPV (67 M€). The sensitivity analysis indicated that the economic performance of caprolactam production can be improved by increasing the capacity (higher C6 sugar intake), reflecting the benefits of economies of scale. Furthermore, humins valorization for heat and power production was considered and found to have a positive effect. Butadiene production was found to be sensitive to the prices of the C6 sugar feedstock and the butadiene product. However, even at 100% variation of these two parameters, butadiene production remained economically infeasible. Overall, the caprolactam production line shows higher economic potential than that of butadiene. The results are useful in guiding experimental research and providing direction for further development of bio-based chemicals.
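
The NPV indicator used above discounts future net cash flows against the initial investment, NPV = -CAPEX + sum over t of CF_t / (1 + r)^t. The sketch below shows this together with a simple price sensitivity; the capital cost, cash flow, lifetime, and discount rate are hypothetical, not figures from the assessment.

```python
# Minimal sketch of the NPV indicator with a simple sensitivity analysis.
# All economic inputs below are hypothetical.
def npv(capex: float, annual_cash_flow: float, lifetime_years: int,
        discount_rate: float) -> float:
    discounted = sum(annual_cash_flow / (1 + discount_rate) ** t
                     for t in range(1, lifetime_years + 1))
    return -capex + discounted

# Hypothetical base case: 300 M EUR investment, 25 M EUR/yr net revenue,
# 20-year lifetime, 10% discount rate.
base = npv(capex=300.0, annual_cash_flow=25.0, lifetime_years=20,
           discount_rate=0.10)
print(f"Base-case NPV: {base:.1f} M EUR")

# Sensitivity: +/-20% on the annual net cash flow (e.g., feedstock or
# product price variation).
for factor in (0.8, 1.0, 1.2):
    value = npv(300.0, 25.0 * factor, 20, 0.10)
    print(f"Cash flow x{factor:.1f}: NPV = {value:.1f} M EUR")
```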

Keywords: bio-based chemicals, biorefinery, C6 sugars, economic analysis, process modelling

Procedia PDF Downloads 129
197 Acquisition of Murcian Lexicon and Morphology by L2 Spanish Immigrants: The Role of Social Networks

Authors: Andrea Hernandez Hurtado

Abstract:

Research on social networks (SNs), the interactions individuals share with others, has shed important light on the differential use of variable linguistic forms, both in L1s and L2s. Nevertheless, the acquisition of nonstandard L2 Spanish in the Region of Murcia, Spain, and how learners interact with other speakers while sojourning there, have received little attention. Murcian Spanish (MuSp) was widely influenced by Panocho, a divergent evolution of Hispanic Latin, and differs from the more standard Peninsular Spanish (StSp) in phonology, morphology, and lexicon. For instance, speakers from this area will most likely palatalize diminutive endings, producing animalico [̩a.ni.ma.ˈli.ko] instead of animalito [̩a.ni.ma.ˈli.to] 'little animal'. Because L1 speakers of the area produce and prefer salient regional lexicon and morphology (particularly the palatalized diminutive -ico) in their speech, the current research focuses on how international residents in the Region of Murcia use Spanish: (1) whether or not they acquire (perceptively and/or productively) any of the salient regional features of MuSp, and (2) how their SNs explain such acquisition. This study triangulates across three tasks (recognition, production, and preference) addressing both lexicon and morphology, with each task specifically created for the investigation of MuSp features. Among other variables, the effects of L1, residence, and identity are considered. As this is ongoing dissertation research, data are currently being gathered through an online questionnaire. So far, 7 participants from multiple nationalities have completed the survey, although a minimum of 25 are expected to be included in the coming months. Preliminary results revealed that MuSp lexicon and morphology were successfully recognized by participants (p<.001). In terms of regional lexicon production (10.0%) and preference (47.5%), although participants showed higher percentages of StSp, the results showed that international residents become aware of stigmatized lexicon and may incorporate it into their language use. Similarly, palatalized diminutives (production 14.2%, preference 19.0%) were present in their responses. The Social Network Analysis provided information about participants' relationships with their interactants, as well as among the interactants themselves. The results indicated that, generally, when residents were more immersed in the culture (i.e., had more Murcian alters), they produced and preferred more regional features. This project contributes to the knowledge of language variation acquisition in L2 speakers, focusing on a stigmatized Spanish dialect and exploring how stigmatized varieties may affect L2 development. The results will show how L2 Spanish speakers' language is affected by their stay in Murcia. This, in turn, will shed light on the role of SNs in language acquisition, the acquisition of understudied and marginalized varieties, and the role of immersion in language acquisition. As the first systematic account of the acquisition of L2 Spanish lexicon and morphology in the Region of Murcia, it lays important groundwork for further research on the connection between SNs and the acquisition of regional variants, applicable to Murcia and beyond.

Keywords: international residents, L2 Spanish, lexicon, morphology, nonstandard language acquisition, social networks

Procedia PDF Downloads 47
196 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf

Authors: Abderrazak Bannari, Ghadeer Kadhem

Abstract:

The objectives of this paper are the validation and evaluation of the performance of MBES-CARIS BASE surface data for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline, and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00' N, 50° 33' E). To achieve our objectives, bathymetric attributed grid files (X, Y, and depth) generated from the coverage of ship-track MBES data with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). They were then brought into ArcGIS and converted into a raster format following five steps: exportation of the GEBCO BASE surface data to an ASCII file; conversion of the ASCII file to a point shapefile; extraction of the points covering the water boundary of the Kingdom of Bahrain; multiplication of the depth values by -1 to obtain negative values; and, finally, interpolation with the simple kriging method in the ArcMap environment to generate a new raster bathymetric grid surface of 30×30 m cells, which was the basis of the subsequent analysis. Finally, for validation purposes, 2200 bathymetric points were extracted from a medium-scale nautical map (1:100 000), considering different depths over the Bahrain national water boundary. The nautical map was scanned, georeferenced, and overlaid on the MBES-CARIS generated raster bathymetric grid surface (step 5 above), and then homologous depth points were selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong correlation (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical map and the derived MBES-CARIS depths if we consider only the shallow areas with depths of less than 10 m (about 800 validation points). When we consider only the deeper areas (> 10 m), the correlation coefficient is equal to 0.73 and the RMSE is equal to ± 2.43 m, while if we consider the totality of the 2200 validation points, including all depths, the correlation is still significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). Certainly, this significant variation can be caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and the rough seafloor probably affect the acquired MBES raw data, and the interpolation of missing values between MBES acquisition swath lines (ship-tracked sounding data) may not reflect the true depths of these missed areas. However, globally, the MBES-CARIS data are very appropriate for bathymetric mapping of shallow water areas.
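
The validation step described above reduces to comparing chart depths with grid depths at homologous points via R² and RMSE, as in the sketch below. The depth arrays are placeholders, not the 2200 validation points of the study.

```python
# Minimal sketch: compare nautical-map depths with depths sampled from the
# kriged raster at the same locations using R^2 and RMSE. Placeholder data.
import numpy as np

chart_depths = np.array([-2.1, -4.8, -7.5, -9.3, -12.6, -15.4])  # m, hypothetical
grid_depths  = np.array([-2.4, -4.5, -7.9, -9.0, -13.8, -14.1])  # m, hypothetical

rmse = np.sqrt(np.mean((chart_depths - grid_depths) ** 2))
r = np.corrcoef(chart_depths, grid_depths)[0, 1]
print(f"All points: RMSE = {rmse:.2f} m, R^2 = {r**2:.2f}")

# The same comparison restricted to shallow water (< 10 m), as in the study.
shallow = np.abs(chart_depths) < 10
rmse_shallow = np.sqrt(np.mean((chart_depths[shallow] - grid_depths[shallow]) ** 2))
print(f"Shallow water: RMSE = {rmse_shallow:.2f} m")
```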

Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water

Procedia PDF Downloads 360
195 Against the Philosophical-Scientific Racial Project of Biologizing Race

Authors: Anthony F. Peressini

Abstract:

The concept of race has recently come prominently back into discussion in the context of medicine and medical science, along with renewed efforts to biologize racial concepts. This paper argues that these renewed efforts to biologize race by way of medicine and population genetics fail on their own terms, and, more importantly, that the philosophical project of biologizing race ought to be recognized for what it is, a retrograde racial project, and abandoned. There is clear agreement that standard racial categories and concepts cannot be grounded in the old way of racial naturalism, which understands race as a real, interest-independent biological/metaphysical category whose members share "physical, moral, intellectual, and cultural characteristics." But equally clear is the very real and pervasive presence of racial concepts in individual and collective consciousness and behavior, and so it remains a pressing area in which to seek deeper understanding. Recent philosophical work has endeavored to reconcile these two observations by developing a "thin" conception of race, grounded in scientific concepts but without the moral and metaphysical content. Such "thin," science-based analyses take the "commonsense" or "folk" sense of race as it functions in contemporary society as the starting point for their philosophic-scientific projects to biologize racial concepts. A "philosophic-scientific analysis" is a special case of the cornerstone of analytic philosophy: a conceptual analysis, that is, a rendering of a concept into the more perspicuous concepts that constitute it. Thus a philosophic-scientific account of a concept is an attempt to work out an analysis of a concept that makes use of empirical science's insights to ground, legitimate, and explicate the target concept in terms of clearer concepts informed by empirical results. The focus in this paper is on three recent philosophic-scientific cases for retaining "race" that all share this general analytic schema but that make use of "medical necessity," population genetics, and human genetic clustering, respectively. After arguing that each of these three approaches suffers from internal difficulties, the paper considers the general analytic schema employed by such biologizations of race. While such endeavors are inevitably prefaced with the disclaimer that the theory to follow is non-essentialist and non-racialist, the case will be made that such efforts are not neutral scientific or philosophical projects but rather are what sociologists call a racial project, that is, one of many competing efforts that conjoin a representation of what race means to specific efforts to determine social and institutional arrangements of power, resources, authority, etc. Accordingly, philosophic-scientific biologizations of race, since they begin from and condition their analyses on "folk" conceptions, cannot pretend to be "prior to" other disciplinary insights, nor to transcend the social-political dynamics involved in formulating theories of race. As a result, such traditional philosophical efforts can be seen to be disciplinarily parochial and to address only a caricature of a large and important human problem, thereby further contributing to the unfortunate isolation of philosophical thinking about race from other disciplines.

Keywords: population genetics, ontology of race, race-based medicine, racial formation theory, racial projects, racism, social construction

Procedia PDF Downloads 238
194 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability

Authors: Akshay B. Pawar, Rohit Y. Parasnis

Abstract:

Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmographic (PPG) signals than ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV and have compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval-based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in, or alternatives to, the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method), which has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method), and the tangent-intersection interval method (TI method), were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through digital filters. After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time-domain parameters (SDNN, pNN50, and RMSSD) and frequency-domain parameters (very low-frequency power (VLF), low-frequency power (LF), high-frequency power (HF), and total power or 'TP'). In addition, Poincaré plots were plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (more than 93% correlation with the standard method) and the PP method (mean correlation: 88%), whereas the TI, VV, and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
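
The time-domain and Poincaré parameters named above can be computed from any interbeat interval series, whether it comes from ECG R-peaks or from one of the PPG fiducial points. The sketch below shows the standard definitions of SDNN, RMSSD, pNN50, and SD1/SD2; the interval values are placeholders.

```python
# Minimal sketch of time-domain and Poincare HRV parameters computed from a
# series of interbeat intervals in milliseconds. Interval values are placeholders.
import numpy as np

def hrv_time_domain(intervals_ms: np.ndarray) -> dict:
    intervals_ms = np.asarray(intervals_ms, dtype=float)
    diffs = np.diff(intervals_ms)
    sdnn = intervals_ms.std(ddof=1)                    # overall variability
    rmssd = np.sqrt(np.mean(diffs ** 2))               # short-term variability
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)      # % successive diffs > 50 ms
    sdsd = diffs.std(ddof=1)
    # Poincare descriptors: SD1 (short-term), SD2 (long-term).
    sd1 = np.sqrt(0.5) * sdsd
    sd2 = np.sqrt(max(2 * sdnn ** 2 - 0.5 * sdsd ** 2, 0.0))
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50, "SD1/SD2": sd1 / sd2}

# Hypothetical interbeat intervals (ms).
rr = np.array([812, 845, 790, 860, 830, 805, 870, 825, 840, 815])
print({k: round(v, 2) for k, v in hrv_time_domain(rr).items()})
```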

Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot

Procedia PDF Downloads 294
193 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to the fast economic growth of the last ten years. Bogotá has been affected by high pollution events which led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2, and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards the K-means clustering technique was implemented to corroborate the relations found previously and to find patterns in the data. PCA was also used on a per-shift basis (morning, afternoon, night, and early morning) to validate possible variation of the previous trends and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the most influential factors on PM10 concentrations. Furthermore, it was confirmed that high humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years. Also, in rainy periods (March-June and September-December), some trends regarding precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data and showed similar conditions and data distributions among the Carvajal, Tunal, and Puente Aranda stations, as well as between Parque Simon Bolivar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding their distribution. The discovery of patterns in the data allows using these clusters as an input to an Artificial Neural Network prediction model.
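
A minimal sketch of the two techniques used above (PCA for preliminary relations, then K-means on the standardized data) is shown below using scikit-learn. The CSV file name and column names are assumptions for illustration, not the actual monitoring-network export.

```python
# Minimal sketch: PCA to inspect relations between meteorological variables
# and pollutants, followed by K-means clustering on standardized data.
# The file name and column names are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

columns = ["PM10", "PM2.5", "CO", "SO2", "NO2", "O3",
           "wind_speed", "wind_direction", "temperature", "humidity"]
df = pd.read_csv("bogota_air_quality_2010_2015.csv")[columns].dropna()

# Standardize so variables with different units are comparable.
X = StandardScaler().fit_transform(df)

# PCA: the loadings of the first components indicate primary relations.
pca = PCA(n_components=3)
pca.fit(X)
loadings = pd.DataFrame(pca.components_.T, index=columns,
                        columns=["PC1", "PC2", "PC3"])
print("Explained variance ratio:", pca.explained_variance_ratio_.round(2))
print(loadings.round(2))

# K-means on the same standardized data to look for recurring patterns.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
df["cluster"] = kmeans.labels_
print(df.groupby("cluster").mean().round(1))
```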

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 235