Search results for: Ana M. Guzman
13 Gas-Phase Nondestructive and Environmentally Friendly Covalent Functionalization of Graphene Oxide Paper with Amines
Authors: Natalia Alzate-Carvajal, Diego A. Acevedo-Guzman, Victor Meza-Laguna, Mario H. Farias, Luis A. Perez-Rey, Edgar Abarca-Morales, Victor A. Garcia-Ramirez, Vladimir A. Basiuk, Elena V. Basiuk
Abstract:
Direct covalent functionalization of prefabricated free-standing graphene oxide paper (GOP) is considered the only approach suitable for the systematic tuning of the thermal, mechanical and electronic characteristics of this important class of carbon nanomaterials. At the same time, traditional liquid-phase functionalization protocols can compromise the physical integrity of the paper-like material, up to its total disintegration. To avoid such undesirable effects, we explored the possibility of employing an alternative, solvent-free strategy for the facile and nondestructive functionalization of GOP with two representative aliphatic amines, 1-octadecylamine (ODA) and 1,12-diaminododecane (DAD), as well as with two aromatic amines, 1-aminopyrene (AP) and 1,5-diaminonaphthalene (DAN). The functionalization was performed under moderate heating at 150-180 °C in vacuum. Under such conditions, it proceeds through both amidation and epoxy ring-opening reactions. Comparative characterization of pristine and amine-functionalized GOP mats was carried out using Fourier-transform infrared, Raman, and X-ray photoelectron spectroscopy (XPS), thermogravimetric (TGA) and differential thermal analysis, and scanning electron and atomic force microscopy (SEM and AFM, respectively). In addition, we compared the stability in water, wettability, electrical conductivity and elastic (Young's) modulus of GOP mats before and after amine functionalization. The highest content of organic species was obtained in the case of GOP-ODA, followed by the GOP-DAD, GOP-AP and GOP-DAN samples. The covalent functionalization increased the mechanical and thermal stability of GOP, as well as its electrical conductivity. The magnitude of each effect depends on the particular chemical structure of the amine employed, which allows for tuning a given GOP property. Morphological characterization by SEM showed that, compared to pristine graphene oxide paper, amine-modified GOP mats become relatively ordered layered assemblies, in which individual GO sheets are organized in a near-parallel pattern. Financial support from the National Autonomous University of Mexico (grants DGAPA-IN101118 and IN200516) and from the National Council of Science and Technology of Mexico (CONACYT, grant 250655) is greatly appreciated. The authors also thank David A. Domínguez (CNyN of UNAM) for XPS measurements and Dr. Edgar Alvarez-Zauco (Faculty of Science of UNAM) for the opportunity to use the TGA equipment.
Keywords: amines, covalent functionalization, gas-phase, graphene oxide paper
Procedia PDF Downloads 181
12 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues:
- Resource fragmentation due to the virtual machine boundary.
- Poor communication performance between microservices.
Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding deploys a1 and b1 on machine m1, and a2 and b2 on a different machine m2; this configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B (see the sketch below). Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the security concern above, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
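As an illustration of the embedding example above, the following minimal Python sketch pairs the instances of two communicating microservices onto shared machines. It is not the paper's i2kit prototype; the function and machine names are invented for illustration.

```python
from itertools import zip_longest

def embed(service_a, service_b):
    """Pair instances of two communicating microservices so that each pair
    shares a machine and talks over localhost, removing the need for a
    load balancer between them. Returns a machine -> instances placement."""
    placement = {}
    for idx, (a, b) in enumerate(zip_longest(service_a, service_b), start=1):
        machine = f"m{idx}"
        placement[machine] = [inst for inst in (a, b) if inst is not None]
    return placement

# The example from the abstract: A = {a1, a2} communicates with B = {b1, b2}.
print(embed(["a1", "a2"], ["b1", "b2"]))
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```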
Procedia PDF Downloads 203
11 Molecular Detection of mRNA bcr-abl and Circulating Leukemic Stem Cells CD34+ in Patients with Acute Lymphoblastic Leukemia and Chronic Myeloid Leukemia and Its Association with Clinical Parameters
Authors: B. Gonzalez-Yebra, H. Barajas, P. Palomares, M. Hernandez, O. Torres, M. Ayala, A. L. González, G. Vazquez-Ortiz, M. L. Guzman
Abstract:
Leukemia arises from molecular alterations of the normal hematopoietic stem cell (HSC), transforming it into a leukemic stem cell (LSC) with high cell proliferation, self-renewal, and cell differentiation. Chronic myeloid leukemia (CML) originates from an LSC, leading to elevated proliferation of myeloid cells, and acute lymphoblastic leukemia (ALL) originates from an LSC, leading to elevated proliferation of lymphoid cells. In both cases, LSC can be identified by multicolor flow cytometry using several antibodies. However, to date, LSC levels in peripheral blood (PB) are not well established in ALL and CML patients. On the other hand, the detection of minimal residual disease (MRD) in leukemia is mainly based on the identification of the mRNA bcr-abl gene in CML patients and some other genes in ALL patients. There is no suitable biomarker to detect MRD in both types of leukemia. The objective of this study was to determine mRNA bcr-abl and the percentage of LSC in the peripheral blood of patients with CML and ALL, and to identify a possible association between the amount of LSC in PB and clinical data. We included 19 patients with leukemia in this study. A PB sample was collected per patient, and leukocytes were obtained by Ficoll gradient. Immunophenotyping for CD34+ LSC was done by flow cytometry analysis with CD33, CD2, CD14, CD16, CD64, HLA-DR, CD13, CD15, CD19, CD10, CD20, CD34, CD38, CD71, CD90, CD117 and CD123 monoclonal antibodies. In addition, to identify the presence of mRNA bcr-abl by RT-PCR, RNA was isolated using TRIZOL reagent. Molecular results (presence of mRNA bcr-abl and CD34+ LSC) and clinical results were analyzed with descriptive statistics, and a multiple regression analysis was performed to determine statistically significant associations. In total, 19 patients (8 with ALL and 11 with CML) were analyzed: 9 patients with de novo leukemia (ALL = 6 and CML = 3) and 10 under treatment (ALL = 5 and CML = 5). The overall frequency of mRNA bcr-abl was 31% (6/19); it was negative in ALL patients and positive in 80% of CML patients. On the other hand, LSC were detected in 16/19 leukemia patients (%LSC = 0.02-17.3). The de novo patients had a higher percentage of LSC (0.26 to 17.3%) than patients under treatment (0 to 5.93%). The variables significantly associated with the amount of LSC were absence of treatment, absence of splenomegaly, and a lower number of leukocytes; no significant association was found for the clinical variables age, sex, blasts, and mRNA bcr-abl. In conclusion, patients with de novo leukemia had a higher percentage of circulating LSC than patients under treatment, and this was associated with clinical parameters such as lack of treatment, absence of splenomegaly and a lower number of leukocytes. mRNA bcr-abl detection was only possible in the series of patients with CML, while LSC could be identified in the peripheral blood of all leukemia patients; we believe the identification of circulating LSC may be used as a biomarker for the detection of MRD in leukemia patients.
Keywords: stem cells, leukemia, biomarkers, flow cytometry
Procedia PDF Downloads 356
10 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were then examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models highly depends on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE. In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction; in the PV case study, it generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a result more accurate than those generated by the life cycle methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
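To ground the terminology, a Mamdani-type inference step can be written in a few lines of Python (Sugeno-type models replace the output fuzzy sets with functions of the inputs). The membership ranges, rules, and output universe below are illustrative placeholders, not the study's calibrated model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_gwp(irradiation, efficiency):
    """Toy two-input Mamdani inference for a GWP-like score.
    Rule 1: low irradiation OR low efficiency  -> high GWP
    Rule 2: high irradiation AND high efficiency -> low GWP"""
    irr_low  = tri(irradiation, 800, 1200, 1600)
    irr_high = tri(irradiation, 1400, 1800, 2200)
    eff_low  = tri(efficiency, 0.10, 0.13, 0.16)
    eff_high = tri(efficiency, 0.15, 0.18, 0.21)

    gwp = np.linspace(20, 80, 601)        # output universe, g CO2-eq/kWh
    high_out = tri(gwp, 50, 70, 80)       # "high GWP" output set
    low_out  = tri(gwp, 20, 30, 50)       # "low GWP" output set

    r1 = max(irr_low, eff_low)            # fuzzy OR -> max
    r2 = min(irr_high, eff_high)          # fuzzy AND -> min
    # Clip each output set by its rule strength, aggregate, defuzzify (centroid)
    agg = np.maximum(np.minimum(r1, high_out), np.minimum(r2, low_out))
    return np.sum(gwp * agg) / np.sum(agg)

print(round(mamdani_gwp(irradiation=1700, efficiency=0.18), 1))
```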
Procedia PDF Downloads 164
9 Risk and Coping: Understanding Community Responses to Calls for Disaster Evacuation in Central Philippines
Authors: Soledad Natalia M. Dalisay, Mylene De Guzman
Abstract:
In archipelagic countries like the Philippines, many communities thrive along coastal areas. The sea is the community members' main source of livelihood and the site of many cultural activities; for these communities, the sea is their life and livelihood. Nevertheless, the sea also poses a hazard during the rainy season, when typhoons frequent their communities. Coastal communities often encounter threats from the storm surges and flooding that are common when there are typhoons. During such periods, disaster evacuation programs are implemented. However, in many instances, evacuation has been the bane of local government officials implementing such programs in their communities, as resistance from community members is often encountered. Such resistance is often viewed by program implementers as evidence that people are hard-headed and ignorant of the potential impacts of living in hazard-prone areas. This paper argues that it is not for these reasons that people refuse to evacuate. Drawing from data collected during fieldwork in three sites in Central Philippines affected by super typhoon Haiyan, this study aimed to provide a contextualized understanding of people's refusal to heed disaster evacuation warnings. The study utilized a multi-sited ethnography approach, with in-depth episodic interviews, focus group discussions, participatory risk mapping and key informant interviews used to gather data on people's experiences and insights, specifically on evacuation during typhoon Haiyan. This study showed that people have priorities and considerations vital to their social lives that they are protecting when they refuse to leave their homes for pre-emptive evacuation. It is not that they are unaware of the risks when they face the hazard. It is more that they have faith in the local knowledge and strategies they have developed since the time of their ancestors, as a result of living and engaging with hazards in their areas for as long as they can remember. The study also revealed that risk in encounters with hazards was gendered. Furthermore, previous engagement with local government officials and the manner in which pre-emptive evacuation programs were implemented had cast doubt on the value of such programs in saving lives. Life in the designated evacuation areas can be as dangerous as, if not more dangerous than, living in their coastal homes. There seems to be an impression that in the government's evacuation program, people were being moved from hazard zones to death zones. Thus, this paper ends with several recommendations that may contribute to building more responsive evacuation programs that aim to build people's resilience while taking into consideration the local moral world of communities in identified hazard zones.
Keywords: coastal communities, disaster evacuation, disaster risk perception, social and cultural responses to hazards
Procedia PDF Downloads 337
8 Dietary Flaxseed Decreases Central Blood Pressure and the Concentrations of Plasma Oxylipins Associated with Hypertension in Patients with Peripheral Arterial Disease
Authors: Stephanie P. B. Caligiuri, Harold M. Aukema, Delfin Rodriguez-Leyva, Amir Ravandi, Randy Guzman, Grant N. Pierce
Abstract:
Background: Hypertension leads to cardiac and cerebral events and is therefore the leading risk factor attributed to death in the world. Oxylipins may be mediators in these events, as they can regulate vascular tone and inflammation. Oxylipins are derived from fatty acids. Dietary flaxseed is rich in the n3 fatty acid alpha-linolenic acid and, therefore, may have the ability to change the substrate profile of oxylipins and, as a result, alter blood pressure. Methods: A randomized, double-blinded, controlled clinical trial, the Flax-PAD trial, was used to assess the impact of dietary flaxseed on blood pressure (BP), and also to assess the relationship of plasma oxylipins to BP, in 81 patients with peripheral arterial disease (PAD). Patients with PAD were chosen for the clinical trial because they are at an increased risk for hypertension and cardiac and cerebral events. Thirty grams of ground flaxseed were added to food products consumed on a daily basis for 6 months. The control food products contained wheat germ, wheat bran, and mixed dietary oils instead of flaxseed. Central BP, which is more strongly associated with organ damage and cardiac and cerebral events than brachial BP, was measured by pulse wave analysis at baseline and 6 months. A plasma profile of 43 oxylipins was generated using solid phase extraction, HPLC-MS/MS, and stable isotope dilution quantitation. Results: At baseline, the central BP (systolic/diastolic) in the placebo and flaxseed groups was 131/73 ± 2.5/1.4 mmHg and 128/71 ± 2.6/1.4 mmHg, respectively. After 6 months of intervention, the flaxseed group exhibited a decrease in blood pressure of 4.0/1.0 mmHg. The 6-month central BP in the placebo and flaxseed groups was 132/74 ± 2.9/1.8 mmHg and 124/70 ± 2.6/1.6 mmHg, respectively (P<0.05). Correlation and logistic regression analyses between central blood pressure and oxylipins were performed. Significant associations were observed between central blood pressure and 17 oxylipins, primarily produced from arachidonic acid. Every 1 nM increase in 16-hydroxyeicosatetraenoic acid (HETE) increased the odds of having high central systolic BP by 15-fold, of having high central diastolic BP by 6-fold, and of having high central mean arterial pressure by 15-fold. In addition, every 1 nM increase in 5,6-dihydroxyeicosatrienoic acid (DHET) and 11,12-DHET increased the odds of having high central mean arterial pressure by 45- and 18-fold, respectively. Flaxseed induced a significant decrease in these as well as 4 other vasoconstrictive oxylipins. Conclusion: Dietary flaxseed significantly lowered blood pressure in patients with PAD and hypertension. Plasma oxylipins were strongly associated with central blood pressure and may have mediated the flaxseed-induced decrease in blood pressure.
Keywords: hypertension, flaxseed, oxylipins, peripheral arterial disease
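The per-nM odds ratios reported above follow from exponentiating logistic regression coefficients. A minimal Python sketch of that calculation on synthetic data (illustrative only, not the Flax-PAD measurements):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic oxylipin concentrations (nM) and a binary high-central-BP outcome,
# generated so that the true odds ratio is 15 per 1 nM increase.
rng = np.random.default_rng(0)
conc = rng.uniform(0, 3, 500)                       # stand-in for 16-HETE, nM
p = 1 / (1 + np.exp(-(-2.0 + np.log(15) * conc)))   # logistic model probabilities
y = rng.binomial(1, p)

X = sm.add_constant(conc)
fit = sm.Logit(y, X).fit(disp=0)
# exp(coefficient) = odds ratio per 1 nM increase; estimate should be near 15
print(np.exp(fit.params[1]))
```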
Procedia PDF Downloads 468
7 Design and Biomechanical Analysis of a Transtibial Prosthesis for Cyclists of the Colombian Team Paralympic
Authors: Jhonnatan Eduardo Zamudio Palacios, Oscar Leonardo Mosquera Dussan, Daniel Guzman Perez, Daniel Alfonso Botero Rosas, Oscar Fabian Rubiano Espinosa, Jose Antonio Garcia Torres, Ivan Dario Chavarro, Ivan Ramiro Rodriguez Camacho, Jaime Orlando Rodriguez
Abstract:
The training of cyclists with some type of disability finds an indispensable ally in technological development, which generates everyday advances that contribute to quality of life and allow athletes to maximize their capacities. The performance of a cyclist depends on physiological and biomechanical factors, such as the aerodynamic profile, bicycle measurements, crank length, pedaling systems, and type of competition, among others. This study particularly focuses on the description of the dynamic model of a transtibial prosthesis for Paralympic cyclists. To build the model, two points are chosen: the centers of rotation of the chainring and sprocket of the track bicycle. The parametric scheme of the track bike represents a model of 6 degrees of freedom, due to the displacement in X-Y of each of the reference points of the angle of the curve profile β, the cant of the velodrome α, and the angle of rotation of the crank φ. The force exerted on the crank of the bicycle varies according to the angle of the curve profile β, the cant of the velodrome α, and the angle of rotation of the crank φ. The behavior is analyzed using Matlab R2015a. The average force that a cyclist exerts on the cranks of a bicycle is 1,607.1 N, so the Paralympic cyclist must exert a force of about 803.6 N on each crank. Once the maximum force associated with the movement has been determined, we proceed to the dynamic modeling of the transtibial prosthesis, which represents a model of 6 degrees of freedom with displacement in X-Y in relation to the angles of rotation of the hip π, knee γ and ankle λ. Subsequently, an analysis of the kinematic behavior of the prosthesis was carried out by means of SolidWorks 2017 and Matlab R2015a, which were used to model and analyze the variation of the hip π, knee γ and ankle λ angles of the prosthesis. The reaction forces generated in the prosthesis were computed at the ankle of the prosthesis by summing the forces on the X and Y axes; the same analysis was then applied to the tibia of the prosthesis and to the socket (see the sketch below). The reaction force on the parts of the prosthesis varies according to the hip π, knee γ and ankle λ angles of the prosthesis. Therefore, it can be deduced that the maximum forces experienced by the ankle of the prosthesis are 933.6 N on the X axis and 2,160.5 N on the Y axis. Finally, it is calculated that the maximum forces experienced by the tibia and the socket of the transtibial prosthesis in high-performance competitions are 3,266 N on the X axis and 1,357 N on the Y axis. In conclusion, the performance of the cyclist depends on several physiological factors linked to the biomechanics of training, as well as on biomechanical factors such as aerodynamics, bicycle measurements, crank length, and non-circular pedaling systems.
Keywords: biomechanics, dynamic model, paralympic cyclist, transtibial prosthesis
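A minimal planar statics sketch of that force summation, using the per-crank force from the abstract. The single-contact assumption and the crank angle are illustrative placeholders, not the authors' 6-degree-of-freedom Matlab model.

```python
import numpy as np

def ankle_reaction(pedal_force, crank_angle_deg):
    """Static 2D force balance at the prosthetic ankle: the reaction must
    cancel the pedal load transmitted through the foot-crank contact."""
    phi = np.radians(crank_angle_deg)
    # Resolve the pedal load along X (horizontal) and Y (vertical)
    fx = pedal_force * np.sin(phi)
    fy = pedal_force * np.cos(phi)
    # Sum of forces = 0  =>  the ankle reaction opposes the applied load
    return -fx, -fy

# 803.6 N per crank, as stated in the abstract; 30 degrees is illustrative
rx, ry = ankle_reaction(pedal_force=803.6, crank_angle_deg=30)
print(f"ankle reaction: Rx = {rx:.1f} N, Ry = {ry:.1f} N")
```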
Procedia PDF Downloads 341
6 Microplastic Concentrations in Cultured Oyster in Two Bays of Baja California, Mexico
Authors: Eduardo Antonio Lozano Hernandez, Nancy Ramirez Alvarez, Lorena Margarita Rios Mendoza, Jose Vinicio Macias Zamora, Felix Augusto Hernandez Guzman, Jose Luis Sanchez Osorio
Abstract:
Microplastics (MPs) are among the most numerous wastes reported in the marine ecosystem, and they represent one of the greatest risks for the organisms that inhabit that environment due to their bioavailability. Such is the case of bivalve mollusks, since they are capable of filtering large volumes of water, which increases the risk of contamination by microplastics through continuous exposure to these materials. This study aims to determine, quantify and characterize the microplastics found in the cultured oyster Crassostrea gigas. We also analyzed whether there are spatio-temporal differences in the microplastic concentration of organisms grown in two bays with quite different human populations. In addition, we wanted to gauge the possible impact on humans via consumption of these organisms. Commercial-size organisms (>6 cm length; n = 15) were collected in triplicate from eight oyster farming sites in Baja California, Mexico, during winter and summer. Two sites are located in Todos Santos Bay (TSB), while the other six are located in San Quintin Bay (SQB). Site selection was based on commercial concessions for oyster farming in each bay. The organisms were chemically digested with 30% KOH (w/v) and 30% H₂O₂ (v/v) to remove the organic matter and subsequently filtered using a GF/D filter. All particles considered possible MPs were quantified according to their physical characteristics using a stereoscopic microscope. The type of synthetic polymer was determined using an FTIR-ATR microscope together with a user library and a commercial reference library (Nicolet iN10, Thermo Scientific, Inc.) of IR spectra of plastic polymers (with a certainty ≥70% for pure polymers and ≥50% for composite polymers). Plastic microfibers were found in all the samples analyzed; however, a low incidence of MP fragments was observed in our study (approximately 9%). The synthetic polymers identified were mainly polyester and polyacrylonitrile, in addition to polyethylene, polypropylene, polystyrene, nylon, and T. elastomer. On average, the content of microplastics in organisms was higher in TSB (0.05 ± 0.01 plastic particles (pp)/g of wet weight) than in SQB (0.02 ± 0.004 pp/g of wet weight) in the winter period. The highest concentration of MPs, found in TSB, coincides with the rainy season in the region, which increases the runoff from streams and wastewater discharges to the bay, as well as with the larger population pressure (>500,000 inhabitants). In contrast, SQB is a mainly rural location, where surface runoff from streams is minimal and there is no wastewater discharge into the bay. During the summer, no significant differences (Mann-Whitney U test; P = 0.484) were observed in the concentration of MPs found in the cultured oysters of TSB and SQB (average: 0.01 ± 0.003 pp/g and 0.01 ± 0.002 pp/g, respectively); a sketch of this comparison follows below. Finally, we concluded that the consumption of oysters does not represent a risk for humans, given the low concentrations of MPs found. The concentration of MPs is influenced by variables such as seasonality, the circulation dynamics of the bay, and the existing demographic pressure.
Keywords: FTIR-ATR, human risk, microplastic, oyster
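The seasonal comparison above uses a nonparametric two-sample test. A minimal Python sketch with synthetic per-oyster concentrations (invented placeholders, not the study's measurements):

```python
import numpy as np
from scipy import stats

# Synthetic MP concentrations (pp/g wet weight) for the two bays in summer
rng = np.random.default_rng(1)
tsb = rng.normal(0.01, 0.003, 15).clip(min=0)
sqb = rng.normal(0.01, 0.002, 15).clip(min=0)

# Nonparametric two-sample comparison, as in the abstract (P = 0.484 there)
u, p = stats.mannwhitneyu(tsb, sqb, alternative="two-sided")
print(f"U = {u:.1f}, P = {p:.3f}")
```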
Procedia PDF Downloads 174
5 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines
Authors: Alexander Guzman Urbina, Atsushi Aoyama
Abstract:
The sustainability of the traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. When making decisions related to the safety of industrial infrastructure, accidental risk values become relevant points of discussion. However, the challenge is the reliability of the models employed to obtain the risk data: such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as Neuro-Fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, we argue that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by the lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of a failure, industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to accurately evaluate the probabilities of failure of the infrastructure, and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and costs due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with that complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of using a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating the human expertise in scoring risks and setting tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
Keywords: deep learning, risk assessment, neuro fuzzy, pipelines
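GMDH builds layers of small quadratic models over pairs of inputs and keeps the ones that score best on held-out data (the external criterion). A minimal selection step in Python, with random data standing in for risk factors and risk scores (illustrative only, not the paper's trained model):

```python
import numpy as np
from itertools import combinations

def _design(xi, xj):
    """Quadratic bivariate polynomial terms used by classic GMDH units."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """One GMDH step: fit a quadratic model for every pair of inputs on the
    training data, rank by validation error, and keep the best candidates."""
    candidates = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        A = _design(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        pred_val = _design(X_val[:, i], X_val[:, j]) @ coef
        err = np.mean((y_val - pred_val) ** 2)   # external criterion
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]

# Toy usage: 5 risk factors, 100 observations, 70/30 train/validation split
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
best = gmdh_layer(X[:70], y[:70], X[70:], y[70:])
print([(round(e, 3), i, j) for e, i, j, _ in best])
```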
Procedia PDF Downloads 292
4 Learning Trajectories of Mexican Language Teachers: A Cross-Cultural Comparative Study
Authors: Alberto Mora-Vazquez, Nelly Paulina Trejo Guzmán
Abstract:
This study examines the learning trajectories of twelve language teachers who were former students of a BA in applied linguistics at a Mexican state university. In particular, the study compares the social, academic and professional trajectories of two groups of teachers: six locally raised and educated ones, and six repatriated from the U.S. Our interest in undertaking this research lies in the wide variety of student backgrounds that we, as professors in the BA program, have witnessed over the years. Ever since the academic program started in 2006, the student population has been made up of students whose backgrounds are highly diverse in terms of English language proficiency, professional orientation and degree of cross-cultural awareness. Such diversity is further evidenced by the ongoing incorporation of transnational students who had lived and studied in the United States for a significant period of time before their enrollment in the BA program. This is not an isolated event, as other researchers have reported the phenomenon in other TESOL-related programs of Mexican universities; it suggests that these students' social and educational experiences are quite different from those of their Mexican-born and educated counterparts. In addition, an informal comparison of the two groups' participation in formal teaching activities at the beginning of their careers suggested significant differences in their teacher training and development needs. This issue raised questions about the need to examine the life and learning trajectories of these two groups of student teachers, so as to develop an intervention plan aimed at supporting and encouraging their academic and professional advancement based on their particular needs. To achieve this goal, the study makes use of a combination of retrospective life-history research and the analysis of academic documents. The first approach uses interviews for data collection: through a narrative life-history interview protocol, teachers were asked about their childhood home context, their language learning and teaching experiences, their stories of studying applied linguistics, and their self-description. For the analysis of the participants' educational outcomes, a wide range of academic records was used, including reports of language proficiency exam results and language teacher training certificates. The analysis revealed marked differences between the two groups of teachers in terms of academic and professional orientations. The locally educated teachers tended to graduate first, to look for further educational opportunities after graduation, to enter the language teaching profession earlier, and to expand their professional development options more than their peers. It is argued that these differences can be explained by their identities, which are made up of the interplay of influences such as their home context, their previous educational experiences and their cultural background. Implications for language teacher trainers and applied linguistics academic program administrators are provided.
Keywords: beginning language teachers, life-history research, Mexican context, transnational students
Procedia PDF Downloads 419
3 Implementation of Real-World Learning Experiences in Teaching Courses of Medical Microbiology and Dietetics for Health Science Students
Authors: Miriam I. Jimenez-Perez, Mariana C. Orellana-Haro, Carolina Guzman-Brambila
Abstract:
As part of microbiology and dietetics courses, students of medicine and nutrition analyze the main pathogenic microorganisms and perform dietary analyses. The microbiology course describes, in a general way, the main pathogens, including bacteria, viruses, fungi, and parasites, as well as their interaction with the human species. We hypothesize that the lack of practical application in the course causes students not to see its value and clinical application, when in reality it is a matter of great importance for healthcare in our country. The medical microbiology and dietetics courses are mostly theoretical, with only a few hours of laboratory practice. Therefore, it is necessary to incorporate new, innovative techniques that involve more practice and community fieldwork, real-case analysis and real-life situations. The purpose of this intervention was to incorporate real-world learning experiences into the instruction of the medical microbiology and dietetics courses, in order to improve the learning process, understanding, and application in the field. During a period of 6 months, medicine and nutrition students worked in a community of urban poverty. We worked with 90 children between 4 and 6 years of age from low-income families with no access to medical services, in order to give an infectious diagnosis related to the nutritional status of these children. We expected this intervention to give medical microbiology and dietetics students a different kind of context, improving their learning process and letting them apply their knowledge and laboratory practice to help a community in need. First, students learned basic microbiology diagnostic skills during laboratory sessions. Once students had acquired the abilities to perform biochemical tests and handle biological samples, they went to the community and took stool samples from the children (with the corresponding informed consent). Students processed the samples in the laboratory, searching for enteropathogenic microorganisms with the RapID™ ONE system (Thermo Scientific™) and for parasites using the modified Willis and Malloy technique. Finally, they compared the results with the nutritional status of the children, previously measured by anthropometric indicators. The anthropometric results were interpreted with the WHO Anthro software (WHO, 2011). The microbiological results were interpreted with the ERIC® Electronic RapID™ Code Compendium software and validated by a physician. The results were analyses of infectious outcomes and nutritional status. Through these community fieldwork learning experiences, our students improved their knowledge of microbiology and were capable of applying it in a real-life situation. They found this kind of learning useful when translating theory into real-life situations. For most of our students, this is their first contact as health caregivers with a real population, and this contact is very important in helping them understand the reality of many people in Mexico. In conclusion, real-world or fieldwork learning experiences empower our students to gain a real and better understanding of how they can apply their knowledge of microbiology and dietetics and help a population in great need, which is the reality that many people live in our country.
Keywords: real-world learning experiences, medical microbiology, dietetics, nutritional status, infectious status
Procedia PDF Downloads 132
2 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and the time to market of business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery (a sketch of this flow follows below). The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more important disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure, since linuxkit installs the minimum set of dependencies required to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
Keywords: container, deployment, immutable infrastructure, microservice
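The following Python sketch mimics that flow end to end: a declarative set of microservices (each a pod of containers) is expanded into per-service machine images, virtual machines behind a load balancer, and environment-variable service discovery. The input schema, field names, and endpoint naming are invented for illustration; the abstract does not show i2kit's actual definition format.

```python
# Hypothetical declarative definition: each microservice is a pod of containers
architecture = {
    "frontend": {"containers": ["web:1.4", "log-sidecar:2.0"],
                 "replicas": 2, "talks_to": ["api"]},
    "api":      {"containers": ["api:3.1"], "replicas": 2, "talks_to": []},
}

def plan_deployment(arch):
    """Expand the declarative definition into immutable deployment units."""
    plan = []
    for name, svc in arch.items():
        plan.append({
            "service": name,
            # One machine image per pod, built from its containers (linuxkit role)
            "image": "linuxkit-build(" + "+".join(svc["containers"]) + ")",
            "vm_count": svc["replicas"],          # one VM per replica
            "load_balancer": f"{name}-lb",        # cloud vendor load balancer
            # Service discovery: dependency endpoints injected as env vars
            "env": {f"{dep.upper()}_ENDPOINT": f"{dep}-lb"
                    for dep in svc["talks_to"]},
        })
    return plan

for unit in plan_deployment(architecture):
    print(unit)
```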
Procedia PDF Downloads 179
1 Design of DNA Origami Structures Using LAMP Products as a Combined System for the Detection of Extended Spectrum B-Lactamases
Authors: Kalaumari Mayoral-Peña, Ana I. Montejano-Montelongo, Josué Reyes-Muñoz, Gonzalo A. Ortiz-Mancilla, Mayrin Rodríguez-Cruz, Víctor Hernández-Villalobos, Jesús A. Guzmán-López, Santiago García-Jacobo, Iván Licona-Vázquez, Grisel Fierros-Romero, Rosario Flores-Vallejo
Abstract:
The β-lactam group of antibiotics includes some of the most frequently used small drug molecules against bacterial infections. Nevertheless, an alarming decrease in their efficacy has been reported due to the emergence of antibiotic-resistant bacteria. Infections caused by bacteria expressing extended-spectrum β-lactamases (ESBLs) are difficult to treat and account for higher morbidity and mortality rates, delayed recovery, and a high economic burden. According to the Global Report on Antimicrobial Resistance Surveillance, it is estimated that mortality due to resistant bacteria will rise to 10 million cases per year worldwide. These facts highlight the importance of developing low-cost and readily accessible detection methods for drug-resistant, ESBL-producing bacteria, to prevent their spread and promote accurate and fast diagnosis. Bacterial detection is commonly done using molecular diagnostic techniques, among which PCR stands out for its high performance. However, this technique requires specialized equipment that is not available everywhere, is time-consuming, and has a high cost. Loop-Mediated Isothermal Amplification (LAMP) is an alternative technique that works at a constant temperature, significantly decreasing the equipment cost. It yields as product double-stranded DNA of several lengths containing repetitions of the target DNA sequence. Although positive and negative LAMP results can be discriminated by colorimetry, fluorescence, and turbidity, there is still large room for improvement in point-of-care implementation. DNA origami is a technique that allows the formation of 3D nanometric structures by folding a large single-stranded DNA (the scaffold) into a determined shape with the help of short DNA sequences (staples), which hybridize with the scaffold. This research aimed to generate DNA origami structures using LAMP products as scaffolds, to improve the sensitivity of ESBL detection in point-of-care diagnosis. For this study, the coding sequence of the CTX-M-15 ESBL of E. coli was used to generate the LAMP products. The set of LAMP primers was designed using PrimerExplorer V5. As a result, a target sequence of 200 nucleotides from the CTX-M-15 ESBL was obtained. Afterward, eight different DNA origami structures were designed from the target sequence using SDCadnano and analyzed with CanDo to evaluate the stability of the 3D structures. The designs were constructed minimizing the total number of staples, to reduce costs and complexity for point-of-care applications. After analyzing the DNA origami designs, two structures were selected: the first was a flat zig-zag structure, while the second was wall-like. Given the sequence repetitions in the scaffold, both could be assembled with only six different staples each, ranging from 18 to 80 nucleotides (a sketch of this repeat-based staple design follows below). Simulations of both structures were performed using scaffolds of different sizes, yielding stable structures in all cases. The generation of the LAMP products was tested by colorimetry and electrophoresis. The formation of the DNA structures was analyzed using electrophoresis and colorimetry. The modeling of novel detection methods through bioinformatics tools allows reliable control and prediction of results. To our knowledge, this is the first study that uses LAMP products and DNA origami in combination to detect ESBL-producing bacterial strains, which represents a promising methodology for point-of-care diagnosis.
Keywords: beta-lactamases, antibiotic resistance, DNA origami, isothermal amplification, LAMP technique, molecular diagnosis
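As a toy illustration of why a repetitive LAMP-derived scaffold needs so few distinct staples, the following Python sketch locates staple binding sites by reverse-complement matching. All sequences are invented placeholders, not the CTX-M-15 target or the study's staples.

```python
COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def binding_sites(scaffold: str, staple: str):
    """Return every scaffold index where the staple can base-pair.
    A repetitive LAMP product yields multiple sites per staple,
    which is why few distinct staples suffice."""
    target = reverse_complement(staple)
    return [i for i in range(len(scaffold) - len(target) + 1)
            if scaffold[i:i + len(target)] == target]

unit = "ATGGTTAAAGTATGTGCA"              # placeholder repeat unit
scaffold = unit * 4                      # LAMP products repeat the target
staple = reverse_complement(unit[:12])   # staple complementary to part of the unit
print(binding_sites(scaffold, staple))   # [0, 18, 36, 54]
```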
Procedia PDF Downloads 222