Search results for: Eduardo Garcia Agustin
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 416

116 A Tale of Seven Districts: Reviewing The Past, Present and Future of Patent Litigation Filings to Form a Two-Step Burden-Shifting Framework for 28 U.S.C. § 1404(a)

Authors: Timothy T. Hsieh

Abstract:

Current patent venue transfer law under 28 U.S.C. § 1404(a) (e.g., the Gilbert factors from Gulf Oil Corp. v. Gilbert, 330 U.S. 501 (1947)) is too malleable, in that it often leads to frequent mandamus orders from the U.S. Court of Appeals for the Federal Circuit ("Federal Circuit") overturning district court rulings on venue transfer motions. Thus, this paper proposes a more robust two-step burden-shifting framework to replace the eight Gilbert factors. A brief history of venue transfer patterns in the seven most active federal patent district courts is also covered, with special focus on the venue transfer orders of Judge Alan D. Albright of the U.S. District Court for the Western District of Texas. A comprehensive data summary of 45 case sets in which the Federal Circuit ruled on writs of mandamus involving Judge Albright's transfer orders is then provided, with coverage summaries of certain cases, including four precedential Federal Circuit decisions. The proposed two-step burden-shifting framework is then applied to these venue transfer cases, as well as to the Federal Circuit mandamus orders ruling on them. Finally, alternative approaches to remedying the frequent reversals for venue transfer are discussed, including potential legislative solutions, adjustments to common-law approaches to venue transfer, deference to the inherent powers of Article III U.S. District Judges, and a unified federal patent district court. Overall, this paper seeks to offer a more robust and consistent two-step burden-shifting framework for venue transfer and for the Federal Circuit to follow in administering mandamus orders, which might change somewhat in light of Western District of Texas Chief Judge Orlando Garcia's order redistributing Judge Albright's patent cases.

Keywords: patent law, venue, Judge Alan Albright, minimum contacts, Western District of Texas

Procedia PDF Downloads 73
115 Patients' Out-Of-Pocket Expenses-Effectiveness Analysis of Presurgical Teledermatology

Authors: Felipa De Mello-Sampayo

Abstract:

Background: The aim of this study is to undertake, from a patient perspective, an economic analysis of presurgical teledermatology, comparing it with a conventional referral system. Store-and-forward teledermatology allows surgical planning, saving both time and the number of visits involving travel, thereby reducing patients' out-of-pocket expenses, i.e., the costs patients incur when traveling to and from health providers for treatment, visit fees, and the opportunity cost of time spent in visits. Method: The out-of-pocket expenses-effectiveness of presurgical teledermatology was analyzed in the setting of a public hospital over two years, with the mean delay until surgery used to measure effectiveness. The teledermatology network covering the area served by the Hospital Garcia da Horta (HGO), Portugal, linked the primary care centers of 24 health districts with the hospital's dermatology department. The patients' opportunity cost of visits, travel costs, and visit fees for each presurgical modality (teledermatology and conventional referral), the cost ratio between the most and least expensive alternative, and the incremental cost-effectiveness ratio were calculated from the initial primary care visit until surgical intervention. Two groups of patients, those with squamous cell carcinoma and those with basal cell carcinoma, were distinguished in order to compare effectiveness according to the dermatosis. Results: From a patient perspective, the conventional system was 2.15 times more expensive than presurgical teledermatology. Teledermatology had an incremental out-of-pocket expenses-effectiveness ratio of €1.22 per patient and per day of delay avoided; this saving was greater in patients with squamous cell carcinoma than in patients with basal cell carcinoma.
Conclusion: From a patient economic perspective, teledermatology used for presurgical planning and preparation is the dominant strategy in terms of out-of-pocket expenses-effectiveness compared with the conventional referral system, especially for patients with severe dermatoses.
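The two patient-cost metrics named in this abstract, the cost ratio between alternatives and the incremental out-of-pocket expenses-effectiveness ratio, reduce to simple arithmetic. A minimal sketch with hypothetical per-patient figures (not the study's data; only the reported summary ratios, 2.15 and €1.22/day, come from the abstract):

```python
# Cost ratio and incremental out-of-pocket expenses-effectiveness ratio.
# All per-patient figures below are hypothetical, chosen only to reproduce
# the ratios reported in the abstract (2.15 and 1.22 EUR/day).

def icer(cost_a, cost_b, delay_a, delay_b):
    """Extra cost per day of surgical delay avoided.

    a = conventional referral, b = presurgical teledermatology.
    """
    return (cost_a - cost_b) / (delay_a - delay_b)

conventional_cost, tele_cost = 86.0, 40.0      # EUR per patient (hypothetical)
conventional_delay, tele_delay = 100.0, 62.3   # days until surgery (hypothetical)

ratio = conventional_cost / tele_cost          # cost ratio between alternatives
per_day = icer(conventional_cost, tele_cost, conventional_delay, tele_delay)
print(round(ratio, 2), round(per_day, 2))      # 2.15 1.22
```

Any pair of cost and delay figures with the same ratios would reproduce the reported summary statistics; the point here is the formula, not the numbers.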

Keywords: economic analysis, out-of-pocket expenses, opportunity cost, teledermatology, waiting time

Procedia PDF Downloads 119
114 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder

Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo

Abstract:

Uncertainty analysis in industrial computed tomography is commonly tied to metrologically traceable tools, which offer precision measurements of external part features. Unfortunately, there is no such reference tool for internal measurements that would exploit the unique imaging potential of X-rays, and uncertainty approximations for computed tomography are still based on general aspects of the industrial machine, without adapting to the acquisition parameters or part characteristics. The present study investigates the impact of the acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominantly influenced by photon energy dissipated as heat, which causes thermal expansion of the metal, as monitored by an infrared camera inside the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This early study shows that the number of projections in computed tomography induces dimensional error due to energy absorption; the error magnitude depends on the thermal properties of the sample and on the acquisition parameters, and manifests as apparent, non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a particular metal part of symmetric geometry. We assess that the current results are of fundamental value for striking a balance between the number of projections and the uncertainty tolerance when performing X-ray dimensional exploration in precision measurements with industrial tomography.
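The heat-induced error described here is, to first order, ordinary linear thermal expansion. A back-of-the-envelope sketch, where the part size, temperature rise, and expansion coefficient (~17e-6/K for austenitic stainless steel) are all assumed values, not this study's measurements:

```python
# Linear thermal expansion of a stainless steel part during a CT scan.
# alpha is an assumed handbook value; the diameter and temperature rise
# are hypothetical, not the measurements from this study.

def thermal_expansion(length_mm, alpha_per_k, delta_t_k):
    """First-order growth of a dimension: dL = L * alpha * dT (in mm)."""
    return length_mm * alpha_per_k * delta_t_k

diameter = 25.0   # mm, hypothetical cylinder diameter
alpha = 17e-6     # 1/K, typical of austenitic stainless steel
d_temp = 5.0      # K, hypothetical heating over the acquisition

growth_um = thermal_expansion(diameter, alpha, d_temp) * 1000.0
print(round(growth_um, 1))   # micrometres of apparent dimensional change
```

A few kelvin of heating already produces micrometre-scale growth, which is why long acquisitions with many projections can matter for precision metrology.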

Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion

Procedia PDF Downloads 97
113 The Use of Punctuation by Primary School Students Writing Texts Collaboratively: A Franco-Brazilian Comparative Study

Authors: Cristina Felipeto, Catherine Bore, Eduardo Calil

Abstract:

This work aims to analyze and compare the punctuation marks (PMs) in school texts by Brazilian and French students and the comments on these PMs made spontaneously by the students while the text was in progress. Assuming textual genetics as an investigative field within a dialogical and enunciative approach, we defined a common methodological design in two first-year primary school classrooms (7-year-olds), one in Brazil (Maceio) and the other in France (Paris). Through a multimodal capture system of writing processes in real time and space (Ramos System), we recorded the collaborative writing proposals in dyads in each of the classrooms. This system preserves the classroom's ecological characteristics and provides a video recording synchronized with the dialogues, gestures, and facial expressions of the students, the stroke of the pen's ink on the sheet of paper, and the movement of the teacher and students in the classroom. The multimodal record of the writing process gave access to the text in progress and to the comments made by the students on what was being written. In each proposed text production, teachers organized their students in dyads and asked them to talk, plan together, and write a fictional narrative. We selected one dyad of Brazilian students (BD) and one dyad of French students (FD) and filmed 6 proposals for each dyad. The proposals were collected during the 2nd term of 2013 (Brazil) and 2014 (France). In the 6 texts written by the BD, 39 PMs and 825 written words were identified (on average, one PM every 23 words); of these 39 PMs, 27 were highlighted orally and commented on by one of the students. In the texts written by the FD, 48 PMs and 258 written words were identified (on average, one PM every 5 words); of these 48 PMs, 39 were commented on by the French students. Unlike what studies on punctuation acquisition point out, the PMs that occurred the most were hyphens (BD) and commas (FD).
Despite the significant difference between the types and quantities of PMs in the written texts, the recognition of the need to write PMs in the ongoing text and the comments share some common characteristics: i) the writing of PMs was not anticipated in relation to the ongoing text; rather, they were added after the end of a sentence or after the text was finished; ii) the need to add punctuation marks arose after one of the students 'remembered' that a particular sign was needed; iii) most of the PMs inscribed were related not to their linguistic functions but to the graphic-visual features of the text; iv) the comments justify or explain the PMs, indicating metalinguistic reflections made by the students. Our results indicate how the comments of the BD and FD express the dialogic and subjective nature of knowledge acquisition. Our study suggests that the initial learning of PMs depends more on their graphic features and on interactional conditions than on their linguistic functions.

Keywords: collaborative writing, erasure, graphic marks, learning, metalinguistic awareness, textual genesis

Procedia PDF Downloads 141
112 Diaper Dermatitis and Pancytopenia as the Primary Manifestation in an Infant with Vitamin B12 Deficiency

Authors: Ekaterina Sánchez Romero, Emily Gabriela Aguirre Herrera, Sandra Luz Espinoza Esquerra, Jorge García Campos

Abstract:

Female, 7 months old, daughter of a mother with anemia during pregnancy and with no family history of atopy. Since birth she presented recurrent dermatological and gastrointestinal infections and was chronically treated for recurrent diaper dermatitis. At 6 months of age she developed generalized pallor, hyperpigmentation of the hands and feet, smooth tongue, and psychomotor retardation with lack of head support, somnolence, and hypoactivity. She was referred to our hospital for a fever of 38°C, severe diaper rash, and pancytopenia, with Hb 9.3, platelets 38,000, neutrophils 0.39, and MCV 86.80 (high for her age). A workup was initiated to rule out myeloproliferative syndrome, with negative immunohistochemical results on bone marrow aspirate. During her stay she presented neurological regression, loss of sucking, and focal seizures, and a CT scan showed cortical atrophy. Primary immunodeficiency was suspected on the basis of the history; gamma globulin was administered without improvement, and immunoglobulin and metabolic screening results were normal. Once dermatological and neurological diagnoses were ruled out as the primary cause, a nutritional factor was evaluated, and a therapeutic trial of vitamin B12 and zinc was started, with clinical neurological improvement and resolution of the pancytopenia within 2 months. Outpatient management was continued. Discussion: We present a patient with neurological and dermatological involvement and pancytopenia, in whom the most common differential diagnoses in this population were ruled out. Vitamin B12 deficiency is an uncommon entity; given the maternal and clinical history, a therapeutic trial was started, resulting in improvement. Conclusion: Vitamin B12 deficiency should be considered among the differential diagnoses in the approach to pancytopenia with megaloblastic anemia associated with dermatologic and neurologic manifestations. Early treatment can reduce irreversible damage in these patients.

Keywords: vitamin B12 deficiency, pediatrics, pancytopenia, diaper dermatitis

Procedia PDF Downloads 67
111 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy

Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos

Abstract:

Financial literacy and numeracy have been regarded as paramount for rational household decision-making amid the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision-making, modulate possible effects of cognitive load. Participants were required to choose between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was performed while maintaining a list of five random letters in working memory; in the second, cognitive load was based upon the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy or financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision-making. Conversely, in the second experiment, a significant interaction between the competing memory task and the level of financial literacy (but not numeracy) was found for the frequency of choosing the gambling option. Overall, in the control condition, both participants with high financial literacy and those with high numeracy were more prone to choose the gambling option. However, when under cognitive load, participants with high financial literacy were as likely as their less literate counterparts to choose the gambling option.
This outcome is interpreted as evidence that financial literacy prevents intuitive risk-aversion reasoning only under highly favourable conditions, as is the case when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gambling option in both experimental conditions. These results are discussed in the light of the opposition between classical dual-process theories and fuzzy-trace theories of intuitive decision-making, suggesting that while some instances of expertise (such as numeracy) tend to support easily accessible gist representations, other expert skills (such as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which they are generalizable across disparate settings. Finally, the applied implications of the present study are discussed, with a focus on how it informs financial regulators and on the importance and limits of promoting financial literacy and general numeracy.

Keywords: decision making, cognitive load, financial literacy, numeracy

Procedia PDF Downloads 150
110 Comparative Ante-Mortem Studies through Electrochemical Impedance Spectroscopy, Differential Voltage Analysis and Incremental Capacity Analysis on Lithium Ion Batteries

Authors: Ana Maria Igual-Munoz, Juan Gilabert, Marta Garcia, Alfredo Quijano-Lopez

Abstract:

Nowadays, several lithium-ion battery technologies are being commercialized. These chemistries present different properties that make them more suitable for different purposes; however, comparative studies showing the advantages and disadvantages of the different chemistries are incomplete or scarce. Several non-destructive techniques are currently employed to detect how ageing affects the active materials of lithium-ion batteries (LIBs). Electrochemical impedance spectroscopy (EIS) is one of the most widely used; it allows the user to identify variations in the different resistances present in LIBs. Differential voltage analysis (DVA), in turn, has been shown to be a powerful technique for detecting the processes affecting the capacities present in LIBs, revealing variations in the state of health (SOH) and in the capacities of one or both electrodes, depending on their chemistry. Finally, incremental capacity analysis (ICA) is widely known for its capability of detecting phase equilibria; reminiscent of the commonly used cyclic voltammetry, it allows some of the reactions taking place in the electrodes to be detected. In this study, a set of ageing procedures has been applied to commercial batteries of different chemistries (NCA, NMC, and LFP). Afterwards, results from EIS, DVA, and ICA have been correlated with the processes affecting each cell. Cyclability, overpotential, and temperature cycling studies show how the charge-discharge rates, cut-off voltages, and operating temperatures affect each chemistry. These studies will serve battery pack manufacturers, as well as common battery users, by determining the conditions affecting cells of each chemistry; with this knowledge, each cell could be matched to the final purpose of the battery application.
Last but not least, all the degradation parameters observed are intended to be integrated into degradation models in the future, which will allow the implementation of the widely known digital twins for degradation in LIBs.
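The DVA and ICA signals named in this abstract are finite-difference derivatives of the charge curve. A minimal sketch on a synthetic (not measured) curve, where dV/dQ is the DVA signal and its reciprocal dQ/dV is the ICA signal:

```python
# Differential voltage (dV/dQ, DVA) and incremental capacity (dQ/dV, ICA)
# computed by finite differences from a synthetic charge curve. The curve
# below is illustrative only, not measured data for any chemistry.

def finite_diff(y, x):
    """Forward finite-difference derivative dy/dx over each interval."""
    return [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(y) - 1)]

q = [i * 0.1 for i in range(21)]                                   # capacity, 0..2.0 Ah
v = [3.0 + 0.3 * (qi / 2.0) + 0.35 * (qi / 2.0) ** 2 for qi in q]  # voltage, V

dv_dq = finite_diff(v, q)            # DVA signal: peaks mark electrode features
dq_dv = [1.0 / d for d in dv_dq]     # ICA signal: reciprocal of dV/dQ
print(round(dv_dq[0], 3), round(dv_dq[-1], 3))
```

On real cycling data the same derivatives are usually smoothed first, since measurement noise is amplified by differentiation; peak positions and heights in dQ/dV are then tracked across cycles as ageing markers.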

Keywords: lithium ion batteries, non-destructive analysis, different chemistries, ante-mortem studies, ICA, DVA, EIS

Procedia PDF Downloads 103
109 Total Plaque Area in Chronic Renal Failure

Authors: Hernán A. Perez, Luis J. Armando, Néstor H. García

Abstract:

Background and aims: Cardiovascular disease rates are very high in patients with chronic renal failure (CRF), but the underlying mechanisms are incompletely understood. Traditional cardiovascular risk factors do not explain the increased risk, and observational studies have found paradoxical or absent associations between classical risk factors and mortality in dialysis patients. Large randomized controlled trials (the 4D Study, AURORA, and the ALERT study) found that statin therapy in CRF does not reduce cardiovascular events. These results may be the result of the 'accelerated atherosclerosis' observed in these patients. The objective of this study was to investigate whether carotid total plaque area (TPA), a measure of carotid plaque burden, increases at progressively lower creatinine clearance in patients with CRF. We studied a cohort of patients with CRF not on dialysis, reasoning that risk factor associations might be more easily discerned before end-stage renal disease. Methods: The Blossom DMO Argentina ethics committee approved the study, and informed consent was obtained from each participant. We performed a cohort study in 412 patients with stage 1, 2, and 3 CRF. Clinical and laboratory data were obtained, and TPA was determined using bilateral carotid ultrasonography. The Modification of Diet in Renal Disease estimation formula was used to determine renal function, and ANOVA was used when appropriate. Results: The stage 1 CRF group (n=16, 43±2 yo) had a blood pressure of 123±2/78±2 mmHg, BMI 30±1, LDL cholesterol 145±10 mg/dl, HbA1c 5.8±0.4%, and the lowest TPA, 25.8±6.9 mm². Stage 2 CRF (n=231, 50±1 yo) had a blood pressure of 132±1/81±1 mmHg, LDL cholesterol 125±2 mg/dl, HbA1c 6±0.1%, and TPA 48±10 mm² (p<0.05 vs. CRF stage 1), while stage 3 CRF (n=165, 59±1 yo) had a blood pressure of 134±1/81±1 mmHg, LDL cholesterol 125±3 mg/dl, HbA1c 6±0.1%, and TPA 71±6 mm² (p<0.05 vs. CRF stages 1 and 2).
Conclusion: Our data indicate that TPA increases as renal function deteriorates, and that it is not related to LDL cholesterol or triglyceride levels. We suggest that mechanisms other than the classical ones are responsible for the observed excess of cardiovascular disease in CKD patients; finally, determination of total plaque area should be used to measure the effects of antiatherosclerotic therapy.

Keywords: hypertension, chronic renal failure, atherosclerosis, cholesterol

Procedia PDF Downloads 247
108 Wicking Bed Cultivation System as a Strategic Proposal for the Cultivation of Milpa and Mexican Medicinal Plants in Urban Spaces

Authors: David Lynch Steinicke, Citlali Aguilera Lira, Andrea León García

Abstract:

The proposal posed in this work comes from a research-action approach. In Mexico, a dialogue of knowledge may function as a link between traditional, local, pragmatic knowledge and technological, scientific knowledge. The advantage of generating this nexus lies in its positive impact on the environment, society, and the economy. This work attempts to combine, on the one hand, traditional Mexican knowledge such as the use of medicinal herbs and the milpa agroecosystem, and on the other, a newly created agricultural ecotechnology whose main functions are to take advantage of urban space and to save water. This ecotechnology is the wicking bed. In a globalized world, it is relevant to have a proposal whose most important aspect is to revalue culture through the acquisition of traditional knowledge while adapting it to new social and urbanized structures without threatening the environment. The methodology combines a research-action approach with a practical dimension, in which an experimental model made of three wicking beds was implemented and planted with medicinal herbs and milpa components. Water efficiency and social acceptance were compared with a traditional ground crop, and the whole exercise was carried out in an urban social context. The ecotechnology has had great social acceptance, as its irrigation involves minimal effort and it is economically feasible for low-income people. The wicking bed system raised in this project can be implemented in schools, urban and peri-urban environments, home gardens, and public areas. The proposal managed to carry out an innovative and sustainable knowledge-based traditional Mexican agricultural technology, allowing the milpa agroecosystem to be regained in urban environments and strengthening food security in favour of nutritional and protein benefits for the Mexican diet.

Keywords: milpa, traditional medicine, urban agriculture, wicking bed

Procedia PDF Downloads 360
107 Cognitivism in Classical Japanese Art and Literature: The Cognitive Value of Haiku and Zen Painting

Authors: Benito Garcia-Valero

Abstract:

This paper analyses the cognitivist value of traditional Japanese theories of aesthetics, art, and literature, reflections developed centuries before modern cognitive studies, which began in the 1970s. A comparative methodology is employed to shed light on the similarities between traditional Japanese conceptions of art and current cognitivist principles. The Japanese texts compared are Zeami's treatise on noh art, Okura Toraaki's Waranbe-gusa on kabuki theatre, and several Buddhist canonical texts about wisdom and knowledge, like the Prajnaparamitahrdaya or Heart Sutra. Japanese contemporary critical sources on these works are also cited, like Nishida Kitaro's reflections on Zen painting and Ichikawa Hiroshi's analysis of body/mind dualism in Japanese physical practices. Their ideas are compared with those of cognitivist authors like George Lakoff, Mark Johnson, Mark Turner, and Margaret Freeman. This comparative review reveals the anticipatory ideas of Japanese thinking on the body/mind interrelationship, which agrees with cognitivist criticism of dualism, since both elucidate the physical grounds acting upon the formation of concepts and schemes during the production of knowledge. It also highlights the necessity of recovering ancient Japanese treatises on cognition to continue informing current research on art and literature. The artistic examples used to illustrate the theory are Sesshu's Zen paintings and Basho's classical haiku poetry. Zen painting is an excellent field in which to demonstrate how monk artists conceived human perception and intuited the active role of beholders during the contemplation of art. On the other hand, some haikus by Matsuo Basho aim at factoring subjectivity out of artistic praxis, an ideal of illumination that cannot be fully achieved through art due to the embodied nature of perception, a constraint consciously explored by the poet himself.
These ideas consolidate the conclusions drawn today by cognitivism about the interrelation between subject and object and the concept of intersubjectivity.

Keywords: cognitivism, dualism, haiku, Zen painting

Procedia PDF Downloads 114
106 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels, and many of them require an explicit limit state function (LSF). When the LSF is not available as a closed-form expression, simulation techniques are often employed; these are computationally intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM), or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative; it has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed that allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing the reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach thus mixes the PEM with another classic reliability method, the first-order reliability method (FORM).
The results in the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM, or computational mechanics are employed.
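As a hedged illustration of the mixed approach (not the authors' implementation), Rosenblueth's two-point estimate method can supply the first two moments of an implicit LSF; fitting a normal distribution to those moments gives a FORM-style reliability index, which a crude Monte Carlo run can cross-check. The limit state g = R - S and all statistics below are hypothetical:

```python
# Rosenblueth's two-point estimate method (PEM) for the moments of an
# implicit limit state function, a normal fit for the reliability index
# (FORM-style), and a crude Monte Carlo cross-check. The limit state
# g = R - S and all statistics are hypothetical, standing in for a FEM
# response with no closed form.
import itertools
import math
import random

def g(r, s):
    return r - s  # safe when resistance exceeds load effect

def pem_moments(func, means, stds):
    """Evaluate func at all mean +/- std corners, with equal weights 1/2^n."""
    vals = []
    for signs in itertools.product((-1.0, 1.0), repeat=len(means)):
        vals.append(func(*[m + sg * sd for m, sg, sd in zip(means, signs, stds)]))
    mu = sum(vals) / len(vals)
    var = sum((v - mu) ** 2 for v in vals) / len(vals)
    return mu, math.sqrt(var)

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

means, stds = [10.0, 6.0], [1.0, 1.5]       # hypothetical R and S statistics
mu_g, sd_g = pem_moments(g, means, stds)
beta = mu_g / sd_g                          # reliability index from normal fit
pf_pem = std_normal_cdf(-beta)              # failure probability, PEM + normal fit

random.seed(1)                              # Monte Carlo cross-check
n = 200_000
fails = sum(g(random.gauss(10.0, 1.0), random.gauss(6.0, 1.5)) < 0.0
            for _ in range(n))
pf_mcs = fails / n
print(round(beta, 3), round(pf_pem, 4), round(pf_mcs, 4))
```

Because this toy g is linear in Gaussian variables, the normal fit is exact and the two failure probabilities agree closely; for a nonlinear FEM response, the normal fit is precisely the approximation that the well-known-distribution step in the paper addresses.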

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, monte carlo simulation

Procedia PDF Downloads 329
105 A Bayesian Approach for Health Workforce Planning in Portugal

Authors: Diana F. Lopes, Jorge Simoes, José Martins, Eduardo Castro

Abstract:

Health professionals are the keystone of any health system, delivering health services to the population. Given the time and cost involved in training new health professionals, the planning of the health workforce is particularly important: it ensures a proper balance between the supply and demand of these professionals, and it plays a central role in the Health 2020 policy. In the past 40 years, health workforce planning in Portugal has been conducted in a reactive way, lacking a prospective vision based on an integrated, comprehensive, and valid analysis. This situation may compromise not only productivity and overall socio-economic development but also the quality of the healthcare services delivered to patients. This is even more critical given the expected shortage of the health workforce in the future. Furthermore, Portugal is facing the aging of some professional classes (physicians and nurses): in 2015, 54% of physicians in Portugal were over 50 years old, and 30% were over 60. This phenomenon, together with the increasing emigration of young health professionals and changes in citizens' illness profiles and expectations, must be considered when planning healthcare resources. The prospect of sudden retirement of large groups of professionals within a short time is also a major problem to address. Another challenge is the health workforce imbalance: Portugal has one of the lowest nurse-to-physician ratios, 1.5, below the European Region and OECD averages (2.2 and 2.8, respectively).
Within the scope of the HEALTH 2040 project, which aims to estimate the 'future needs of human health resources in Portugal till 2040', the present study takes a comprehensive, dynamic approach to the problem by (i) estimating the needs of physicians and nurses in Portugal, by specialty and by quinquennium, until 2040; (ii) identifying the training needs of physicians and nurses in the medium and long term, until 2040; and (iii) estimating the number of students that must be admitted into the medicine and nursing training systems each year, considering the different categories of specialties. Developing such an approach is all the more critical in a context of limited budget resources and changing healthcare needs. The study presents the drivers of the evolution of healthcare needs (such as demographic and technological evolution and the future expectations of the users of the health systems) and proposes a Bayesian methodology, combining the best available data with expert opinion, to model this evolution. Preliminary results considering different plausible scenarios are presented. The proposed methodology will be integrated into a user-friendly decision support system so that it can be used by policy makers, with the potential to measure the impact of health policies at both the regional and the national level.
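As a sketch of the kind of Bayesian estimation described (all figures hypothetical, not the project's data), an expert prior on the annual physician attrition rate can be combined with observed rates through a conjugate Normal model, and the posterior mean can then drive a simple stock projection by quinquennium:

```python
# Conjugate Normal update (known observation sd) of an expert prior on the
# annual physician attrition rate, followed by a deterministic workforce
# stock projection by quinquennium. Every number here is hypothetical.

def posterior_rate(prior_mean, prior_sd, obs, obs_sd):
    """Posterior mean/sd of the rate, combining prior and observed rates."""
    n = len(obs)
    ybar = sum(obs) / n
    precision = 1.0 / prior_sd ** 2 + n / obs_sd ** 2
    mean = (prior_mean / prior_sd ** 2 + n * ybar / obs_sd ** 2) / precision
    return mean, precision ** -0.5

def project(stock, annual_intake, attrition, horizon_years):
    """Workforce stock at the start and at the end of each quinquennium."""
    path = [round(stock)]
    for _ in range(horizon_years // 5):
        for _ in range(5):
            stock = stock * (1.0 - attrition) + annual_intake
        path.append(round(stock))
    return path

# Expert prior: ~3% annual attrition; three observed annual rates (hypothetical).
rate, rate_sd = posterior_rate(0.030, 0.010, [0.026, 0.031, 0.028], 0.004)
path = project(50_000, 1_500, rate, 25)
print(round(rate, 4), path)
```

The full methodology would replace this point projection with posterior predictive simulation, by specialty, so that scenario uncertainty propagates into the supply estimates.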

Keywords: bayesian estimation, health economics, health workforce planning, human health resources planning

Procedia PDF Downloads 227
104 3D Interpenetrated Network Based on 1,3-Benzenedicarboxylate and 1,2-Bis(4-Pyridyl) Ethane

Authors: Laura Bravo-García, Gotzone Barandika, Begoña Bazán, M. Karmele Urtiaga, Luis M. Lezama, María I. Arriortua

Abstract:

Solid coordination networks (SCNs) are materials consisting of metal ions or clusters linked by polyfunctional organic ligands, which can be designed to form three-dimensional frameworks. Their structural features, such as high surface areas, thermal stability, and in some cases large cavities, have opened a wide range of applications in fields like drug delivery, host-guest chemistry, biomedical imaging, chemical sensing, heterogeneous catalysis, and others related to greenhouse gas storage or even separation. In this sense, the use of polycarboxylate anions and dipyridyl ligands is an effective strategy to produce extended structures with the characteristics needed for these applications. In this context, a novel compound, [Cu4(m-BDC)4(bpa)2DMF]•DMF, where m-BDC is 1,3-benzenedicarboxylate and bpa is 1,2-bis(4-pyridyl)ethane, has been obtained by microwave synthesis. The crystal structure can be described as a three-dimensional framework formed by two identical, interpenetrated networks. Each network consists of two different CuII dimers: dimer 1 has two copper atoms with square-pyramidal coordination, while dimer 2 has one copper with square-pyramidal coordination and another with octahedral coordination; the latter dimer is unique in the literature, and the combination of both types of dimers is thus unprecedented. The benzenedicarboxylate ligands form sinusoidal chains between dimers of the same type and also connect the chains, forming layers in the (100) plane. These layers are connected along the [100] direction through the bpa ligand, giving rise to a 3D network with voids of 10 Å² on average. However, the presence of two interpenetrated networks results in a significant reduction of the available volume. Structural analysis was carried out by means of single-crystal X-ray diffraction and IR spectroscopy.
Thermal and magnetic properties have been measured by means of thermogravimetry (TG), X-ray thermodiffractometry (TDX), and electron paramagnetic resonance (EPR). Additionally, CO2 and CH4 high pressure adsorption measurements have been carried out for this compound.

Keywords: gas adsorption, interpenetrated networks, magnetic measurements, solid coordination network (SCN), thermal stability

Procedia PDF Downloads 296
103 Assessment of the Performance of the Sonoreactors Operated at Different Ultrasound Frequencies, to Remove Pollutants from Aqueous Media

Authors: Gabriela Rivadeneyra-Romero, Claudia del C. Gutierrez Torres, Sergio A. Martinez-Delgadillo, Victor X. Mendoza-Escamilla, Alejandro Alonzo-Garcia

Abstract:

Ultrasonic degradation is currently being used in sonochemical reactors to degrade pollutant compounds in aqueous media, such as emerging contaminants (e.g., pharmaceuticals, drugs, and personal care products), because of the ecological impacts they can have on the environment. For this reason, it is important to develop appropriate water and wastewater treatments able to reduce pollution and increase reuse. Pollutants such as textile dyes, aromatic and phenolic compounds, chlorobenzene, bisphenol-A, carboxylic acids, and other organic pollutants can be removed from wastewaters by sonochemical oxidation. The removal of pollutants depends on the ultrasonic frequency used; however, few studies have addressed the behavior of the fluid inside sonoreactors operated at different ultrasonic frequencies. Based on the above, it is necessary to study the hydrodynamic behavior of the liquid generated by the ultrasonic irradiation in order to design efficient sonoreactors that reduce treatment times and costs. In this work, the hydrodynamic behavior of the fluid in sonochemical reactors was studied at different frequencies (250 kHz, 500 kHz, and 1000 kHz). The performance of the sonoreactors at those frequencies was simulated using computational fluid dynamics (CFD). Because there is a large sound-speed gradient between the piezoelectric element and the fluid, k-ε models were used. The piezoelectric was defined as a vibrating surface in order to evaluate the effect of the different frequencies on the fluid inside the sonochemical reactor. Structured hexahedral cells were used to mesh the computational liquid domain, and fine triangular cells were used to mesh the piezoelectric transducers. Unsteady-state conditions were used in the solver. The dissipation rate, flow field velocities, Reynolds stresses, and turbulent quantities were estimated by CFD and 2D-PIV measurements.
Test results show that there is no necessary correlation between an increase in ultrasonic frequency and pollutant degradation; moreover, the reactor geometry and power density are important factors that should be considered in sonochemical reactor design.

Keywords: CFD, reactor, ultrasound, wastewater

Procedia PDF Downloads 171
102 Study of the Kinetics of Formation of Carboxylic Acids Using Ion Chromatography during Oxidation Induced by Rancimat of the Oleic Acid, Linoleic Acid, Linolenic Acid, and Biodiesel

Authors: Patrícia T. Souza, Marina Ansolin, Eduardo A. C. Batista, Antonio J. A. Meirelles, Matthieu Tubino

Abstract:

Lipid oxidation is a major cause of the deterioration of biodiesel quality, because the waste it generates damages engines. Among the main undesirable effects are increases in viscosity and acidity, leading to the formation of insoluble gums and sediments that block fuel filters. Auto-oxidation is defined as the spontaneous reaction of atmospheric oxygen with lipids. Unsaturated fatty acids are usually the components affected by such reactions; they are present as free fatty acids, fatty esters, and glycerides. To determine the oxidative stability of biodiesels through the induction period (IP), the Rancimat method is used, which allows continuous monitoring of the induced oxidation process of the samples. During the oxidation of the lipids, volatile organic acids are produced as byproducts; in addition, other byproducts, including alcohols and carbonyl compounds, may be further oxidized to carboxylic acids. By the methodology developed in this work using ion chromatography (IC), organic anions of carboxylic acids were quantified, by analyzing the water contained in the conductimetric vessel, in samples subjected to oxidation induced by Rancimat. The optimized chromatographic conditions were: eluent water:acetone (80:20 v/v) with 0.5 mM sulfuric acid; flow rate 0.4 mL min-1; injection volume 20 µL; eluent suppressor 20 mM LiCl; analytical curve from 1 to 400 ppm. The samples studied were methyl biodiesel from soybean oil and unsaturated fatty acid standards: oleic, linoleic, and linolenic. The induced oxidation kinetics curves were constructed by analyzing the water contained in the conductimetric vessels, each of which was removed from the Rancimat apparatus at prefixed intervals of time. About 3 g of sample were used under the conditions of 110 °C and an air flow rate of 10 L h-1.
The water of each conductimetric Rancimat measuring vessel, where the volatile compounds were collected, was filtered through a 0.45 µm filter and analyzed by IC. From the kinetic data of the formation of the organic anions of carboxylic acids, their formation rates were calculated. The observed order of the rates of formation of the anions was: formate >>> acetate > hexanoate > valerate for oleic acid; formate > hexanoate > acetate > valerate for linoleic acid; and formate >>> valerate > acetate > propionate > butyrate for linolenic acid. It can be supposed that propionate and butyrate are obtained mainly from linolenic acid and that hexanoate originates from oleic and linoleic acids. For the methyl biodiesel, the order of formation of anions was: formate >>> acetate > valerate > hexanoate > propionate. According to the total rates of formation of the anions produced during induced degradation, the fatty acids can be assigned the order of reactivity: linolenic acid > linoleic acid >>> oleic acid.
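The formation rates described above are, in effect, slopes of anion concentration versus induction time. A minimal sketch of that calculation with ordinary least squares (the data points below are hypothetical, not the study's measurements):

```python
def formation_rate(times_h, conc_ppm):
    """Least-squares slope of concentration vs. time, i.e. the
    average formation rate of an anion in ppm per hour."""
    n = len(times_h)
    mean_t = sum(times_h) / n
    mean_c = sum(conc_ppm) / n
    num = sum((t - mean_t) * (c - mean_c) for t, c in zip(times_h, conc_ppm))
    den = sum((t - mean_t) ** 2 for t in times_h)
    return num / den

# Hypothetical formate readings (ppm) sampled every 2 h from the
# water of a conductimetric vessel during a Rancimat run
rate = formation_rate([0, 2, 4, 6, 8], [0.0, 41.0, 80.0, 121.0, 160.0])
```

The same slope computation, applied per anion and per substrate, yields the rate orderings quoted in the abstract.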

Keywords: anions of carboxylic acids, biodiesel, ion chromatography, oxidation

Procedia PDF Downloads 443
101 Life Cycle Assessment of Biogas Energy Production from a Small-Scale Wastewater Treatment Plant in Central Mexico

Authors: Joel Bonales, Venecia Solorzano, Carlos Garcia

Abstract:

A great percentage of the wastewater generated in developing countries receives no treatment, which leads to numerous environmental impacts. In response to this, a paradigm change has been proposed in the current wastewater treatment model, from one based on large-scale plants towards a model based on small and medium scales. Nevertheless, small-scale wastewater treatment (SS-WWTP) with novel technologies such as anaerobic digesters, as well as the utilization of derivative co-products such as biogas, still presents diverse environmental impacts which must be assessed. This study consisted of a Life Cycle Assessment (LCA) performed on a SS-WWTP which treats wastewater from a small commercial block in the city of Morelia, Mexico. The treatment performed in the SS-WWTP consists of anaerobic and aerobic digesters with a daily capacity of 5,040 L. Two different scenarios were analyzed: the current plant conditions and a hypothetical energy use of biogas obtained in situ. Furthermore, two different allocation criteria were applied: full impact allocation to the system's main product (treated water), and substitution credits for replacing Mexican grid electricity (biogas) and clean-water pumping (treated water). The results showed that the analyzed plant had larger impacts than those reported in the literature on the basis of wastewater volume treated, which may imply that this plant is currently operating inefficiently. The evaluated impacts were concentrated in the aerobic digestion and electric generation phases due to the plant's particular configuration. Additional findings show that the allocation criteria applied are crucial for the interpretation of impacts and that the energy use of the biogas obtained in this plant can help mitigate associated climate change impacts. It is concluded that SS-WWTP is an environmentally sound alternative for wastewater treatment from a systemic perspective.
However, this type of study must be careful in the selection of the allocation criteria and replaced products, since these factors have a great influence on the results of the assessment.
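The substitution-credit allocation discussed above amounts to subtracting, from the gross impact, the impacts of the products the system displaces. A minimal arithmetic sketch (the emission factors below are illustrative placeholders, not the study's inventory data):

```python
def net_impact(gross_kgco2e, kwh_biogas, kg_water_pumped,
               grid_factor=0.5, pumping_factor=0.0003):
    """Net climate-change impact under substitution allocation:
    gross impact minus credits for displaced grid electricity and
    displaced clean-water pumping. grid_factor is kg CO2e per kWh,
    pumping_factor is kg CO2e per kg of water (hypothetical values)."""
    credit = kwh_biogas * grid_factor + kg_water_pumped * pumping_factor
    return gross_kgco2e - credit

# e.g. 120 kg CO2e gross, 100 kWh of biogas electricity, 10,000 kg of water
net = net_impact(120.0, 100.0, 10_000.0)
```

The sign and size of the credit terms are exactly why the abstract stresses that the chosen allocation criteria and replaced products drive the conclusions.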

Keywords: biogas, life cycle assessment, small scale treatment, wastewater treatment

Procedia PDF Downloads 100
100 Incidence of Lymphoma and Gonorrhea Infection: A Retrospective Study

Authors: Diya Kohli, Amalia Ardeljan, Lexi Frankel, Jose Garcia, Lokesh Manjani, Omar Rashid

Abstract:

Gonorrhea is the second most common sexually transmitted disease (STD) in the United States of America. Gonorrhea affects the urethra, rectum, or throat, and the cervix in females. Lymphoma is a cancer of the immune network called the lymphatic system, which includes the lymph nodes/glands, spleen, thymus gland, and bone marrow; it can affect many organs in the body. When a lymphocyte develops a genetic mutation, it signals other cells into rapid proliferation, producing many mutated lymphocytes. Multiple studies have explored the incidence of cancer in people infected with STDs such as Gonorrhea. For instance, the studies conducted by Wang Y-C and colleagues, as well as Caini S and colleagues, established a direct correlation between Gonorrhea infection and the incidence of prostate cancer. We hypothesized that Gonorrhea infection also increases the incidence of Lymphoma in patients. This research study aimed to evaluate the correlation between Gonorrhea infection and the incidence of Lymphoma. The data for the research were provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database, which was used to compare patients infected with Gonorrhea against uninfected patients to establish a correlation with the prevalence of Lymphoma using ICD-10 and ICD-9 codes. Access to the database was granted by Holy Cross Health, Fort Lauderdale, for academic research. Standard statistical methods were applied throughout. The query covered January 2010 to December 2019 and resulted in 254 and 808 Lymphoma patients in the infected and control groups, respectively. The two groups were matched by age range and CCI score. The incidence of Lymphoma was 0.998% (254 patients out of 25,455) in the Gonorrhea group (patients infected with Gonorrhea who were positive for Lymphoma) compared to 3.174% (808 patients out of 25,455) in the control group (patients negative for Gonorrhea but with Lymphoma).
This was statistically significant with a p-value < 2.2×10⁻¹⁶ and an OR = 0.431 (95% CI 0.381-0.487). The patients were then matched by antibiotic treatment to avoid treatment bias. The incidence of Lymphoma was 1.215% (82 patients out of 6,748) in the Gonorrhea group compared to 2.949% (199 patients out of 6,748) in the control group. This was statistically significant with a p-value < 5.4×10⁻¹⁰ and an OR = 0.468 (95% CI 0.367-0.596). The study shows a statistically significant correlation between Gonorrhea and a reduced incidence of Lymphoma. Further evaluation is recommended to assess the potential of Gonorrhea in reducing Lymphoma.
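The odds ratios and confidence intervals above follow from a standard 2×2 contingency-table calculation with a log-normal approximation for the CI; a sketch (the counts below are hypothetical, not the study's exact table):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] (exposed cases,
    exposed non-cases, unexposed cases, unexposed non-cases) with a
    95% CI from the standard error of the log-odds ratio."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10 cases among 100 exposed, 20 among 100 unexposed
or_, lo, hi = odds_ratio_ci(10, 90, 20, 80)
```

An OR below 1 with a CI that excludes 1, as reported in the abstract, is what supports the "reduced incidence" reading.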

Keywords: gonorrhea, lymphoma, STDs, cancer, ICD

Procedia PDF Downloads 173
99 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate, using computational fluid dynamics (CFD) simulations, that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on the use of time-changing skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating, and that a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the Improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology, and telemetry data. This method only uses as inputs the temperature and pressure readings of sensors externally attached to the tank. These sensors can operate during the nominal thermal duty cycle. The Improved PVT method shows little sensitivity to the pressure sensor drifts, which are critical towards the end of life of the missions, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in the gas and liquid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load.
This gauging method improves by a factor of 8 the accuracy of the standard PVT retrievals using look-up tables with tabulated data from the National Institute of Standards and Technology.
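A standard PVT retrieval, the baseline the abstract improves upon, estimates mass from tank pressure, volume, and temperature through a real-gas equation of state. A minimal sketch with a supplied compressibility factor Z (the tank volume and Z value below are illustrative assumptions; near the critical point Z must come from tabulated EOS data, e.g. NIST tables, not from the ideal-gas law):

```python
R = 8.314462618  # J/(mol*K), universal gas constant

def pvt_mass(pressure_pa, volume_m3, temp_k, molar_mass_kg, z=1.0):
    """Estimate propellant mass from tank pressure, volume and temperature
    via m = P*V*M / (Z*R*T). z = 1.0 corresponds to an ideal gas; dense
    Xe or CO2 requires tabulated compressibility data."""
    moles = pressure_pa * volume_m3 / (z * R * temp_k)
    return moles * molar_mass_kg

# Illustrative: 50 L tank of xenon (M = 0.131293 kg/mol) at 70 bar, 313 K,
# with an assumed compressibility z = 0.55 (placeholder, not tabulated data)
m = pvt_mass(70e5, 0.050, 313.0, 0.131293, z=0.55)
```

The sensitivity of this estimate to errors in P and T is precisely why the abstract emphasizes robustness to sensor drift.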

Keywords: electric propulsion, mass gauging, propellant, PVT, xenon

Procedia PDF Downloads 322
98 Modelling Tyre Rubber Materials for High Frequency FE Analysis

Authors: Bharath Anantharamaiah, Tomas Bouda, Elke Deckers, Stijn Jonckheere, Wim Desmet, Juan J. Garcia

Abstract:

Automotive tyres have recently been gaining importance in terms of their noise emission, not only with respect to noise reduction but also to noise perception and detection. Tyres exhibit a mechanical noise generation mechanism up to 1 kHz. However, because a tyre is a composite of several materials, it has been difficult to model it using finite elements to predict noise at high frequencies. The currently available FE models are reliable up to about 500 Hz, a limit that is not enough to capture the roughness or sharpness of tyre noise. These noise components are important in order to alert pedestrians on the street to slowly passing vehicles, especially electric ones. In order to model tyre noise behaviour up to 1 kHz, the tyre's dynamic behaviour must be accurately modelled up to that limit using finite elements. Materials play a vital role in modelling the dynamic tyre behaviour precisely. Since a tyre is a composition of several components, their precise definition in finite element simulations is necessary. However, during the tyre manufacturing process, these components are subjected to various pressures and temperatures, due to which their properties can change. Hence, material definitions are better described based on the tyre responses. In this work, the hyperelasticity of the tyre component rubbers is calibrated, using the design of experiments technique, from the tyre characteristic responses measured on a stiffness measurement machine. The viscoelasticity of the rubbers is defined by Prony series, which are determined from the loss factor relationship between the loss and storage moduli, assuming that the rubbers are excited within the linear viscoelastic range. These values of loss factor are measured and theoretically expressed as a function of rubber shore hardness or hyperelasticity.
The results show a good correlation between the test and simulation vibrational transfer functions up to 1 kHz. The model also allows flexibility, i.e., the frequency limit can be extended, if required, by calibrating the Prony parameters of the rubbers corresponding to the frequency of interest. As future work, these tyre models will be used for noise generation at high frequencies and thus for tyre noise perception.
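The loss-factor relationship used to fit the Prony series above can be written out directly: for Prony terms (g_i, τ_i) on top of a long-term modulus, the storage and loss moduli at angular frequency ω follow the standard generalized-Maxwell expressions, and the loss factor is tan δ = G''/G'. A minimal sketch (the single-term coefficients below are illustrative, not the calibrated tyre values):

```python
def moduli(omega, g_inf, prony):
    """Storage and loss moduli of a Prony-series viscoelastic model.
    prony is a list of (g_i, tau_i) pairs; g_inf is the long-term modulus."""
    g_storage = g_inf + sum(g * (omega * tau) ** 2 / (1 + (omega * tau) ** 2)
                            for g, tau in prony)
    g_loss = sum(g * (omega * tau) / (1 + (omega * tau) ** 2)
                 for g, tau in prony)
    return g_storage, g_loss

def loss_factor(omega, g_inf, prony):
    """tan(delta) = G'' / G' at angular frequency omega."""
    gs, gl = moduli(omega, g_inf, prony)
    return gl / gs

# Illustrative single-term rubber: G_inf = 1.0 MPa, g_1 = 0.5 MPa, tau_1 = 1 ms;
# evaluated at omega = 1000 rad/s so that omega * tau = 1 (peak of the term)
eta = loss_factor(1000.0, 1.0, [(0.5, 1e-3)])
```

Inverting this relationship, i.e. choosing (g_i, τ_i) so the computed tan δ matches measured loss factors, is the calibration step the abstract describes.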

Keywords: tyre dynamics, rubber materials, prony series, hyperelasticity

Procedia PDF Downloads 168
97 Effect of Phytohormones on the Development and Nutraceutical Characteristics of the Fruit Capsicum annuum

Authors: Rossy G. Olan Villegas, Gerardo Acosta Garcia, Aurea Bernardino Nicanor, Leopoldo Gonzalez Cruz, Humberto Ramirez Medina

Abstract:

Capsicum annuum is a crop of agricultural and economic importance in Mexico and other countries. The fruit (pepper) contains bioactive components such as carotenoids, phenolic compounds, and capsaicinoids that improve health. However, pepper cultivation is affected by biotic and abiotic factors that decrease yield. Some phytohormones, like gibberellins and auxins, induce the formation and development of fruit in several plants. In this study, we evaluated the effect of the exogenous application of phytohormones, gibberellic acid (GA3) and indolebutyric acid, on the fruit development of jalapeño pepper plants, the protein profile of plant tissues, the accumulation of bioactive compounds, and the antioxidant activity in the pericarp and seeds. For that, plants were sprayed with these phytohormones. The fruit yield for the control, indolebutyric acid, and gibberellic acid treatments was 7 peppers per plant; however, for the treatment combining indolebutyric acid and gibberellic acid, the fruit collected had the shortest length (1.52 ± 1.00 cm) and lowest weight (0.41 ± 1.0 g) compared to fruits of plants grown under the other treatments. The length (4.179 ± 0.130 cm) and weight (8.949 ± 0.583 g) of the fruit increased in plants treated with indolebutyric acid, but these characteristics decreased with the application of GA3 (length of 3.349 ± 0.127 cm and weight of 4.429 ± 0.144 g). The content of carotenes and phenolic compounds increased in plants treated with GA3 (1.733 ± 0.092 and 1.449 ± 0.009 mg/g, respectively) or indolebutyric acid (1.164 ± 0.042 and 0.970 ± 0.003 mg/g). However, this effect was not observed in plants treated with both phytohormones (0.238 ± 0.021 and 0.218 ± 0.004 mg/g). Capsaicin content was higher in all treatments, but most noticeably in plants treated with both phytohormones, with a value of 0.913 ± 0.001 mg/g (a three-fold increase).
The antioxidant activity was measured by three different assays, 2,2-diphenyl-1-picrylhydrazyl (DPPH), ferric reducing antioxidant power (FRAP), and 2,2'-azinobis-3-ethylbenzothiazoline-6-sulfonic acid (ABTS), to find the half-maximal inhibitory and effective concentrations (IC50 and EC50). Significant differences were observed upon application of the phytohormones, with gibberellin-treated fruits showing the greatest accumulation of bioactive compounds. Our results suggest that the application of phytohormones modifies the development of the fruit and its content of bioactive compounds.
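IC50/EC50 values in assays such as DPPH are commonly read off by interpolating the measured inhibition curve at 50%; a minimal sketch using linear interpolation between the two bracketing dose points (the dose-response data below are hypothetical, not the study's measurements):

```python
def ic50(concentrations, inhibition_pct):
    """Linearly interpolate the concentration giving 50% inhibition.
    Points must be sorted by concentration, with inhibition increasing."""
    pairs = list(zip(concentrations, inhibition_pct))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical DPPH dose-response (mg/mL vs. % radical inhibition)
value = ic50([0.1, 0.2, 0.4, 0.8], [15.0, 30.0, 60.0, 90.0])
```

In practice a sigmoidal fit is often preferred over linear interpolation, but the bracketing logic is the same.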

Keywords: auxins, capsaicinoids, carotenoids, gibberellins

Procedia PDF Downloads 91
96 Determination of the Structural Parameters of Calcium Phosphate for Biomedical Use

Authors: María Magdalena Méndez-González, Miguel García Rocha, Carlos Manuel Yermo De la Cruz

Abstract:

Calcium phosphate (Ca5(PO4)3(X)) is widely used in orthopedic applications, typically as powder and granules. However, its presence in bone is in the form of nanometric needles 60 nm in length, with a non-stoichiometric apatite phase containing CO3-2, Na+, OH-, F-, and other ions in a matrix of collagen fibers. Crystal size, morphology control, and interaction with cells are essential for the development of nanotechnology. The structural results of calcium phosphate synthesized by chemical precipitation, with a crystal size of 22.85 nm, are presented in this paper. The calcium phosphate powders were analyzed by X-ray diffraction, energy dispersive spectroscopy (EDS), FT-IR infrared spectroscopy, and transmission electron microscopy. Lattice parameters, atomic positions, the indexing of the planes, and the calculation of the FWHM (full width at half maximum) were obtained. The crystal size was also calculated using the Scherrer equation d(hkl) = cλ/(β cos θ), where c is a constant related to the shape of the crystal, λ is the wavelength of the radiation used (1.54060 Å for a copper anode), θ is the Bragg diffraction angle, and β is the full width at half maximum of the most intense peak. A diffraction pattern corresponding to the hydroxyapatite phase of calcium phosphate, with a hexagonal crystal system, was obtained. It belongs to the space group P63/m with lattice parameters a = 9.4394 Å and c = 6.8861 Å. The most intense peak is obtained at 2θ = 31.55° (FWHM = 0.4798), with a preferred orientation in (121). The intensity difference between the experimental data and the calculated values is attributable to the temperature at which the sintering was performed. The intensity of the highest peak is at the angle 2θ = 32.11°. The structure of the calcium phosphate obtained was a hexagonal configuration. The intensity changes in the peaks of the diffraction pattern and in the lattice parameters indicate the possible presence of a dopant.
Infrared spectra showed that each calcium atom is surrounded by a tetrahedron of oxygen and hydrogen atoms. The unit cell corresponds to hydroxyapatite, and transmission electron microscopy revealed a crystal morphology corresponding to the hexagonal phase with preferential growth along the c-plane.
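The Scherrer estimate above can be reproduced numerically from the quoted Cu Kα wavelength and peak parameters. A sketch (the shape factor c = 0.9 is an assumption; the paper's reported 22.85 nm presumably reflects a different shape factor and/or instrumental-broadening corrections not modeled here):

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.154060, c=0.9):
    """Crystallite size d = c * lambda / (beta * cos(theta)), where beta is
    the peak FWHM converted to radians and theta is half the 2-theta angle."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return c * wavelength_nm / (beta * math.cos(theta))

# Most intense peak reported in the abstract: 2-theta = 31.55 deg, FWHM = 0.4798 deg
d = scherrer_size_nm(31.55, 0.4798)
```

Note that β must be instrument-corrected (e.g. by subtracting the instrumental broadening in quadrature) before the formula gives a physically meaningful size.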

Keywords: structure, nanoparticles, calcium phosphate, metallurgical and materials engineering

Procedia PDF Downloads 478
95 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the acquisition, processing, and analysis of any type of information, especially information obtained through digital images. Currently, artificial vision is used in manufacturing areas for quality control and production, as these processes can be realized through algorithms for counting, positioning, and recognition of objects that can be measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators on them for moving pieces from one location to another in their production. These devices must be programmed beforehand for good performance and must have a programmed logic routine. Nowadays, the main targets of every industry are production, quality, and the fast execution of the different stages and processes in the chain of production of any product or service being offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each figure with a different color, and to link it with a group of conveyor systems to sort the mentioned figures into cubicles, which also differ from one another by color. This project is based on artificial vision, therefore the methodology needed to develop it must be strict; it is detailed below: 1. Methodology: 1.1 The software used in this project is Qt Creator, which is linked with OpenCV libraries. Together, these tools are used to build the program that identifies colors and forms directly from the camera on the computer. 1.2 Image acquisition: To start using the OpenCV libraries it is necessary to acquire images, which can be captured by a computer's web camera or by a different specialized camera.
1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect forms it is necessary to segment the images: the first step is converting the image from RGB to grayscale, to work with the dark tones of the image; then the image is binarized, which means rendering the figure in the image in white on a black background. Finally, the contours of the figure in the image are found in order to count the number of edges and identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which, through the actuators, classify the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera obtains external characteristics for processing. With the program from this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate.
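The contour-edge and color tests in steps 1.3-1.4 reduce to simple decision rules once OpenCV has supplied the pixel data. A stdlib-only sketch of those rules (in the real pipeline, cv2.approxPolyDP on a contour would supply the vertex count and cv2.mean the average BGR value; the thresholds here are illustrative):

```python
def classify_figure(vertex_count):
    """Map the vertex count of an approximated contour to a figure name.
    A contour approximated with many vertices is treated as a circle."""
    if vertex_count == 3:
        return "triangle"
    if vertex_count == 4:
        return "square"
    if vertex_count > 8:
        return "circle"
    return "unknown"

def dominant_primary(mean_bgr):
    """Pick the dominant primary color from a mean pixel value.
    OpenCV stores channels in B, G, R order."""
    b, g, r = mean_bgr
    return max((("blue", b), ("green", g), ("red", r)), key=lambda kv: kv[1])[0]

figure = classify_figure(4)              # a quadrilateral contour -> square
color = dominant_primary((12, 30, 210))  # red channel dominates
```

The (figure, color) pair is then all the conveyor-control logic in step 1.5 needs to route a piece to its cubicle.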

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 358
94 Mortar Positioning Effects on Uniaxial Compression Behavior in Hollow Concrete Block Masonry

Authors: José Álvarez Pérez, Ramón García Cedeño, Gerardo Fajardo-San Miguel, Jorge H. Chávez Gómez, Franco A. Carpio Santamaría, Milena Mesa Lavista

Abstract:

The uniaxial compressive strength and modulus of elasticity of hollow concrete block masonry (HCBM) represent key mechanical properties for structural design considerations. These properties are obtained through experimental tests conducted on prisms or wallettes and depend on various factors, with the HCB contributing significantly to overall strength. One influential factor in the compressive behaviour of masonry is the thickness and method of mortar placement. Mexican regulations stipulate mortar placement over the entire net area (full-shell) for strength computation based on the gross area. However, in professional practice, there is a growing trend to place mortar solely on the lateral faces. Conversely, the United States standard dictates mortar placement and computation over the net area of the HCB. The Canadian standard specifies mortar placement solely on the lateral faces (Face-Shell Bedding), where computation necessitates the use of the effective load area, corresponding to the mortar's placement area. This research aims to evaluate the influence of different mortar placement methods on the axial compression behaviour of HCBM. To achieve this, an experimental campaign was conducted, including: (1) 10 HCB specimens with mortar on the entire net area, (2) 10 HCB specimens with mortar placed on the lateral faces, (3) 10 prisms of 2-course HCB under axial compression with full-shell mortar, (4) 10 prisms of 2-course HCB under axial compression with face-shell bedding, (5) 10 prisms of 3-course HCB under axial compression with full-shell mortar, (6) 10 prisms of 3-course HCB under axial compression with face-shell bedding, (7) 10 prisms of 4-course HCB under axial compression with full-shell mortar, and (8) 10 prisms of 4-course HCB under axial compression with face-shell bedding.
A combination of sulphur and fly ash in a 2:1 ratio was used for the capping material, meeting the average compressive strength requirement of over 35 MPa as per NMX-C-036 standards. Additionally, a mortar with a strength of over 17 MPa was utilized for the prisms. The results indicate that prisms with mortar placed over the full-shell exhibit higher strength compared to those with mortar over the face-shell-bedding. However, the elastic modulus was lower for prisms with mortar placement over the full-shell compared to face-shell bedding.
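The three standards cited differ mainly in which cross-sectional area divides the failure load, which is why the same test can yield very different nominal strengths. A minimal sketch of that computation (the block dimensions, net/gross ratio, face-shell thickness, and failure load below are illustrative assumptions, not the test specimens'):

```python
def compressive_strength_mpa(load_kn, area_mm2):
    """Compressive strength = failure load / chosen reference area.
    kN over mm^2 times 1000 gives N/mm^2, i.e. MPa."""
    return load_kn * 1000 / area_mm2

# Illustrative 390 x 190 mm hollow block
gross = 390 * 190            # full external footprint, mm^2 (Mexican convention)
net = 0.55 * gross           # hypothetical ~55% solid fraction (US convention)
face_shell = 2 * 390 * 32    # two 32 mm mortared face shells (Canadian convention)

load = 400.0  # kN at failure (hypothetical)
f_gross = compressive_strength_mpa(load, gross)
f_net = compressive_strength_mpa(load, net)
f_face = compressive_strength_mpa(load, face_shell)
```

Since gross > net > face-shell area, the same failure load reports the lowest nominal strength under the gross-area convention and the highest under face-shell bedding, which must be kept in mind when comparing results across standards.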

Keywords: masonry, hollow concrete blocks, mortar placement, prisms tests

Procedia PDF Downloads 30
93 Performance and Specific Emissions of an SI Engine Using Anhydrous Ethanol–Gasoline Blends in the City of Bogota

Authors: Alexander García Mariaca, Rodrigo Morillo Castaño, Juan Rolón Ríos

Abstract:

The government of Colombia has promoted the use of biofuels over the last 20 years through laws and resolutions that regulate their use, with the objective of improving atmospheric air quality and promoting the Colombian agricultural industry. However, despite the use of blends of biofuels with fossil fuels, the air quality in large cities has not improved. This deterioration is mainly caused by mobile sources that work with spark-ignition internal combustion engines (SI-ICE) operating with a blend of 90% gasoline and 10% ethanol by volume, called E10, which in the case of Bogota represents 84% of the fleet. Another problem is that Colombia has large cities located above 2200 masl, and there are no accurate studies on the impact that the E10 blend could have on the emissions and performance of SI-ICE. This study aims to establish the blend of gasoline and ethanol at which an SI engine operates most efficiently in urban centres located at 2600 masl. The tests were conducted on a four-stroke, single-cylinder, naturally aspirated SI engine with carburettor fuel supply, using blends of gasoline and anhydrous ethanol in different ratios: E10, E15, E20, E40, E60, E85, and E100. These tests were conducted in the city of Bogota, which is located at 2600 masl, with the engine operating at 3600 rpm and at 25, 50, 75, and 100% of load. The results show that the performance variables, engine brake torque, brake power, and brake thermal efficiency, decrease, while brake specific fuel consumption increases, as the percentage of ethanol in the blend rises. On the other hand, the specific emissions of CO2 and NOx increase, while the specific emissions of CO and HC decrease, compared to those produced by gasoline.
From the tests, it is concluded that the SI-ICE worked most efficiently with the E40 blend, with which an increase in brake power of 8.81% and a reduction in brake specific fuel consumption of 2.5% were obtained, coupled with reductions in the specific emissions of CO2, HC, and CO of 9.72, 52.88, and 76.66%, respectively, compared to the results obtained with the E10 blend. This behaviour occurs because the E40 blend provides the appropriate amount of oxygen for the combustion process, which leads to better utilization of the energy available in this process, thus generating a power output comparable to that of the E10 blend while producing lower CO and HC emissions than the other test blends. Nevertheless, the emission of NOx increased by 106.25%.
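The performance variables compared above follow from standard dynamometer relations: brake power from torque and speed, brake specific fuel consumption (BSFC) from fuel flow over power, and brake thermal efficiency from BSFC and the fuel's heating value. A sketch (the operating-point values and the assumed blend heating value below are illustrative):

```python
import math

def brake_power_kw(torque_nm, rpm):
    """P = 2*pi*N*T with N in rev/s; result in kW."""
    return 2 * math.pi * (rpm / 60) * torque_nm / 1000

def bsfc_g_per_kwh(fuel_flow_g_per_h, power_kw):
    """Brake specific fuel consumption: fuel mass flow per unit brake power."""
    return fuel_flow_g_per_h / power_kw

def brake_thermal_eff(bsfc_g_per_kwh_val, lhv_mj_per_kg):
    """eta = (3.6 MJ delivered per kWh) / (fuel energy consumed per kWh)."""
    return 3600.0 / (bsfc_g_per_kwh_val * lhv_mj_per_kg)

# Illustrative point at 3600 rpm: 10 N*m of torque, 1200 g/h of fuel,
# assuming an E40-like lower heating value of ~35 MJ/kg
p = brake_power_kw(10.0, 3600)
bsfc = bsfc_g_per_kwh(1200.0, p)
eta = brake_thermal_eff(bsfc, 35.0)
```

These relations also explain the trade-off reported: ethanol's lower heating value raises BSFC even when thermal efficiency and power are comparable.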

Keywords: emissions, ethanol, gasoline, engine, performance

Procedia PDF Downloads 305
92 Phage Display-Derived Vaccine Candidates for Control of Bovine Anaplasmosis

Authors: Itzel Amaro-Estrada, Eduardo Vergara-Rivera, Virginia Juarez-Flores, Mayra Cobaxin-Cardenas, Rosa Estela Quiroz, Jesus F. Preciado, Sergio Rodriguez-Camarillo

Abstract:

Bovine anaplasmosis is an infectious, tick-borne disease caused mainly by Anaplasma marginale; typical signs include anemia, fever, abortion, weight loss, decreased milk production, jaundice, and potentially death. Sick cattle can recover when antibiotics are administered; however, they usually remain carriers for life, posing a risk of infection to susceptible cattle. Anaplasma marginale is an obligate intracellular Gram-negative bacterium whose genetic composition is highly diverse among geographical isolates. There are currently no fully effective vaccines against bovine anaplasmosis, and the disease therefore continues to cause economic losses. Vaccine formulation is a hard task for several pathogens such as Anaplasma marginale, but peptide-based vaccines are an interesting way to induce specific responses. Phage-displayed peptide libraries have proven to be one of the most powerful technologies for identifying specific ligands. Screening of these peptide libraries is also a tool for studying interactions between proteins or peptides. Thus, it has allowed the identification of ligands recognized by polyclonal antisera, and it has been successful in the identification of relevant epitopes in chronic diseases and toxicological conditions. The protective immune response to bovine anaplasmosis includes high levels of immunoglobulins of subclass G2 (IgG2) but not of subclass IgG1. Therefore, IgG2 from the serum of protected bovines can be useful to identify ligands, which can be part of an immunogen for cattle. In this work, the phage display random peptide library Ph.D.™-12 was incubated with IgG2 or blood sera of bovines immunized against A. marginale as targets. After three rounds of biopanning, several candidates were selected for additional analysis. Subsequently, their reactivity with sera immunized against A. marginale, as well as with sera positive and negative for A. marginale, was evaluated by immunoassays.
A collection of recognized peptides tested by ELISA was generated. More than three hundred phage-peptides were separately evaluated against the molecules used during panning. At least ten different peptide sequences were determined from their nucleotide composition. In this approach, three phage-peptides were selected for their binding and affinity properties. For the development of vaccines or diagnostic reagents, it is important to evaluate the immunogenic and antigenic properties of the peptides. The in vitro and in vivo immunogenic behavior of the peptides, both in synthetic form and as phage-peptides, will be assayed to determine their vaccine potential. Acknowledgment: This work was supported by grant SEP-CONACYT 252577 given to I. Amaro-Estrada.

Keywords: bovine anaplasmosis, peptides, phage display, veterinary vaccines

Procedia PDF Downloads 116
91 Inverted Geometry Ceramic Insulators in High Voltage Direct Current Electron Guns for Accelerators

Authors: C. Hernandez-Garcia, P. Adderley, D. Bullard, J. Grames, M. A. Mamun, G. Palacios-Serrano, M. Poelker, M. Stutzman, R. Suleiman, Y. Wang, S. Zhang

Abstract:

High-energy nuclear physics experiments performed at the Jefferson Lab (JLab) Continuous Electron Beam Accelerator Facility require a beam of spin-polarized ps-long electron bunches. The electron beam is generated when a circularly polarized laser beam illuminates a GaAs semiconductor photocathode biased at hundreds of kV dc inside an ultra-high vacuum chamber. The photocathode is mounted on highly polished stainless steel electrodes electrically isolated by means of a conical-shaped ceramic insulator that extends into the vacuum chamber, serving as the cathode electrode support structure. The assembly is known as a dc photogun, which has to simultaneously meet the following criteria: high voltage to manage space charge forces within the electron bunch, ultra-high vacuum conditions to preserve the photocathode quantum efficiency, no field emission to prevent the gas load produced when field-emitted electrons impact the vacuum chamber, and finally no voltage breakdown for robust operation. Over the past decade, JLab has tested and implemented the use of inverted geometry ceramic insulators connected to commercial high voltage cables to operate a photogun at 200 kV dc with a 10 cm long insulator, and a larger version at 300 kV dc with a 20 cm long insulator. Plans to develop a third photogun operating at 400 kV dc to meet the stringent requirements of the proposed International Linear Collider are underway at JLab, utilizing even larger inverted insulators. This contribution describes approaches that have been successful in solving challenging problems related to breakdown and field emission, such as triple-point junction screening electrodes, mechanical polishing to achieve a mirror-like surface finish, and high voltage conditioning procedures with Kr gas to extinguish field emission.

Keywords: electron guns, high voltage techniques, insulators, vacuum insulation

Procedia PDF Downloads 93
90 Using Scilab® as New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)

Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni

Abstract:

Faced with the remarkable developments in the various segments of modern engineering brought about by increasing technological progress, professionals in all educational areas need to help those starting their academic journey overcome the difficulties of gaining a good understanding of the subject. With this aim, this article provides an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation together with a detailed explanation of a fundamental numerical solution using the finite difference method in SCILAB, a free software package easily accessible to any research center or university, in developed and developing countries alike. Computational Fluid Dynamics (CFD) is a necessary tool for engineers and professionals who study fluid mechanics; however, the teaching of this area of knowledge in undergraduate programs faces difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the subject is often treated only in postgraduate courses. This work aims to bring low-cost CFD to the undergraduate teaching of Transport Phenomena by analyzing a small classic problem of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory, involving the partial differential equation governing the heat transfer problem, which students must master; the discretization process, based on the principles of the Taylor series expansion, generates a system of equations whose convergence can be checked using the Sassenfeld criterion and which is finally solved by the Gauss-Seidel method. 
In this work, we demonstrated both simple problems solved manually and complex problems that required computer implementation, for which we used a small algorithm of fewer than 200 lines in Scilab® in a heat transfer study of a rectangular plate heated on its four sides with a different temperature on each side, producing a two-dimensional transport simulation with colored graphics. With the spread of computer technology, numerous programs have emerged that demand great programming skill from researchers. Considering that this need to program CFD is the main obstacle to be overcome, both by students and by researchers, we present in this article a suggestion for the use of programs with less complex interfaces, thus reducing the difficulty of producing graphical modeling and simulation for CFD and extending programming experience to undergraduates.
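As a rough illustration of the numerical scheme this abstract describes (a finite-difference discretization of the heated rectangular plate, solved by Gauss-Seidel iteration), the following is a minimal sketch in Python rather than Scilab®; the grid size, boundary temperatures, and convergence tolerance are illustrative assumptions, not values from the study:

```python
# Steady-state 2D heat conduction on a rectangular plate (Laplace equation),
# discretized with finite differences and solved by Gauss-Seidel iteration.
# Grid size, edge temperatures, and tolerance are illustrative assumptions.

def solve_plate(n=20, t_top=100.0, t_bottom=0.0, t_left=75.0, t_right=50.0,
                tol=1e-4, max_iter=10000):
    # Initialize grid: interior starts at 0, edges hold fixed temperatures.
    T = [[0.0] * n for _ in range(n)]
    for j in range(n):
        T[0][j] = t_top
        T[n - 1][j] = t_bottom
    for i in range(n):
        T[i][0] = t_left
        T[i][n - 1] = t_right

    for it in range(max_iter):
        max_diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Five-point finite-difference stencil for the Laplace equation.
                new = 0.25 * (T[i + 1][j] + T[i - 1][j]
                              + T[i][j + 1] + T[i][j - 1])
                max_diff = max(max_diff, abs(new - T[i][j]))
                T[i][j] = new  # Gauss-Seidel: reuse updated values immediately
        if max_diff < tol:
            return T, it + 1
    return T, max_iter

T, iterations = solve_plate()
print(f"converged after {iterations} iterations; center T = {T[10][10]:.2f}")
```

Each interior node is repeatedly replaced by the average of its four neighbors until the largest update falls below the tolerance, mirroring the sub-200-line Scilab® approach the authors describe.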

Keywords: numerical methods, finite difference method, heat transfer, Scilab

Procedia PDF Downloads 351
89 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified foods must meet the demand of the population, taking into account the population's habits and the risks these foods may pose. Wheat and its by-products, such as semolina, have been strongly indicated for use as a food vehicle since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Conventional methods of analysis and quantification of these components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements and yields large amounts of data. NIR spectroscopy therefore requires calibration with mathematical and statistical tools (chemometrics), such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), to extract analytical information from the corresponding spectra. PCA is well suited to NIR, since it can handle many spectra at a time and can be used for unsupervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectra to a smaller number of latent variables for further interpretation. LDA, on the other hand, is a supervised method that searches for the Canonical Variables (CVs) with the maximum separation among different categories; the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable near-infrared (NIR) spectrometer for the identification and classification of pure and fiber-fortified semolina samples. 
The fiber was added to the semolina in two different concentrations, and after spectra acquisition, the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA ranged between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
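The PCA + LDA pipeline the abstract describes can be sketched as follows. This is not the authors' code or data: the "spectra" below are synthetic stand-ins generated to mimic three sample classes (pure semolina and two fiber concentrations), and all parameter values are illustrative assumptions.

```python
# Sketch of a PCA + LDA chemometrics pipeline on synthetic stand-ins for
# NIR spectra (the study's real data are not available here).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_per_class, n_wavelengths = 40, 200

# Simulate three classes: a common baseline spectrum plus a class-specific
# absorption bump (standing in for the fiber signal) and random noise.
wavelengths = np.linspace(0, 1, n_wavelengths)
spectra, labels = [], []
for cls, bump_height in enumerate([0.0, 0.3, 0.6]):
    bump = bump_height * np.exp(-((wavelengths - 0.5) ** 2) / 0.01)
    for _ in range(n_per_class):
        spectra.append(1.0 + bump + 0.05 * rng.standard_normal(n_wavelengths))
        labels.append(cls)
X, y = np.array(spectra), np.array(labels)

# PCA compresses each spectrum to a few latent variables; LDA then finds
# the canonical variables that best separate the classes.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validation accuracy: {scores.mean():.2f}")
```

Chaining PCA before LDA, as the abstract suggests, also sidesteps the ill-conditioning LDA suffers when the number of wavelengths exceeds the number of samples.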

Keywords: Chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 188
88 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gases Emission

Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires

Abstract:

Historically, human development has been based on economic gains associated with energy-intensive activities, which are often heavy emitters of Greenhouse Gases (GHGs). This requires the establishment of GHG mitigation targets in order to decouple human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is critically important to discuss such reductions in an intra-national framework with the objective of distributional equity, exploring the country's full mitigation potential without compromising the development of less developed societies. This research presents some initial considerations about which of Brazil's micro-regions should reduce emissions, when the reductions should begin, and what their magnitude should be. We start from the methodological assumption that human development and GHG emissions will evolve in the future as they behaved in the past. Furthermore, we assume that once a micro-region has become developed, it is able to maintain gains in human development without the need to keep growing its GHG emission rates. The Human Development Index and carbon dioxide equivalent emissions (CO2e) were extrapolated to the year 2050, which allowed us to calculate when each micro-region will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they develop and 250 Gt after development. We also determined national mitigation targets and structured reduction schemes in which only developed micro-regions would be required to reduce. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting in 2050 90% less than the value observed in 2010. 
On the other hand, less developed micro-regions will be responsible for less demanding reductions; for example, Vale do Ipanema will emit in 2050 only 10% less than the value observed in 2010. This methodological assumption would lead the country to emit, in 2050, 56.5% less than observed in 2010, so that cumulative emissions between 2011 and 2050 would fall by 130 Gt CO2e relative to the initial projection. Associating the magnitude of the reductions with the level of human development of the micro-regions encourages the adoption of policies that favor both variables, since the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than to the one in which it mitigates emissions. The research highlighted the importance of considering heterogeneity when determining individual mitigation targets and also confirmed the theoretical and methodological feasibility of allocating a larger share of the contribution to those who have historically emitted more. The proposals and discussions presented here should be considered in the formulation of mitigation policy in Brazil, regardless of the reduction target adopted.
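The development-linked allocation rule can be illustrated numerically. The abstract specifies only the two endpoints (a 90% cut for the most developed micro-region and a 10% cut for the least developed); the linear HDI-to-cut mapping, the HDI range, and the emission figures below are all hypothetical assumptions introduced purely for illustration.

```python
# Illustrative sketch of a development-linked burden allocation: the more
# developed a micro-region, the larger its required 2050 emission cut.
# Only the 90% / 10% endpoints come from the abstract; the linear mapping,
# HDI bounds, and emission numbers are made-up examples.

def reduction_share(hdi, hdi_min=0.5, hdi_max=0.9,
                    cut_min=0.10, cut_max=0.90):
    """Linearly map a micro-region's HDI to its required emission cut."""
    frac = (hdi - hdi_min) / (hdi_max - hdi_min)
    frac = min(max(frac, 0.0), 1.0)  # clamp to [0, 1]
    return cut_min + frac * (cut_max - cut_min)

def target_2050(emissions_2010, hdi):
    """2050 emission target relative to the 2010 baseline."""
    return emissions_2010 * (1.0 - reduction_share(hdi))

# Hypothetical micro-regions: (name, 2010 emissions in Mt CO2e, HDI).
regions = [("Sao Paulo", 120.0, 0.90), ("Vale do Ipanema", 5.0, 0.50)]
for name, e2010, hdi in regions:
    print(f"{name}: cut {reduction_share(hdi):.0%}, "
          f"2050 target {target_2050(e2010, hdi):.1f} Mt CO2e")
```

Under this sketch, the most developed region carries the 90% cut and the least developed only 10%, reproducing the distributional-equity logic the abstract argues for.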

Keywords: greenhouse gases, human development, mitigation, intensive energy activities

Procedia PDF Downloads 297
87 Accessible Mobile Augmented Reality App for Art Social Learning Based on Technology Acceptance Model

Authors: Covadonga Rodrigo, Felipe Alvarez Arrieta, Ana Garcia Serrano

Abstract:

Mobile augmented reality technologies have become very popular in the educational field in recent years. Researchers have studied how these technologies improve student engagement and understanding of the learning process, but few studies have addressed the accessibility of these new technologies as applied to the digital humanities. The goal of our research is to develop an accessible mobile application with embedded augmented reality characters drawn from the artwork and gamification events accompanied by multi-sensory activities. The mobile app conducts a learning itinerary around the artistic work, guiding the user experience both inside and outside the museum. The learning design follows the inquiry-based methodology, with social learning conducted through interaction with social networks. The software application is being designed in a user-centered way, following the Universal Design for Learning (UDL) principles to ensure the best level of accessibility for all. The mobile augmented reality application starts by recognizing a marker on a masterpiece in a museum using the camera of the mobile device. The augmented reality information (history, author, 3D images, audio, quizzes) is shown through virtual characters that emerge from the artwork. To comply with the UDL principles, we use a version of the Technology Acceptance Model (TAM) to study ease of use and perceived usefulness, extended by the authors with specific indicators for measuring accessibility issues. Following a rapid prototyping method for development, the first app has recently been produced, fulfilling the EN 301 549 standard and the W3C accessibility guidelines for mobile development. 
A TAM-based web questionnaire with 214 participants with different kinds of disabilities was previously conducted to gather information and feedback on user preferences regarding the artistic works of the Museo del Prado, the level of acceptance of technological innovations, and the ease of use of mobile elements. Preliminary results show that people with disabilities felt very comfortable using mobile apps and an internet connection. The augmented reality elements seem to offer added value that is highly engaging and motivating for students.

Keywords: H.5.1 (multimedia information systems), artificial, augmented and virtual realities, evaluation/methodology

Procedia PDF Downloads 105