Search results for: Hugo Alberto Herrera Fonseca
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 283

73 DIF-JACKET: a Thermal Protective Jacket for Firefighters

Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves

Abstract:

Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although thermal protective clothing research and development has been seeking solutions to minimize firefighters' heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire-extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics while, at the same time, complying with the International Standard for protective clothing for firefighters (laboratory test methods and performance requirements for wildland firefighting clothing). The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time that the firefighter can be exposed to it. According to the project's studies and developments, to favor a higher use of the PCM storage capacity and to take advantage of its high thermal inertia more efficiently, the PCM layer should be placed closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest, and abdomen) and to be worn over a fire-resistant jacket was envisaged. Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistances. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.

Keywords: firefighters, multilayer system, phase change material, thermal protective clothing

Procedia PDF Downloads 163
72 Fatigue Influence on the Residual Stress State in Shot Peened Duplex Stainless Steel

Authors: P. D. Pedrosa, J. M. A. Rebello, M. P. Cindra Fonseca

Abstract:

Duplex stainless steels (DSS) exhibit a biphasic microstructure consisting of austenite and delta ferrite. Their high resistance to oxidation and corrosion, even in H2S-containing environments, allied to low cost when compared to conventional stainless steels, are some of the properties that make this material very attractive for several industrial applications. However, many of these applications impose cyclic loading on the equipment, and in consequence fatigue damage becomes a concern. A well-known way of improving the fatigue life of a component is to introduce compressive residual stress at its surface. Shot peening is an industrial working process that puts the material directly beneath the component surface into a highly compressive mechanical state, thereby inhibiting fatigue crack initiation. However, one must take into account that the cyclic loading itself can reduce and even suppress these residual stresses, with undesirable consequences for the strategy of improving fatigue life through compressive residual stresses. In the present work, shot peening was used to introduce residual stresses in several DSS samples. These were thereafter submitted to three different fatigue regimes: low-, medium-, and high-cycle fatigue. The evolution of the residual stress during loading was then examined on both the surface and subsurface of the samples. The DSS UNS S31803 was used, with a microstructure composed of 49% austenite and 51% ferrite. Shot peening was accomplished by blasting at two Almen intensities, 0.25 A and 0.39 A. The residual stresses were measured by X-ray diffraction using the double-exposure method and portable equipment with Cr Kα radiation, using the (211) diffracting plane for the austenite phase and the (220) plane for the ferrite phase. It is known that residual stresses may arise when two regions of the same material experience different degrees of plastic deformation. When these regions are separated from each other on a scale that is large compared to the material's microstructure, the stresses are called macrostresses. In contrast, microstresses can vary largely over distances that are small compared to the scale of the material's microstructure and must balance to zero between the phases present. In the present work, special attention is paid to the measurement of residual microstresses. Residual stress measurements were carried out on test pieces submitted to low-, medium-, and high-cycle fatigue, in both the longitudinal and transverse directions of the test pieces. It was found that after shot peening, the residual microstress is tensile in the austenite phase and compressive in the ferrite phase. It was hypothesized that the hardening behavior of the austenite after shot peening was probably due to its higher nitrogen content. Fatigue cycling can effectively change this stress state, but this effect was found to be dependent on the shot peening intensity as well as on the fatigue range.
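The phase-balance condition described above (volume-weighted microstresses summing to approximately zero across the phases) can be illustrated with a minimal sketch. The phase fractions are those reported for DSS UNS S31803; the stress magnitudes are hypothetical illustrations, not the paper's measurements.

```python
# Phase-balance check for residual microstresses in a duplex steel.
# Phase fractions are those reported for DSS UNS S31803 (49% austenite,
# 51% ferrite); the stress values below are hypothetical.

def microstress_balance(frac_a, stress_a, frac_f, stress_f):
    """Volume-weighted sum of phase microstresses (MPa).

    For pure microstresses this sum should be approximately zero:
    tension in one phase is balanced by compression in the other.
    """
    return frac_a * stress_a + frac_f * stress_f

# Tensile microstress in austenite, compressive in ferrite (the sign
# pattern observed after shot peening); magnitudes chosen to balance.
sigma_austenite = 153.0   # MPa, hypothetical
sigma_ferrite = -147.0    # MPa, hypothetical
residual = microstress_balance(0.49, sigma_austenite, 0.51, sigma_ferrite)
print(f"weighted sum: {residual:.1f} MPa")  # ≈ 0 for balanced microstresses
```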

Keywords: residual stresses, fatigue, duplex steel, shot peening

Procedia PDF Downloads 228
71 Gendered Experiences of the Urban Space in India as Portrayed by Hindi Cinema: A Quantitative Analysis

Authors: Hugo Ribadeau Dumas

Abstract:

In India, cities represent intense battlefields where patriarchal norms are simultaneously defied and reinforced. While Indian metropolises have witnessed numerous initiatives in which women boldly claimed their right to the city, urban spaces still remain disproportionately unfriendly to female city-dwellers. As a result, the presence of strees (women, in Hindi) in the streets remains a socially and politically potent phenomenon. This paper explores how, in India, women engage with the city as compared to men. Borrowing analytical tools from urban geography, it uses Hindi cinema as a medium to map the extent to which activities, attitudes, and experiences in urban spaces are gendered. The sample consists of 30 movies, both mainstream and independent, released between 2010 and 2020, set in an urban environment, and comprising at least one pivotal female character. The paper adopts a quantitative approach: the scrutiny of close to 3,000 minutes of footage, the labeling and time count of every scene, and the computation of regressions to identify statistical relationships between characters and the way they navigate the city. According to the analysis, female characters spend half as much time in public space as their male counterparts. When they do step out, women do so mostly for utilitarian reasons; conversely, in private spaces or in pseudo-public commercial places, like malls, they indulge in fun activities. For male characters, the pattern is the exact opposite: fun takes place in public and serious work in private. The characters' attitudes in the streets are also greatly gendered: men spend a significant amount of time immobile, loitering, while women are usually on the move, displaying some sense of purpose. Likewise, body language and emotional expressiveness betray differentiated gender scripts: while women wander the streets either smiling (in a charming role) or with a hostile face (in a defensive mode), men are more likely to adopt neutral facial expressions. These trends were observed across all movies, although some nuances were identified depending on the character's age group, social background, and city, highlighting that the urban experience is not the same for all women. The empirical evidence presented in this study is helpful for reflecting on the meaning of public space in the context of contemporary Indian cities. The paper ends with a discussion of the link between universal access to public spaces and women's empowerment.
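The labeling-and-time-count step described above can be sketched as follows: each scene is tagged with the character's gender, the type of space, and its duration, and screen-time shares are aggregated per group. The scene records below are invented examples, not data from the 30-film sample.

```python
# Minimal sketch of the scene-labeling approach: tag each scene, then
# aggregate minutes per (gender, space) pair. All records are invented.
from collections import defaultdict

scenes = [
    {"gender": "F", "space": "public",  "minutes": 4},
    {"gender": "F", "space": "private", "minutes": 10},
    {"gender": "M", "space": "public",  "minutes": 9},
    {"gender": "M", "space": "private", "minutes": 5},
]

def time_by(scenes, key):
    """Total minutes per (gender, value-of-key) combination."""
    totals = defaultdict(int)
    for s in scenes:
        totals[(s["gender"], s[key])] += s["minutes"]
    return dict(totals)

totals = time_by(scenes, "space")
f_total = totals[("F", "public")] + totals[("F", "private")]
f_public_share = totals[("F", "public")] / f_total
print(f"female public-space share: {f_public_share:.0%}")  # → 29%
```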

Keywords: cinema, Indian cities, public space, women empowerment

Procedia PDF Downloads 156
70 The Elimination of Fossil Fuel Subsidies from the Road Transportation Sector and the Promotion of Electro Mobility: The Ecuadorian Case

Authors: Henry Acurio, Alvaro Corral, Juan Fonseca

Abstract:

In Ecuador, subsidies on fossil fuels for the road transportation sector have always been part of the economy, mainly because of demagogy and populism from political leaders. It is now clear that the government cannot maintain the subsidies any longer, given its trade balance and general state budget; subsidies are also a key barrier to implementing cleaner technologies. Over the last few months, however, the elimination of subsidies has proceeded gradually with the purpose of reaching international prices. It is expected that with this measure the population will opt for other means of transportation, and that it will, in a certain way, promote the use of electric vehicles, both private and public, e.g., taxis and buses (urban transport). Considering the three main elements of sustainable development, an analysis of the social, economic, and environmental impacts of eliminating subsidies will be generated at the country level. To achieve this, four scenarios will be developed in order to determine how removing the subsidies will contribute to the promotion of electro-mobility: 1) a Business as Usual (BAU) scenario; 2) the introduction of 10,000 electric vehicles by 2025; 3) the introduction of 100,000 electric vehicles by 2030; 4) the introduction of 750,000 electric vehicles by 2040 (in all scenarios, buses, taxis, light-duty vehicles, and private vehicles will be introduced, as established in the National Electro Mobility Strategy for Ecuador). The Low Emissions Analysis Platform (LEAP) will be used, as it is suitable for determining the cost to the government of importing fossil fuel derivatives and the cost of the electricity needed to power the electric fleet. The elimination of subsidies generates fiscal resources for the state that can be used to develop other kinds of projects that will benefit Ecuadorian society. It will definitely change the energy matrix and provide energy security for the country; it will be an opportunity for the government to incentivize a greater introduction of renewable energies, e.g., solar, wind, and geothermal. At the same time, it will reduce greenhouse gas (GHG) emissions from the transportation sector, considering its mitigation potential, which will improve inhabitants' quality of life by improving air quality, thereby reducing respiratory diseases associated with exhaust emissions and contributing to sustainability, the Sustainable Development Goals (SDGs), and compliance with the agreements established at COP 21 under the 2015 Paris Agreement. Electro-mobility in Latin America and the Caribbean can only be achieved through the implementation of the right policies by the central government, accompanied by a National Urban Mobility Policy (NUMP) encompassing a greater vision to develop holistic, sustainable transport systems in local governments.
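The kind of scenario comparison the study proposes can be sketched very roughly: for each EV-uptake scenario, the avoided fuel-import cost is set against the added electricity cost. The per-vehicle unit costs below are hypothetical placeholders, not LEAP outputs or Ecuadorian data; the fleet sizes are the four scenarios named in the abstract.

```python
# Rough sketch of an avoided-fuel-cost vs. electricity-cost comparison
# across the four EV scenarios. Unit costs are hypothetical placeholders.

FUEL_COST_PER_EV_YEAR = 1200.0  # USD of fuel imports avoided per EV-year (hypothetical)
ELEC_COST_PER_EV_YEAR = 350.0   # USD of electricity per EV-year (hypothetical)

scenarios = {
    "BAU": 0,
    "2025": 10_000,
    "2030": 100_000,
    "2040": 750_000,
}

def net_annual_saving(n_evs):
    """Avoided fuel-import cost minus electricity cost, in USD per year."""
    return n_evs * (FUEL_COST_PER_EV_YEAR - ELEC_COST_PER_EV_YEAR)

for name, n in scenarios.items():
    print(f"{name}: net saving {net_annual_saving(n) / 1e6:.1f} M USD/year")
```

A full LEAP run would of course add fuel-price trajectories, fleet turnover, and emission factors; this only shows the shape of the accounting.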

Keywords: electro mobility, energy, policy, sustainable transportation

Procedia PDF Downloads 81
69 Effective Doping Engineering of Na₃V₂(PO₄)₂F₃ as a High-Performance Cathode Material for Sodium-Ion Batteries

Authors: Ramon Alberto Paredes Camacho, Li Lu

Abstract:

Sustainable batteries are possible through the development of cheaper and greener alternatives, the most feasible of which is epitomized by Sodium-Ion Batteries (SIBs). Na₃V₂(PO₄)₂F₃ (NVPF), an important member of the Na-superionic-conductor (NASICON) family, has recently been in the spotlight due to its interesting electrochemical properties when used as a cathode, namely a high specific capacity of 128 mA h g⁻¹, a high energy density of 507 W h kg⁻¹, an elevated working potential at which the vanadium redox couples can be activated (with an average value around 3.9 V), and a small volume variation of less than 2%. These traits grant NVPF an excellent perspective as a cathode material for the next generation of sodium batteries. Unfortunately, because of its low inherent electrical conductivity and a high energy barrier that impedes the mobilization of all the available Na ions per formula unit, the overall electrochemical performance suffers substantial degradation, ultimately obstructing its industrial use. Many approaches have been developed to remedy these issues, of which nanostructural design, carbon coating, and ion doping are the most effective. This investigation focuses on enhancing the electrochemical response of NVPF by doping metal ions into the crystal lattice, substituting vanadium atoms. A facile sol-gel process is employed, with citric acid as the chelator and carbon source. The optimized conditions circumvent fluorine sublimation, ensuring the material's purity. One of the reasons behind the large ionic improvement is the attraction of extra Na ions into the crystalline structure due to a charge imbalance produced by the valence of the doped ions (+2), which is lower than that of vanadium (+3). Superior stability (higher than 90% at a current density of 20C) and capacity retention at an extremely high current density of 50C are demonstrated by our doped NVPF. The material also retains high capacity values at low and high temperatures. In addition, a full NVPF//hard carbon cell shows high capacity and stability between -20 and 60 °C. Our doping strategy proves to significantly increase the ionic and electronic conductivity of NVPF even under extreme conditions, delivering outstanding electrochemical performance and paving the way for advanced high-potential cathode materials.
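The figures quoted above are mutually consistent: gravimetric energy density is specific capacity times average voltage, since mA h g⁻¹ × V = mW h g⁻¹ = W h kg⁻¹. The short check below only restates the abstract's own numbers; no new data are introduced.

```python
# Consistency check: energy density (W h kg⁻¹) = specific capacity
# (mA h g⁻¹) × average voltage (V). Values are those quoted for NVPF.

def energy_density(capacity_mah_g, avg_voltage_v):
    """Gravimetric energy density in W h kg⁻¹."""
    return capacity_mah_g * avg_voltage_v

capacity = 128.0  # mA h g⁻¹, as reported
implied_voltage = 507.0 / capacity
print(f"implied average voltage: {implied_voltage:.2f} V")  # → 3.96 V
print(f"energy at 3.9 V: {energy_density(capacity, 3.9):.0f} W h kg⁻¹")  # → 499
```

The implied 3.96 V agrees with the "around 3.9 V" average working potential stated in the abstract.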

Keywords: sodium-ion batteries, cathode materials, NASICON, Na₃V₂(PO₄)₂F₃, ion doping

Procedia PDF Downloads 57
68 Structural and Morphological Characterization of the Biomass of the Aquatic Macrophyte (Egeria densa) Submitted to Thermal Pretreatment

Authors: Joyce Cruz Ferraz Dutra, Marcele Fonseca Passos, Rubens Maciel Filho, Douglas Fernandes Barbin, Gustavo Mockaitis

Abstract:

The search for alternatives to combat world hunger has generated a major environmental problem. Intensive fish production systems can cause an imbalance in the aquatic environment, triggering the phenomenon of eutrophication. There are currently many ways of controlling the growth of aquatic plants, such as mechanical removal; however, some difficulties arise concerning their final disposal. Egeria densa is a species of submerged aquatic macrophyte rich in cellulose and with low concentrations of lignin. Applying the concept of second-generation energy, which uses lignocellulose for energy production, the reuse of these aquatic macrophytes (Egeria densa) in biofuel production can become an interesting alternative. In order to make lignocellulosic sugars available for effective fermentation, it is important to use pretreatments to separate the components and modify the structure of the cellulose, thus facilitating the attack of the microorganisms responsible for fermentation. Therefore, the objective of this work was to evaluate the structural and morphological transformations occurring in the biomass of aquatic macrophytes (E. densa) submitted to a thermal pretreatment. The samples were collected in an intensive fish-growing farm in the lower São Francisco dam, in the northeastern region of Brazil. After collection, the samples were dried in a ventilation oven at 65 °C and milled in a 5 mm knife mill. A duplicate assay was carried out, comparing the in natura biomass with the thermally pretreated biomass (MT). The MT sample was autoclaved at a temperature of 121 °C and a pressure of 1.1 atm for 30 minutes. After this procedure, the biomass was characterized in terms of degree of crystallinity and morphology, using X-ray diffraction (XRD) and scanning electron microscopy (SEM), respectively. The results showed a decrease of 11% in the crystallinity index (CI%) of the pretreated biomass, indicating structural modification of the cellulose and a greater presence of amorphous structures. Increases in the porosity and surface roughness of the samples were also observed. These results suggest that the biomass may become more accessible to the hydrolytic enzymes of fermenting microorganisms. Therefore, the morphological transformations caused by the thermal pretreatment may favor subsequent fermentation and, consequently, a higher yield of biofuels. Thus, the use of thermally pretreated aquatic macrophytes (E. densa) can be an environmentally, financially, and socially sustainable alternative. In addition, it represents a control measure for the aquatic environment that can generate income (biogas production) and sustain fish farming activities in local communities.
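A common way of estimating a cellulose crystallinity index from XRD data is the Segal peak-height method; the abstract does not state which CI method was used, so this is an assumption, and the peak intensities below are hypothetical values chosen only to illustrate an 11-point drop.

```python
# Segal peak-height estimate of cellulose crystallinity from XRD:
# CI% = (I_002 - I_am) / I_002 * 100, where I_002 is the (002) crystalline
# peak intensity and I_am the amorphous minimum. Intensities are hypothetical,
# and the Segal method itself is an assumption, not stated in the abstract.

def segal_ci(i_002, i_am):
    """Crystallinity index in percent (Segal peak-height method)."""
    return (i_002 - i_am) / i_002 * 100.0

ci_raw = segal_ci(1000.0, 400.0)        # in natura biomass → 60.0 %
ci_pretreated = segal_ci(900.0, 459.0)  # thermally pretreated → 49.0 %
print(f"decrease: {ci_raw - ci_pretreated:.0f} percentage points")  # → 11
```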

Keywords: aquatic macrophytes, biofuels, crystallinity, morphology, thermal pretreatment

Procedia PDF Downloads 330
67 The Involvement of the Homing Receptors CCR7 and CD62L in the Pathogenesis of Graft-Versus-Host Disease

Authors: Federico Herrera, Valle Gomez García de Soria, Itxaso Portero Sainz, Carlos Fernández Arandojo, Mercedes Royg, Ana Marcos Jimenez, Anna Kreutzman, Cecilia Muñoz Calleja

Abstract:

Introduction: Graft-versus-host disease (GVHD) remains the major complication associated with allogeneic stem cell transplantation (SCT). Its pathogenesis involves the migration of donor naïve T-cells into the recipient's secondary lymphoid organs. Two molecules are important in this process: CD62L and CCR7, which are characteristically expressed on naïve/central memory T-cells. With this background, we aimed to study the influence of CCR7 and CD62L on donor lymphocytes in the development and severity of GVHD. Material and methods: This single-center study included 98 donor-recipient pairs. Samples were collected prospectively from the apheresis product and phenotyped by flow cytometry. CCR7 and CD62L expression on CD4+ and CD8+ T-cells was compared between patients who developed acute (n=40) or chronic GVHD (n=33) and those who did not (n=38). Results: Patients who developed acute GVHD were transplanted with a higher percentage of CCR7+CD4+ T-cells (p=0.05) compared to the no-GVHD group. These results were confirmed when patients were stratified by disease severity: the more severe the disease, the higher the percentage of CCR7+CD4+ T-cells. Conversely, chronic GVHD patients received a higher percentage of CCR7+CD8+ T-cells (p=0.02) than those who did not develop the complication. These data were also confirmed when patients were stratified by disease severity. A multivariable analysis confirmed that the percentage of CCR7+CD4+ T-cells is a predictive factor for acute GVHD, whereas the percentage of CCR7+CD8+ T-cells is a predictive factor for chronic GVHD. In vitro functional assays (migration and activation) supported the idea that CCR7+ T-cells are involved in the development of GVHD. As low levels of CD62L expression were detected in all apheresis products, we tested the hypothesis that CD62L is shed during the apheresis procedure. Comparing CD62L surface levels on T-cells from the same donor immediately before and after collection of the apheresis product, we found that the process down-regulated CD62L on both CD4+ and CD8+ T-cells (p=0.008). Interestingly, when CD62L levels were analysed at days 30 or 60 after engraftment, they had recovered to baseline (p=0.008). However, when the relation between CD62L expression and the development of GVHD was investigated in recipient samples after engraftment, no differences were observed between patients with GVHD and those who did not develop the disease. Discussion: Our prospective study indicates that the CCR7+ T-cells from the donor, which include naïve and central memory T-cells, contain the alloreactive cells with a high ability to mediate GVHD (in terms of both migration and activation). We therefore suggest that the proportion and functional properties of CCR7+CD4+ and CCR7+CD8+ T-cells in the apheresis product could act as predictive biomarkers of acute and chronic GVHD, respectively. Importantly, our study shows that CD62L is shed during apheresis and is therefore not a reliable biomarker for the development of GVHD.

Keywords: CCR7, CD62L, GVHD, SCT

Procedia PDF Downloads 287
66 Quercetin Nanoparticles and Their Hypoglycemic Effect in a CD1 Mouse Model with Type 2 Diabetes Induced by Streptozotocin and a High-Fat and High-Sugar Diet

Authors: Adriana Garcia-Gurrola, Carlos Adrian Peña Natividad, Ana Laura Martinez Martinez, Alberto Abraham Escobar Puentes, Estefania Ochoa Ruiz, Aracely Serrano Medina, Abraham Wall Medrano, Simon Yobanny Reyes Lopez

Abstract:

Type 2 diabetes mellitus (T2DM) is a metabolic disease characterized by elevated blood glucose levels. Quercetin is a natural flavonoid with a hypoglycemic effect, but the reported data are inconsistent, due mainly to the structural instability and low solubility of quercetin. Nanoencapsulation is a distinct strategy to overcome these intrinsic limitations. Therefore, this work aims to develop a quercetin nanoformulation based on biopolymeric starch nanoparticles to enhance the release and hypoglycemic effect of quercetin in a T2DM-induced mouse model. Starch-quercetin nanoparticles were synthesized using high-intensity ultrasonication, and their structural and colloidal properties were determined by FTIR and DLS. For the in vivo studies, CD1 male mice (n=25) were divided into five groups (n=5). T2DM was induced using a high-fat and high-sugar diet for 32 weeks and streptozotocin injection. Group 1 consisted of healthy mice fed a normal diet with water ad libitum; Group 2 were diabetic mice treated with saline solution; Group 3 were diabetic mice treated with glibenclamide; Group 4 were diabetic mice treated with empty nanoparticles; and Group 5 were diabetic mice treated with quercetin nanoparticles. Quercetin nanoparticles had a hydrodynamic size of 232 ± 88.45 nm, a PDI of 0.310 ± 0.04, and a zeta potential of -4 ± 0.85 mV. The encapsulation efficiency of the nanoparticles was 58 ± 3.33%. No significant differences (p > 0.05) were observed in biochemical parameters (lipids, insulin, and C-peptide). Groups 3 and 5 showed a similar hypoglycemic effect, but quercetin nanoparticles showed a longer-lasting effect. Histopathological studies revealed that the T2DM mouse groups showed degenerated and fatty liver tissue; however, the group treated with quercetin nanoparticles showed liver tissue similar to that of the healthy mouse group. These results demonstrate that quercetin nanoformulations based on starch nanoparticles are effective alternatives with hypoglycemic effects.
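Encapsulation efficiency for nanoparticle systems is commonly computed as the fraction of the drug that ends up entrapped rather than free in solution; the abstract reports 58 ± 3.33%. The masses below are hypothetical values chosen only to reproduce that figure, and the exact assay used in the study is not stated.

```python
# Encapsulation efficiency as commonly defined for nanoparticle systems:
# EE% = (total drug added - free, unencapsulated drug) / total * 100.
# The masses are hypothetical, chosen to reproduce the reported 58%.

def encapsulation_efficiency(total_mg, free_mg):
    """Encapsulation efficiency in percent."""
    return (total_mg - free_mg) / total_mg * 100.0

ee = encapsulation_efficiency(total_mg=10.0, free_mg=4.2)
print(f"EE = {ee:.0f}%")  # → EE = 58%
```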

Keywords: quercetin, type 2 diabetes mellitus, in vivo study, nanoparticles

Procedia PDF Downloads 33
65 Role of Lipid-Lowering Treatment in the Monocyte Phenotype and Chemokine Receptor Levels after Acute Myocardial Infarction

Authors: Carolina N. França, Jônatas B. do Amaral, Maria C.O. Izar, Ighor L. Teixeira, Francisco A. Fonseca

Abstract:

Introduction: Atherosclerosis is a progressive disease characterized by the deposition of lipids and fibrotic elements in large-caliber arteries. Conditions related to the development of atherosclerosis, such as dyslipidemia, hypertension, diabetes, and smoking, are associated with endothelial dysfunction. There is frequent recurrence of cardiovascular outcomes after acute myocardial infarction and, in this sense, cycles of mobilization of monocyte subtypes (classical, intermediate, and nonclassical) secondary to myocardial infarction may determine the colonization of atherosclerotic plaques at different stages of development, contributing to early recurrence of ischemic events. The recruitment of the different monocyte subsets during the inflammatory process requires the expression of the chemokine receptors CCR2, CCR5, and CX3CR1 to promote the migration of monocytes to the inflammatory site. The aim of this study was to evaluate the effect of six months of lipid-lowering treatment on the monocyte phenotype and chemokine receptor levels of patients after acute myocardial infarction (AMI). Methods: This is a PROBE (prospective, randomized, open-label trial with blinded endpoints) study (ClinicalTrials.gov identifier: NCT02428374). Adult patients (n=147) of both genders, aged 18-75 years, were randomized in a 2x2 factorial design to treatment with rosuvastatin 20 mg/day or simvastatin 40 mg/day plus ezetimibe 10 mg/day, and ticagrelor 90 mg twice daily or clopidogrel 75 mg/day, in addition to conventional AMI therapy. Blood samples were collected at baseline and after one and six months of treatment. Monocyte subtypes (classical - inflammatory; intermediate - phagocytic; nonclassical - anti-inflammatory) were identified, quantified, and characterized by flow cytometry, and the expression of the chemokine receptors (CCR2, CCR5, and CX3CR1) was also evaluated in the mononuclear cells. Results: After six months of treatment, there was an increase in the percentage of classical monocytes and a reduction in nonclassical monocytes (p=0.038 and p < 0.0001, Friedman test), with no differences for intermediate monocytes. In addition, classical monocytes had higher expression of CCR5 and CX3CR1 after treatment, with no difference for CCR2 (p < 0.0001 for CCR5 and CX3CR1; p=0.175 for CCR2). Intermediate monocytes had higher expression of CCR5 and CX3CR1 and lower expression of CCR2 (p=0.003, p < 0.0001, and p=0.011, respectively). Nonclassical monocytes had lower expression of CCR2 and CCR5, with no difference for CX3CR1 (p < 0.0001, p=0.009, and p=0.138, respectively). There were no differences between the four treatment arms. Conclusion: The data suggest a time-dependent modulation of classical and nonclassical monocytes and of chemokine receptor levels. The higher percentage of classical monocytes (inflammatory cells) suggests a residual inflammatory risk, even under the recommended treatments for AMI. Indeed, these changes do not seem to be affected by the choice of lipid-lowering strategy.

Keywords: acute myocardial infarction, chemokine receptors, lipid-lowering treatment, monocyte subtypes

Procedia PDF Downloads 119
64 Cloning and Expression of a Gene of β-Glucosidase from Penicillium echinulatum in Pichia pastoris

Authors: Amanda Gregorim Fernandes, Lorena Cardoso Cintra, Rosalia Santos Amorim Jesuino, Fabricia Paula De Faria, Marcio José Poças Fonseca

Abstract:

Bioethanol is one of the most promising biofuels, able to replace fossil fuels and reduce their various environmental impacts, and it can be generated from various agro-industrial wastes. Brazil ranks first in bioethanol production, being the largest producer of sugarcane. Sugarcane bagasse (SCB) contains lignocellulose, which is composed of three major components: cellulose, hemicellulose, and lignin. Cellulose is a homopolymer of glucose units connected by glycosidic linkages. Among the Penicillium species, Penicillium echinulatum has been the focus of attention because it produces high quantities of cellulase, and the mutant strain 9A02S1 produces higher enzyme levels than the wild type. Among the cellulases, the cellobiohydrolases are the main components of the cellulolytic system of fungi and are responsible for most of the hydrolytic potential of enzyme cocktails for the industrial processing of plant biomass, and several Penicillium cellobiohydrolases have shown higher specific activity against cellulose than CBH I from Trichoderma reesei. This makes the genus an interesting source of enzymes for higher yields in enzymatic hydrolysis, and these are also important enzymes in the hydrolysis of the crystalline regions of cellulose. Therefore, finding new and more active enzymes becomes necessary. Meanwhile, β-glucosidases act on soluble substrates and are highly dependent on the action of cellobiohydrolases and endoglucanases to provide their substrate during biomass hydrolysis, while cellobiohydrolases and endoglucanases are, in turn, highly dependent on β-glucosidases to maintain efficient hydrolysis. Thus, there is a need to understand the structure-function relationships that govern the catalytic activity of cellulolytic enzymes in order to elucidate their mechanism of action and optimize their potential as industrial biocatalysts. To evaluate the β-glucosidase of Penicillium echinulatum (PeBGL1), the gene was synthesized from the assembled sequence of a library obtained under induction conditions; the PeBGL1 gene was then cloned into the vector pPICZαA and transformed into P. pastoris GS115. After transformation, the PeBGL1 producers were analyzed for enzyme activity and protein profile, where a band of approximately 100 kDa was observed. A zymogram was also carried out. Partial characterization determined an optimum temperature of 50 °C and an optimum pH of 6.5. In addition, to increase the production of secreted recombinant PeBGL1 by Pichia pastoris, three culture parameters were analysed: methanol concentration, nitrogen source concentration, and inoculum size. A 2³ factorial design was effective in finding the optimum condition. Altogether, these results point to the potential application of this P. echinulatum β-glucosidase in the hydrolysis of cellulose for bioethanol production.
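A 2³ factorial design like the one mentioned above simply enumerates every combination of three factors at two levels each, giving eight runs. The sketch below uses the three culture parameters named in the abstract; the level values themselves are hypothetical placeholders, not the study's actual settings.

```python
# Enumerate a 2³ full factorial design: three factors, two levels each,
# 2 × 2 × 2 = 8 runs. Level values are hypothetical placeholders.
from itertools import product

factors = {
    "methanol_pct": (0.5, 1.0),   # % v/v, hypothetical levels
    "nitrogen_g_l": (5.0, 10.0),  # g/L, hypothetical levels
    "inoculum_od": (1.0, 2.0),    # initial optical density, hypothetical
}

# Each run pairs every factor name with one level from its tuple.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} runs")  # → 8 runs
for run in runs:
    print(run)
```

Responses (e.g., secreted PeBGL1 activity) measured over these eight runs are what the factorial analysis uses to locate the optimum condition.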

Keywords: bioethanol, biotechnology, beta-glucosidase, penicillium echinulatum

Procedia PDF Downloads 242
63 The Elimination of Fossil Fuel Subsidies from the Road Transportation Sector and the Promotion of Electro Mobility: The Ecuadorian Case

Authors: Henry Gonzalo Acurio Flores, Alvaro Nicolas Corral Naveda, Juan Francisco Fonseca Palacios

Abstract:

In Ecuador, subventions on fossil fuels for the road transportation sector have always been part of its economy throughout time, mainly because of demagogy and populism from political leaders. It is clearly seen that the government cannot maintain the subsidies anymore due to its commercial balance and its general state budget; subsidies are a key barrier to implementing the use of cleaner technologies. However, during the last few months, the elimination of subsidies has been done gradually with the purpose of reaching international prices. It is expected that with this measure, the population will opt for other means of transportation, and in a certain way, it will promote the use of private electric vehicles and public, e.g., taxis and buses (urban transport). Considering the three main elements of sustainable development, an analysis of the social, economic, and environmental impacts of eliminating subsidies will be generated at the country level. To achieve this, four scenarios will be developed in order to determine how the subsidies will contribute to the promotion of electro-mobility. 1) A Business as Usual BAU scenario; 2) the introduction of 10 000 electric vehicles by 2025; 3) the introduction of 100 000 electric vehicles by 2030; 4) the introduction of 750 000 electric vehicles by 2040 (for all the scenarios buses, taxis, lightweight duty vehicles, and private vehicles will be introduced, as it is established in the National Electro Mobility Strategy for Ecuador). The Low Emissions Analysis Platform (LEAP) will be used, and it will be suitable to determine the cost for the government in terms of importing derivatives for fossil fuels and the cost of electricity to power the electric fleet that can be changed. The elimination of subventions generates fiscal resources for the state that can be used to develop other kinds of projects that will benefit Ecuadorian society. 
It will definitely change the energy matrix and provide energy security for the country; it will be an opportunity for the government to incentivize a greater introduction of renewable energies, e.g., solar, wind, and geothermal. At the same time, it will reduce greenhouse gas (GHG) emissions from the transportation sector, considering its mitigation potential, which will in turn improve inhabitants' quality of life by improving air quality and thereby reducing respiratory diseases associated with exhaust emissions, contributing to sustainability, the Sustainable Development Goals (SDGs), and compliance with the agreements established in the Paris Agreement at COP 21 in 2015. Electro-mobility in Latin America and the Caribbean can only be achieved through the implementation of the right policies at the central government level, accompanied by a National Urban Mobility Policy (NUMP), which can provide a broader vision for developing holistic, sustainable transport systems at the local government level.

Keywords: electro mobility, energy, policy, sustainable transportation

Procedia PDF Downloads 84
62 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without handling the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations, the main issue being degraded DNA, which leads to poorer extraction efficiency and genotyping errors. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms stand out as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, empirical comparisons of their performance are still lacking. A comparison of methods across datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for gathering information, especially on endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed; this is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although its processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently across the datasets; there was a consensus between the estimators for only one dataset. BayesN produced both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, i.e., different capture rates between individuals. In those examples, tolerance for homogeneity seems crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems more appropriate for general use, considering the balance between time, interface, and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to both over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better across a wide range of situations.
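As a toy illustration of the error-tolerant matching idea discussed above (not the actual Cervus, Colony, or ETLM implementations), the following sketch declares two multilocus genotypes a recapture of the same individual when they disagree at no more than a tolerated number of loci; all locus names, alleles, and the threshold are invented:

```python
# Hypothetical sketch of error-tolerant genotype matching: two multilocus
# genotypes are treated as a recapture if they differ at no more than
# `max_mismatch` loci, tolerating errors such as allelic dropout.

def mismatches(g1, g2):
    """Count loci where the two genotypes carry different allele pairs.

    Each genotype is a dict {locus: frozenset of alleles}; loci with
    missing data (None) are skipped, mimicking degraded non-invasive DNA.
    """
    shared = [loc for loc in g1 if loc in g2
              and g1[loc] is not None and g2[loc] is not None]
    return sum(1 for loc in shared if g1[loc] != g2[loc])

def is_recapture(g1, g2, max_mismatch=1):
    return mismatches(g1, g2) <= max_mismatch

scat_a = {"L1": frozenset({120, 124}), "L2": frozenset({88, 90}), "L3": None}
scat_b = {"L1": frozenset({120, 124}), "L2": frozenset({88, 88}),
          "L3": frozenset({200, 204})}

# One locus (L2) disagrees, plausibly allelic dropout, so with a
# tolerance of 1 the two scats are grouped as the same individual.
print(is_recapture(scat_a, scat_b))  # True
```

A stricter tolerance of zero would split these two samples into separate individuals, which is the kind of behavior that drives the differences in unique-individual counts between the compared methods.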

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 143
61 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region

Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho

Abstract:

The modeling of areas susceptible to soil loss by hydro-erosive processes is a simplified representation of reality intended to predict future behavior from the observation and interaction of a set of geoenvironmental factors. Maps of areas with potential for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The municipality of Colorado do Oeste, in the south of the western Amazon, was chosen because of soil degradation caused by anthropogenic activities such as agriculture, road construction, overgrazing, and deforestation, as well as its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. 100 sampling units with the presence of erosion will be selected based on assumptions indicated in the literature, and, to complete the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that exert, jointly, directly, or indirectly, some influence on the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, slope, aspect (slope orientation), slope curvature, compound topographic index, stream power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and land surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste, Rondônia, using the SPSS Statistics 26 software.
The model will be evaluated by determining the Cox & Snell and Nagelkerke R² values, the Hosmer-Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, the ROC curve, and cumulative gain according to the model specification. The synthesis map of potential soil erosion risk resulting from the models will be validated by means of Kappa indices, accuracy, and sensitivity, as well as field verification of the erosion susceptibility classes using drone photogrammetry. Thus, a map of the following erosion susceptibility classes is expected: very low, low, moderate, high, and very high. This may constitute a screening tool to identify areas where more detailed investigations need to be carried out, allowing social resources to be applied more efficiently.
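A minimal sketch of the binary logistic regression step, fitted from scratch on invented data with only two predictors (slope and a vegetation index) rather than the study's thirteen; the sampling values, learning settings, and classification query are illustrative assumptions, not results from the paper:

```python
# Toy binary logistic regression for erosion susceptibility, fitted by
# batch gradient descent; the study itself uses SPSS Statistics 26.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit w0 + w1*x1 + w2*x2 by batch gradient descent on the log-loss."""
    w = [0.0] * (len(xs[0]) + 1)
    n = len(xs)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for x, y in zip(xs, ys):
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
            err = p - y
            grad[0] += err
            for j, xj in enumerate(x):
                grad[j + 1] += err * xj
        w = [wi - lr * g / n for wi, g in zip(w, grad)]
    return w

# Invented sampling units: (slope in degrees / 10, NDVI); 1 = erosion present.
xs = [(0.5, 0.8), (0.7, 0.7), (2.5, 0.2), (3.0, 0.1), (1.0, 0.6), (2.8, 0.3)]
ys = [0, 0, 1, 1, 0, 1]
w = fit_logistic(xs, ys)

# A steep, sparsely vegetated cell is classified as susceptible.
prob = sigmoid(w[0] + w[1] * 2.9 + w[2] * 0.15)
print(prob > 0.5)
```

The fitted probability surface is what gets binned into the susceptibility classes (very low through very high) mentioned above.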

Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon

Procedia PDF Downloads 66
60 Characterizing Solid Glass in Bending, Torsion and Tension: High-Temperature Dynamic Mechanical Analysis up to 950 °C

Authors: Matthias Walluch, José Alberto Rodríguez, Christopher Giehl, Gunther Arnold, Daniela Ehgartner

Abstract:

Dynamic mechanical analysis (DMA) is a powerful method to characterize viscoelastic properties and phase transitions for a wide range of materials. It is often used to characterize polymers and their temperature-dependent behavior, including thermal transitions like the glass transition temperature Tg, via determination of storage and loss moduli in tension (Young’s modulus, E) and shear or torsion (shear modulus, G) or other testing modes. While production and application temperatures for polymers are often limited to several hundred degrees, material properties of glasses usually require characterization at temperatures exceeding 600 °C. This contribution highlights a high-temperature setup for rotational and oscillatory rheometry as well as for DMA in different modes. The implemented standard convection oven enables the characterization of glass in different loading modes at temperatures up to 950 °C. Three-point bending, tension, and torsional measurements on different glasses, with E and G moduli as a function of frequency and temperature, are presented. Additional tests include superimposing several frequencies in a single temperature sweep (“multiwave”). This type of test considerably reduces the experiment time and allows evaluating structural changes of the material and their frequency dependence. Furthermore, DMA in torsion and tension was performed to determine the complex Poisson’s ratio as a function of frequency and temperature within a single test definition. Tests were performed in a frequency range from 0.1 to 10 Hz and at temperatures up to the glass transition. While variations in frequency did not reveal significant changes in the complex Poisson’s ratio of the glass, a monotonic increase of this parameter was observed with increasing temperature. This contribution outlines the possibilities of DMA in bending, tension, and torsion for an extended temperature range.
It allows the precise mechanical characterization of material behavior from room temperature up to the glass transition and the softening temperature interval. Compared to other thermo-analytical methods, such as Differential Scanning Calorimetry (DSC), where mechanical stress is neglected, the frequency dependence links measurement results (e.g., relaxation times) to real applications.
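For an isotropic viscoelastic solid, the complex Poisson's ratio mentioned above follows from the complex tensile and shear moduli via nu* = E*/(2 G*) - 1. A small sketch with invented, glass-like moduli (not the paper's data):

```python
# Complex Poisson's ratio from complex moduli measured in tension (E*)
# and torsion (G*): nu* = E*/(2 G*) - 1. All numbers are illustrative
# placeholders for a stiff silicate glass well below Tg.

def complex_poisson(E_storage, E_loss, G_storage, G_loss):
    E = complex(E_storage, E_loss)   # E* = E' + iE''
    G = complex(G_storage, G_loss)   # G* = G' + iG''
    return E / (2 * G) - 1

# Assumed values: E' = 72 GPa, G' = 29 GPa, with small loss moduli.
nu = complex_poisson(72e9, 0.5e9, 29e9, 0.2e9)
print(round(nu.real, 3))  # ~0.241, a plausible value for silicate glass
```

Measuring E* and G* in a single test definition, as described above, is what makes this ratio accessible as a function of both frequency and temperature.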

Keywords: dynamic mechanical analysis, oscillatory rheometry, Poisson's ratio, solid glass, viscoelasticity

Procedia PDF Downloads 83
59 Parametric Study for Obtaining the Structural Response of Segmental Tunnels in Soft Soil by Using Non-Linear Numerical Models

Authors: Arturo Galván, Jatziri Y. Moreno-Martínez, Israel Enrique Herrera Díaz, José Ramón Gasca Tirado

Abstract:

In recent years, one of the methods most used for the construction of tunnels in soft soil is shield-driven tunneling. The advantage of this construction technique is that it allows excavating the tunnel while a primary lining, consisting of precast segments, is placed at the same time. There are joints between segments, also called longitudinal joints, and joints between rings (called circumferential joints). For this reason, this type of construction cannot be considered a continuous structure. These joints influence the rigidity of the segmental lining and therefore its structural response. A parametric study was performed to take into account the effect of different parameters on the structural response of typical segmental tunnels built in soft soil, using non-linear numerical models based on the Finite Element Method implemented in the software package ANSYS v. 11.0. In the first part of this study, two types of numerical models were developed. In the first one, the segments were modeled using beam elements based on Timoshenko beam theory, while the segment joints were modeled using inelastic rotational springs with the constitutive moment-rotation relation proposed by Gladwell; in this way, the mechanical behavior of the longitudinal joints was simulated. The mechanical behavior of the circumferential joints was simulated with elastic springs, and the support provided by the soil was modeled by means of linear-elastic springs. In the second type of model, the segments were modeled with three-dimensional solid elements and the joints with contact elements. In these models, the joint zones are modeled as discontinuities (increasing the computational effort), so a discrete model is obtained.
With these contact elements, the mechanical behavior of the joints is simulated on the basis that when the joint is closed, compressive and shear stresses are transmitted but tensile stresses are not, and when the joint is open, no stresses are transmitted. This type of model can detect changes in geometry due to the relative movement of the elements that form the joints. A comparison between the numerical results of the two types of models was carried out; in this way, the hypotheses adopted in the simplified models were validated. In addition, the numerical models were calibrated against laboratory-based experimental results from the literature for a typical tunnel built in Europe. In the second part of this work, a parametric study was performed using the simplified models, given their lower computational cost compared to the complex models. In the parametric study, the effects of material properties, tunnel geometry, the arrangement of the longitudinal joints, and the coupling of the rings were studied. Finally, it was concluded that the mechanical behavior of the segment and ring joints and the arrangement of the segment joints affect the global behavior of the lining, and that the coupling between rings modifies the structural capacity of the lining.

Keywords: numerical models, parametric study, segmental tunnels, structural response

Procedia PDF Downloads 229
58 Cr (VI) Adsorption on Ce0.25Zr0.75O2·nH2O: Kinetics and Thermodynamics

Authors: Carlos Alberto Rivera-corredor, Angie Dayana Vargas-Ceballos, Edison Gilpavas, Izabela Dobrosz-Gómez, Miguel Ángel Gómez-García

Abstract:

Hexavalent chromium, Cr (VI), is present in the effluents of different industries such as electroplating, mining, leather tanning, etc. This compound is of great academic and industrial concern because of its toxic and carcinogenic behavior. Its discharge into water sources causes serious environmental and public health problems for animals and humans. The amount of Cr (VI) in industrial wastewaters ranges from 0.5 to 270,000 mg L-1. According to the Colombian standard for water quality (NTC-813-2010), the maximum allowed concentration of Cr (VI) in drinking water is 0.05 mg L-1. To comply with this limit, it is essential that industries treat their effluents to reduce Cr (VI) to acceptable levels. Numerous methods have been reported for removing metal ions from aqueous solutions, such as reduction, ion exchange, electrodialysis, etc. Adsorption has become a promising method for the removal of metal ions from water, since it is an economical and efficient technology. Adsorbent selection and the kinetic and thermodynamic study of the adsorption conditions are key to the development of a suitable adsorption technology. Ce0.25Zr0.75O2·nH2O presents the highest adsorption capacity among a series of hydrated mixed oxides Ce1-xZrxO2 (x = 0, 0.25, 0.5, 0.75, 1). This work presents the kinetic and thermodynamic study of Cr (VI) adsorption on Ce0.25Zr0.75O2·nH2O. Experiments were performed under the following conditions: initial Cr (VI) concentration = 25, 50, and 100 mg L-1, pH = 2, adsorbent load = 4 g L-1, stirring time = 60 min, temperature = 20, 28, and 40 °C. The Cr (VI) concentration was estimated spectrophotometrically by the diphenylcarbazide method, monitoring the absorbance at 540 nm. The Cr (VI) adsorption on hydrated Ce0.25Zr0.75O2·nH2O was analyzed using pseudo-first- and pseudo-second-order kinetic models, and the Langmuir and Freundlich models were used to fit the equilibrium data.
The agreement between the experimental values and those predicted by each model, expressed as the linear regression correlation coefficient (R2), was employed as the model selection criterion. The adsorption process followed the pseudo-second-order kinetic model and obeyed the Langmuir isotherm model. The thermodynamic parameters were calculated as ΔH° = 9.04 kJ mol-1, ΔS° = 0.03 kJ mol-1 K-1, and ΔG° = -0.35 kJ mol-1, indicating the endothermic and spontaneous nature of the adsorption process, which is governed by physisorption interactions.
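The reported thermodynamic parameters can be checked for internal consistency with the standard relation ΔG° = ΔH° − TΔS°; the quick calculation below (temperatures taken from the experimental conditions) suggests the quoted ΔG° corresponds to the 40 °C run:

```python
# Consistency check of the reported thermodynamics: dG = dH - T*dS.
# With dH = 9.04 kJ/mol and dS = 0.03 kJ/(mol K), the reported
# dG = -0.35 kJ/mol matches T = 40 C (here rounded to 313 K).

def gibbs(dH_kJ, dS_kJ_per_K, T_K):
    return dH_kJ - T_K * dS_kJ_per_K

dG_40 = gibbs(9.04, 0.03, 313.0)
dG_20 = gibbs(9.04, 0.03, 293.0)
print(round(dG_40, 2))  # -0.35 kJ/mol: spontaneous at 40 C
print(round(dG_20, 2))  # +0.25 kJ/mol: not yet spontaneous at 20 C
```

The sign flip between 20 °C and 40 °C is what one expects for an endothermic, entropy-driven adsorption: spontaneity increases with temperature.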

Keywords: adsorption, hexavalent chromium, kinetics, thermodynamics

Procedia PDF Downloads 299
57 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes

Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi

Abstract:

Concave Surface Slider devices have been increasingly used in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out in order to investigate the lateral response of this typology of device, and a reasonably high level of knowledge has been reached. If radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses are to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software for the adjustment of natural accelerograms is nowadays available, leading to higher-quality spectrum-compatibility and a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case study structures have been analyzed. In a first stage, the capacity curve was computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements were adopted and different direction angles of the lateral forces were studied. Based on these results, a linear elastic Finite Element Model was defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses were performed on the base-isolated structures by applying seven bidirectional seismic events.
The spectrum-compatibility of bidirectional earthquakes has been studied, by considering different combinations of single components and adjusting single records: thanks to the proposed procedure, results have shown a small dispersion and a good agreement in comparison to the assumed design values.

Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation

Procedia PDF Downloads 292
56 Resolving a Piping Vibration Problem by Installing Viscous Damper Supports

Authors: Carlos Herrera Sierralta, Husain M. Muslim, Meshal T. Alsaiari, Daniel Fischer

Abstract:

Preventing fatigue from flow-induced vibration in piping in the Oil & Gas sector demands not only the constant development of engineering design methodologies based on available software packages, but also special piping support technologies for designing safe and reliable piping systems. The vast majority of piping vibration problems in the Oil & Gas industry are provoked by the process flow characteristics, which are intrinsically related to the fluid properties, the type of service, and its different operational scenarios. In general, the corrective actions recommended for flow-induced vibration in piping systems can be grouped into two major areas: those which address the excitation mechanisms, typically associated with process variables, and those which address the response of the pipework itself and of its associated steel support structure. Where possible, the first option is to solve the flow-induced problem from the excitation mechanism perspective. However, in producing facilities, changing process parameters may not always be convenient, as it could reduce production rates or require a system shutdown in order to perform the required piping modification. That impediment may lead to the second option, which is to modify the response of the piping system to the excitation generated by the process flow. In principle, shifting the natural frequency of the system well above the frequencies inherent to the process always eliminates, or considerably reduces, the level of vibration experienced by the piping system. Tightening up the clearances at the supports (ideally to zero gap) and adding new static supports are typical ways of increasing the natural frequency of a piping system.
However, stiffening the piping system alone may not be sufficient to resolve the vibration problem, and in some cases it might not be feasible at all, as the existing piping layout may limit the addition of supports due to thermal expansion/contraction requirements. In these cases, viscous damper supports can be recommended, as these devices allow relatively large quasi-static movement of the piping while providing sufficient capability to dissipate the vibration. Therefore, when correctly selected and installed, viscous damper supports can have a significant effect on the response of the piping system over a wide range of frequencies. Note that viscous dampers cannot be used to support sustained, static loads. This paper presents, through a real case example, a methodology for selecting viscous damper supports via a dynamic analysis model. By implementing this methodology, it was possible to resolve the piping vibration problem by adequately redesigning the existing static piping supports and adding new viscous damper supports. This was carried out on-stream on the crude oil pipeline in question, without the need to reduce the production of the plant. The methodology of this paper can therefore be applied to solve similar cases in a straightforward manner.
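As a back-of-the-envelope illustration of the frequency-shifting idea (all values invented, not from the case study), modelling a piping span as a single-degree-of-freedom oscillator shows how added support stiffness raises the natural frequency:

```python
# SDOF idealization of a piping span: f = (1/(2*pi)) * sqrt(k/m).
# Adding support stiffness in parallel raises the natural frequency,
# ideally moving it well above the excitation band of the process flow.
import math

def natural_frequency_hz(k_N_per_m, m_kg):
    return math.sqrt(k_N_per_m / m_kg) / (2 * math.pi)

m = 400.0          # effective mass of the span, kg (assumed)
k_pipe = 2.0e6     # stiffness of the bare span, N/m (assumed)
k_support = 6.0e6  # extra stiffness from a new static support, N/m (assumed)

f_before = natural_frequency_hz(k_pipe, m)
f_after = natural_frequency_hz(k_pipe + k_support, m)
# Quadrupling the stiffness doubles the frequency: sqrt(4) = 2.
print(round(f_before, 1), round(f_after, 1))
```

A viscous damper, by contrast, adds dissipation rather than stiffness, which is why it can tame a broad frequency band without obstructing thermal movement.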

Keywords: dynamic analysis, flow induced vibration, piping supports, turbulent flow, slug flow, viscous damper

Procedia PDF Downloads 143
55 Nanostructured Multi-Responsive Coatings for Tuning Surface Properties

Authors: Suzanne Giasson, Alberto Guerron

Abstract:

Stimuli-responsive polymer coatings can be used as functional elements in nanotechnologies, such as valves in microfluidic devices, membranes in biomedical engineering, or substrates for the culture of biological tissues, and in developing nanomaterials for targeted therapies for different diseases. However, such coatings usually suffer from major shortcomings, such as a lack of selectivity and poor environmental stability. This study presents multi-responsive hierarchical and hybrid polymer-based coatings aiming to overcome some of these limitations. Hierarchical polymer coatings, consisting of two-dimensional arrays of thermo-responsive cationic PNIPAM-based microgels surface-functionalized with non-responsive or pH-responsive polymers, were covalently grafted to substrates to tune the surface chemistry and the elasticity of the surface independently using different stimuli. The characteristic dimensions (i.e., layer thickness) and surface properties (i.e., adhesion, friction) of the microgel coatings were assessed using the Surface Forces Apparatus. The ability to independently control the swelling and surface properties using temperature and pH as triggers was investigated for microgels in aqueous suspension and microgels immobilized on substrates. Polymer chain grafting did not impede the ability of the cationic PNIPAM microgels to undergo a volume phase transition above the VPTT, either in suspension or immobilized on a substrate. Due to the presence of amino groups throughout the entirety of the microgel polymer network, the swelling behavior was also pH-dependent. However, the thermo-responsive swelling was more significant than the pH-triggered one. The microgels functionalized with PEG exhibited the most promising behavior. Indeed, the thermo-triggered swelling of microgel-co-PEG did not give rise to changes in the microgel surface properties (i.e., surface potential and adhesion) within a wide range of pH values.
It was possible for the immobilized microgel-co-PEG to undergo a volume transition (swelling/shrinking) with no change in adhesion, suggesting that the surface of the thermal-responsive microgels remains rather hydrophilic above the VPTT. This work confirms the possibility of tuning the swelling behavior of microgels without changing the adhesive properties. Responsive surfaces whose swelling properties can be reversibly and externally altered over space and time regardless of the surface chemistry are very innovative and will enable revolutionary advances in technologies, particularly in biomedical surface engineering and microfluidics, where advanced assembly of functional components is increasingly required.

Keywords: responsive materials, polymers, surfaces, cell culture

Procedia PDF Downloads 76
54 Localization of Radioactive Sources with a Mobile Radiation Detection System using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials (SNM) and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, such equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system, based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, boasts several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, the integration of the detection system into drones, particularly multirotors, and its affordability enable the automation of source search and a substantial reduction in survey time, particularly when deploying a fleet of drones.
While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
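A hypothetical one-dimensional sketch of the estimation step described above: counts recorded along a detector path, a grid-search maximum likelihood estimate of the source position, and a Fisher-information-based uncertainty. The inverse-square-plus-background count model, geometry, and rates are illustrative assumptions, not the authors' implementation:

```python
# 1-D toy of MLE source localization from count-rate data, with an
# approximate confidence width from the observed Fisher information.
import math

def expected_counts(x, s, strength=400.0, standoff=1.0, background=5.0):
    # Inverse-square attenuation with a fixed detector standoff, plus
    # a constant background (all parameters assumed for illustration).
    return strength / ((x - s) ** 2 + standoff ** 2) + background

def log_likelihood(s, xs, counts):
    ll = 0.0
    for x, n in zip(xs, counts):
        mu = expected_counts(x, s)
        ll += n * math.log(mu) - mu  # Poisson log-likelihood (up to const.)
    return ll

xs = [i * 0.25 for i in range(41)]   # detector path, 0..10 m
true_s = 6.0
# Noise-free demo counts (rounded expectations) keep the example simple.
counts = [round(expected_counts(x, true_s)) for x in xs]

# Grid-search MLE over candidate source positions.
grid = [i * 0.01 for i in range(1001)]
s_hat = max(grid, key=lambda s: log_likelihood(s, xs, counts))

# Numerical second derivative -> observed Fisher information -> 1-sigma.
eps = 1e-3
d2 = (log_likelihood(s_hat + eps, xs, counts)
      - 2 * log_likelihood(s_hat, xs, counts)
      + log_likelihood(s_hat - eps, xs, counts)) / eps ** 2
sigma = 1.0 / math.sqrt(-d2)
print(abs(s_hat - true_s) < 0.05)  # estimate lands on the source
```

With real Poisson noise the estimate scatters around the true position, and the Fisher-information width is the ingredient from which the confidence intervals described above are built.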

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 116
53 Manufacturing and Calibration of Material Standards for Optical Microscopy in Industrial Environments

Authors: Alberto Mínguez-Martínez, Jesús De Vicente Y Oliva

Abstract:

The trend in industrial environments is toward the miniaturization of systems and materials and the fabrication of parts at the micro- and nano-scale. The problem arises when manufacturers want to assess the quality of their production. This capability is becoming crucial due to the evolution of industry and the development of Industry 4.0. As Industry 4.0 is based on digital models of production and processes, having accurate measurements becomes essential. At this point, the field of metrology plays an important role, as it is a powerful tool for ensuring more stable production and reducing scrap and the cost of non-conformities. The most widespread measuring instruments that allow accurate measurements at these scales are optical microscopes, whether traditional, confocal, or focus-variation microscopes, profile projectors, or other similar measurement systems. However, the accuracy of measurements depends on their traceability to the SI unit of length (the meter). Providing adequate traceability for 2D and 3D dimensional measurements at the micro- and nano-scale in industrial environments is a problem that is still being studied and does not have a unique answer. In addition, if commercial material standards for the micro- and nano-scale are considered, two main problems arise. On the one hand, those material standards that could be considered complete and very interesting do not provide traceability for dimensional measurements; on the other hand, their calibration is very expensive. This situation implies that these kinds of standards will not succeed in industrial environments, and as a result, manufacturers will work in the absence of traceability.
To solve this problem in industrial environments, it becomes necessary to have material standards that are easy to use, agile, adaptable to different forms, cheap to manufacture and, of course, traceable to the definition of the meter through simple methods. By using these ‘customized standards’, it would be possible to adapt and design measuring procedures for each application, and manufacturers would work with some degree of traceability. It is important to note that, although this traceability is clearly incomplete, the situation is preferable to working in the absence of it. Recently, the versatility and utility of using laser technology and other additive manufacturing (AM) technologies to manufacture customized material standards has been demonstrated. In this paper, the authors propose a customized material standard manufactured using an ultraviolet laser system, together with a method to calibrate it. To conclude, the results of the calibration carried out in an accredited dimensional metrology laboratory are presented.

Keywords: industrial environment, material standards, optical measuring instrument, traceability

Procedia PDF Downloads 122
52 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Researchers face issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity, improving the monitoring of natural processes using naturally occurring radioactive tracers, and also serve the nuclear industry linked to the mining sector. In geological samples, locating and identifying the radioactive-bearing minerals at the thin-section scale remains a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is of interest for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method was developed using Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy through a selection based on the linear energy distribution. This spectroscopic autoradiography method successfully reproduced the alpha spectra of the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm.
Although the efficiency of energy spectrum reconstruction is low (4.4%) compared to that of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within them. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. The measurement will allow the study of the spatial distribution of uranium and its descendants in geomaterials by coupling it with scanning electron microscope characterizations. The direct application of this dual (energy-position) analysis modality will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
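As a quick check, the quoted relative resolution can be converted into an absolute peak width; the sketch below (plain Python, values taken from the abstract) assumes a Gaussian peak shape, which is a simplifying assumption rather than a statement about the detector's actual line shape:

```python
import math

def fwhm_abs(rel_fwhm, energy_kev):
    """Absolute FWHM (keV) from a relative resolution at a given energy."""
    return rel_fwhm * energy_kev

def gaussian_sigma(fwhm):
    """Standard deviation of a Gaussian peak with the given FWHM."""
    return fwhm / (2 * math.sqrt(2 * math.log(2)))  # FWHM = 2.3548 * sigma

fwhm = fwhm_abs(0.172, 4647)   # ~799 keV at the 4647 keV alpha line
sigma = gaussian_sigma(fwhm)   # ~339 keV equivalent Gaussian sigma
print(round(fwhm, 1), round(sigma, 1))
```

This is only unit bookkeeping, but it makes clear how broad an ~800 keV window is when separating adjacent alpha lines of the 238U chain.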

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 151
51 Analysis of a Faience Enema Found in the Assasif Tomb No. -28- of the Vizier Amenhotep Huy: Contributions to the Study of the Mummification Ritual Practiced in the Theban Necropolis

Authors: Alberto Abello Moreno-Cid

Abstract:

Mummification was the process through which immortality was granted to the deceased, so it was of extreme importance to the Egyptians. Embalming techniques evolved over the centuries, and specialists created increasingly sophisticated tools. However, due to its eminently religious nature, knowledge about everything related to this practice was jealously guarded, and the testimonies that have survived to our time are scarce. For this reason, embalming instruments found in archaeological excavations are uncommon. The tomb of the Vizier Amenhotep Huy (AT No. -28-), located in the el-Assasif necropolis and excavated since 2009 by the team of the Institute of Ancient Egyptian Studies, has been the scene of discoveries of this type that evidence the existence of mummification practices at this site after the New Kingdom. Clysters, or enemas, are the fundamental tools in the second type of mummification described by the historian Herodotus, used to introduce caustic solutions into the body of the deceased. Nevertheless, such objects have been found in only three locations: the tomb of Ankh-Hor in Luxor, where a copper enema that belonged to the prophet of Ammon Uah-ib-Ra came to light; the excavation of the tomb of Menekh-ib-Nekau in Abusir, where another made of copper was found; and the excavations in the Bucheum, where two more artifacts were discovered, also made of copper but of different shapes and sizes. These last two were used for the mummification of sacred animals, which is why they vary significantly. Therefore, the object found in tomb No. -28- is the first of these peculiar tools known to be made of faience and the oldest known until now, dated to the Third Intermediate Period (circa 1070-650 B.C.). This paper bases its investigation on the study of those parallels, the material, the current archaeological context, and the full analysis and reconstruction of the object in question.
The key point is the use of faience in the production of this item: making a device intended for constant use out of faience seems at first illogical compared to the other examples, made of copper. However, faience around the area of Deir el-Bahari had a strong religious component, associated with solar myths and principles of resurrection, connected to the Osirian character of the mummification procedure. The study makes it possible to refute some premises held as unalterable in Egyptology, verifying the use of this sort of piece, explaining how it was used, and showing that this type of mummification was also applied to the highest social stratum, in which case the tools were conceived with exceptional quality and religious symbolism.

Keywords: clyster, el-Assasif, embalming, faience enema, mummification, Theban necropolis

Procedia PDF Downloads 111
50 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding or k-mers have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods.
The performance of these models was assessed using multiple metrics, including the confusion matrix, the ROC curve, and the F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies vary between encoding methods by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
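A minimal sketch of three of the encodings mentioned above (one-hot, k-mer counts, and a Fourier magnitude spectrum) in plain Python. The toy sequence and the numeric base mapping are illustrative assumptions, not the normalization pipeline or 16S rRNA data used in the study:

```python
from collections import Counter
import cmath

SEQ = "ATGCGATACG"  # toy sequence; real inputs are normalized 16S rRNA genes

# One-hot: each base becomes a 4-dimensional indicator vector.
ONE_HOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0], "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
def one_hot(seq):
    return [ONE_HOT[b] for b in seq]

# k-mers: counts of overlapping substrings of length k.
def kmer_counts(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Fourier: map bases to numbers, take DFT magnitudes as features.
NUM = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}  # hypothetical mapping
def fourier_magnitudes(seq):
    x = [NUM[b] for b in seq]
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n))) for f in range(n)]

print(one_hot(SEQ)[0])               # [1, 0, 0, 0]
print(kmer_counts(SEQ).most_common(1))
print(len(fourier_magnitudes(SEQ)))  # 10 (one magnitude per frequency bin)
```

Each encoder turns the symbolic sequence into a fixed numeric representation that classifiers such as SVMs or random forests can consume; the choice of representation is exactly what the abstract's ~15% accuracy spread measures.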

Keywords: DNA encoding, machine learning, Fourier transform, wavelet transform

Procedia PDF Downloads 23
49 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine

Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez

Abstract:

An optimal wind turbine blade design must be capable of capturing as much energy as possible from the wind resource available at the area of interest. Often, an optimal design implies large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. In the construction and installation of a wind turbine, the blades may account for up to 20% of the overall price, and they are all the more important because they are part of the rotor system, which is in charge of transmitting the energy from the wind to the power train and where the static and dynamic design loads for the whole wind turbine are produced. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones during normal production conditions, so that better design and optimization decisions can be made. The simulation is a dynamic case, since a time-history wind velocity is used as the inlet condition instead of a constant wind velocity. The process begins with the free software NuMAD (NREL), used to model the blade and assign its material properties; the 3D model is then exported to the ANSYS Workbench platform where, before setting up the FSI system, a modal analysis is performed to identify natural frequencies and mode shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are then used as the boundary condition for the FEA simulation to obtain the deformation levels for the first time step. For the second time step, the CFD simulation is reconfigured automatically with the next inlet wind velocity and the deformation results from the previous time step. The analysis continues this iterative cycle, solving time step by time step until the entire load case is completed.
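The two-way exchange described above can be sketched as a loop that alternates a fluid solve and a structural solve each time step. The following toy Python illustration uses stand-in surrogate models (the quadratic pressure law, the 0.1 relaxation coefficient, and the stiffness value are illustrative assumptions, not the ANSYS CFD/FEA solvers used in the work):

```python
def cfd_pressure(wind_speed, deformation):
    """Toy CFD surrogate: dynamic-pressure-like load (Pa), relaxed by the
    deformation fed back from the previous structural solve."""
    rho_air = 1.225  # kg/m^3, sea-level air density
    return 0.5 * rho_air * wind_speed**2 * (1.0 - 0.1 * deformation)

def fea_deformation(pressure, stiffness=1.0e3):
    """Toy FEA surrogate: linear-elastic response to the fluid load."""
    return pressure / stiffness

def two_way_fsi(wind_history):
    """One CFD -> FEA exchange per time step; each CFD solve is
    reconfigured with the deformation from the previous step."""
    deformation = 0.0
    history = []
    for v in wind_history:          # time-history inlet wind velocity (m/s)
        p = cfd_pressure(v, deformation)
        deformation = fea_deformation(p)
        history.append(deformation)
    return history

hist = two_way_fsi([8.0, 10.0, 12.0, 9.0])
print(len(hist))  # 4, one deformation result per time step
```

The structure mirrors the abstract: pressure field out of the fluid solve, deformation out of the structural solve, and the deformation carried forward into the next step's fluid solve.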
This work is part of a set of projects managed by a national consortium called "CEMIE-Eólico" (Mexican Center in Wind Energy Research), created to strengthen technological and scientific capacities, promote the training of specialized human resources, and link academia with the private sector nationwide. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended to be installed at the Isthmus of Tehuantepec, Oaxaca, Mexico.

Keywords: blade, dynamic, FSI, wind turbine

Procedia PDF Downloads 482
48 The Relationship between 21st Century Digital Skills and the Intention to Start a Digital Entrepreneurship

Authors: Kathrin F. Schneider, Luis Xavier Unda Galarza

Abstract:

In our modern world, few areas are not permeated by digitalization: we use digital tools for work, study, entertainment, and daily life. Since technology changes rapidly, skills must adapt to the new reality, which gives a dynamic dimension to the set of skills necessary for people's academic, professional, and personal success. The concept of 21st-century digital skills, which includes collaboration, communication, digital literacy, citizenship, problem-solving, critical thinking, interpersonal skills, creativity, and productivity, has been widely discussed in the literature. Digital transformation has opened many economic opportunities for entrepreneurs in the development of their products, financing possibilities, and product distribution. One of the biggest advantages is the reduction in cost for the entrepreneur, which has opened doors not only for the entrepreneur or the entrepreneurial team but also for corporations through intrapreneurship. The development of students' general literacy and digital competencies is crucial for improving the effectiveness and efficiency of the learning process, as well as for students' adaptation to the constantly changing labor market. The digital economy allows a substantial increase in the supply of conventional and innovative products, mainly by reducing five kinds of costs: search, replication, transport, tracking, and verification. Digital entrepreneurship worldwide benefits from these achievements, and the use of digital technologies is expanding and democratizing entrepreneurship. The digital transformation of recent years is more challenging for developing countries, as they have fewer resources available to carry it out while offering all the necessary support in terms of cybersecurity and education.
The degree of digitization (use of digital technology) in a country and the levels of digital literacy of its people often depend on the economic level and situation of the country. Telefónica's Digital Life Index (TIDL) scores are strongly correlated with country wealth, reflecting the greater resources that richer countries can devote to promoting "Digital Life". According to the Digitization Index, Ecuador is in the group of "emerging countries", while Chile, Colombia, Brazil, Argentina, and Uruguay are in the group of "countries in transition". According to Herrera Espinoza et al. (2022), there are startups and digital ventures in Ecuador, especially in certain niches, but many of them do not survive beyond six months of creation because they arise out of necessity rather than opportunity. However, relevant research, especially empirical research, is lacking. Through a self-report questionnaire, the digital skills of students at a private Ecuadorian university will be measured against the six identified 21st-century skills. The results will be tested against the intention to start a digital venture, measured using the theory of planned behavior (TPB). The main hypothesis is that high digital competence is positively correlated with the intention to start a digital venture.

Keywords: new literacies, digital transformation, 21st century skills, theory of planned behavior, digital entrepreneurship

Procedia PDF Downloads 105
47 Bed Evolution under One-Episode Flushing in a Trunk Sewer in Paris, France

Authors: Gashin Shahsavari, Gilles Arnaud-Fassetta, Alberto Campisano, Roberto Bertilotti, Fabien Riou

Abstract:

Sewer deposits have been identified as a major cause of dysfunction in combined sewer systems, inducing negative consequences such as poor hydraulic conveyance, environmental damage, and risks to workers' health. To overcome the problem of sedimentation, flushing has been considered the most practical and cost-effective way to minimize the impact of sediments and prevent such challenges. Flushing, by producing turbulent wave effects, can modify the bed form depending on the hydraulic properties and geometrical characteristics of the conduit. So far, the dynamics of the bed load during high-flow events in combined sewer systems, a complex environment, are not well understood, mostly due to the lack of measuring devices capable of working correctly in the "hostile" conditions of combined sewer systems. In this regard, a one-episode flush issued from an opening gate valve with a weir function was carried out in a trunk sewer in Paris to understand its cleansing efficiency on the sediments (thickness: 0-30 cm). During more than 1 h of flushing, a maximum flow rate of 4.1 m3/s and a maximum water level of 2.1 m were recorded 5 m downstream of the gate. This paper aims to evaluate the efficiency of this type of gate over about 1.1 km (from 50 m upstream to 1050 m downstream of the gate) by (i) determining the bed grain-size distribution and the evolution of sediments along the sewer channel, as well as their organic matter content, and (ii) identifying the sections that exhibit more changes in their texture after the flush. For the first objective, two series of samples were taken along the sewer, one before the flush and one after, at the same points along the channel, and then analyzed in the laboratory. A non-intrusive sampling instrument was used to extract the sediments finer than fine gravel.
The comparison between the sediment texture after the flush operation and the initial state revealed the zones most modified by the flush, with respect to the sewer invert slope and hydraulic parameters, in the reach up to 400 m from the gate. Over this distance, the range of sediment grain sizes increased: D50 (median grain size) varies between 0.6 mm and 1.1 mm before the flush, against 0.8 mm and 10 mm after it. Overall, taking the channel invert slope into account, the results indicate that grains finer than sand (< 2 mm) are preferentially transported downstream along roughly the first 400 m from the gate: on average 69% before against 38% after the flush, with more dispersion of the grain-size distributions. Furthermore, a strong effect of channel bed irregularities on the evolution of the bed material was observed after the flush.
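For reference, D50 values such as those quoted above are read off the cumulative grain-size curve. A minimal sketch of that computation in plain Python, using log-linear interpolation and hypothetical sieve data rather than the Paris measurements:

```python
import math

def d50(sizes_mm, cum_pct):
    """Median grain size by log-linear interpolation of the cumulative
    passing curve (sizes ascending in mm, cumulative percent passing)."""
    for i in range(1, len(sizes_mm)):
        if cum_pct[i] >= 50.0:
            lo_s, hi_s = sizes_mm[i - 1], sizes_mm[i]
            lo_p, hi_p = cum_pct[i - 1], cum_pct[i]
            frac = (50.0 - lo_p) / (hi_p - lo_p)
            # interpolate in log(size), the usual convention for sieve curves
            return 10 ** (math.log10(lo_s)
                          + frac * (math.log10(hi_s) - math.log10(lo_s)))
    raise ValueError("curve never reaches 50% passing")

# hypothetical sieve data: (mm, % passing)
sizes = [0.063, 0.25, 0.5, 1.0, 2.0, 8.0]
passing = [5, 20, 40, 60, 80, 100]
print(round(d50(sizes, passing), 3))  # 0.707
```

The same curve also gives the "% finer than 2 mm" figures (here, the 80% passing at 2 mm) that the abstract uses to quantify downstream transport of the finer fractions.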

Keywords: bed-load evolution, combined sewer systems, flushing efficiency, sediments transport

Procedia PDF Downloads 403
46 Development and Structural Characterization of a Snack Food with Added Type 4 Extruded Resistant Starch

Authors: Alberto A. Escobar Puentes, G. Adriana García, Luis F. Cuevas G., Alejandro P. Zepeda, Fernando B. Martínez, Susana A. Rincón

Abstract:

Snack foods are usually classified as 'junk food' because they have little nutritional value. However, given the growing demand and market for third-generation (3G) snacks, which are low-priced and easy to prepare, they can be considered carriers of compounds with nutritional value. Resistant starch (RS) is classified as a prebiotic fiber; it helps to control metabolic problems and has anti-cancer properties in the colon. The active compound can be developed by chemical cross-linking of starch with phosphate salts to obtain a type 4 resistant starch (RS4). The chemical reaction can be achieved by extrusion, a process widely used to produce snack foods, since it is a versatile, low-cost procedure. Starch is the major ingredient in 3G snack manufacture, and the seeds of sorghum, the most drought-tolerant gluten-free cereal, contain high levels of starch (70%). For these reasons, the aim of this research was to develop a 3G snack with RS4, produced from sorghum starch under previously determined optimal extrusion conditions, and to carry out its sensory, chemical, and structural characterization. A sample (200 g) of sorghum starch was conditioned with 4% sodium trimetaphosphate/sodium tripolyphosphate (99:1) and set to 28.5% moisture content. The sample was then processed in a single-screw extruder equipped with a rectangular die. The inlet, transport, and output temperatures were 60°C, 134°C, and 70°C, respectively. The resulting pellets were expanded in a microwave oven. The expansion index (EI), penetration force (PF), and sensory attributes were evaluated in the expanded pellets. The pellets were milled to obtain flour, and RS content, degree of substitution (DS), and percentage of phosphorus (%P) were measured. Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction, differential scanning calorimetry (DSC), and scanning electron microscopy (SEM) analyses were performed in order to determine structural changes after the process.
The results in 3G were as follows: RS, 17.14 ± 0.29%; EI, 5.66 ± 0.35; and PF, 5.73 ± 0.15 N. Phosphate groups were identified in the starch molecule by FTIR; DS was 0.024 ± 0.003 and %P was 0.35 ± 0.15 [within the values permitted for food additives (< 4% P)]. An increase in gelatinization temperature after starch cross-linking was detected; the loss of granular structure and the vapor bubbles formed during expansion were observed by SEM; and a loss of crystallinity after the extrusion process was observed by X-ray diffraction. Finally, a 3G snack with RS4 was obtained by extrusion technology. Sorghum starch proved suitable for 3G snack production.

Keywords: extrusion, resistant starch, snack (3G), sorghum

Procedia PDF Downloads 309
45 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of production models and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. This makes it possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales lower than one millimeter. Providing 2D and 3D measurements at this scale with adequate traceability to the SI unit of length (the meter) is a problem that does not have a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if incomplete, will enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales lower than one millimeter because they are non-destructive.
In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out-of-focus reflected light so that, when the light reaches the detector, it is possible to take pictures of the part of the surface that is in focus. By taking pictures at different Z levels of focus, specialized software interpolates between the different planes and reconstructs the surface geometry into a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments with minor changes.
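In its simplest form, giving traceability to one axis reduces to fitting a scale factor between instrument readings and the certified values of a material standard. The sketch below (plain Python; the micrometer readings are hypothetical illustrations, not data from the paper, and the procedure ignores offset and uncertainty budgets that a full calibration would include) shows a least-squares estimate of that factor:

```python
def scale_correction(measured, certified):
    """Least-squares scale factor k minimizing sum((certified - k*measured)^2)
    for a through-the-origin fit."""
    num = sum(m * c for m, c in zip(measured, certified))
    den = sum(m * m for m in measured)
    return num / den

# hypothetical readings of a calibrated stage micrometer (micrometres)
measured  = [99.2, 198.7, 297.9, 397.5]
certified = [100.0, 200.0, 300.0, 400.0]

k = scale_correction(measured, certified)
corrected = [k * m for m in measured]  # axis readings after calibration
print(round(k, 5))
```

Repeating the fit for the X, Y, and Z axes against suitable standards is the conceptual core of the per-axis traceability the paper describes; a real procedure would also propagate the standard's calibration uncertainty.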

Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability

Procedia PDF Downloads 155
44 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance

Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' declared an alert this year due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming, so new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved the download of 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization, and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal ones for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction.
The ML models compared are logistic regression, decision trees, support vector machines, and neural networks, among others. The results show some differences in accuracy between the algorithms; these differences allow us to identify aspects that represent unique opportunities for a more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways toward more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
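Model comparisons of this kind are usually scored with metrics such as the F1 score derived from each model's confusion matrix. The snippet below shows that ranking step in plain Python; the confusion-matrix counts and model names are hypothetical placeholders, not results from this study:

```python
def f1_from_confusion(tp, fp, fn):
    """F1 score from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical per-model counts on a held-out set: (tp, fp, fn)
models = {
    "logistic_regression": (80, 10, 15),
    "decision_tree":       (75, 20, 20),
    "svm":                 (85, 8, 10),
}

# rank models from best to worst F1
for name, counts in sorted(models.items(),
                           key=lambda kv: -f1_from_confusion(*kv[1])):
    print(f"{name}: F1 = {f1_from_confusion(*counts):.3f}")
```

F1 balances precision and recall, which matters here because VF/AMR gene labels are typically imbalanced, so raw accuracy alone can overstate a model's usefulness.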

Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning

Procedia PDF Downloads 30