Search results for: Carlos Gomez Cubero
80 Dogs Chest Homogeneous Phantom for Image Optimization
Authors: Maris Eugênia Dela Rosa, Ana Luiza Menegatti Pavan, Marcela De Oliveira, Diana Rodrigues De Pina, Luis Carlos Vulcano
Abstract:
In veterinary as well as in human medicine, radiological studies are essential for a safe diagnosis in clinical practice. Thus, the quality of the radiographic image is crucial. In recent years, screen-film image acquisition systems have increasingly been replaced by computed radiography (CR) equipment without the corresponding adjustment of technical charts. Furthermore, radiographic examinations of veterinary patients require human assistance for restraint, which can compromise image quality and increases the dose to the animal and to occupationally exposed staff, as well as the cost to the institution. Image optimization procedures and the construction of radiographic technique charts are performed with the use of homogeneous phantoms. In this study, we sought to develop a homogeneous phantom of the canine chest to be applied to the optimization of these images for the CR system. To build the simulator, a database was created with retrospective chest computed tomography (CT) images from the Veterinary Hospital of the Faculty of Veterinary Medicine and Animal Science - UNESP (FMVZ / Botucatu). Images were divided into four groups according to animal weight, following the size classification proposed by Hoskins & Goldston. The thicknesses of biological tissues were quantified in 80 animals, separated into groups of 20 animals according to their weights: (S) Small - equal to or less than 9.0 kg, (M) Medium - between 9.0 and 23.0 kg, (L) Large - between 23.1 and 40.0 kg, and (G) Giant - over 40.1 kg. The mean weight for group (S) was 6.5±2.0 kg, (M) 15.0±5.0 kg, (L) 32.0±5.5 kg and (G) 50.0±12.0 kg. An algorithm was developed in Matlab to classify and quantify the biological tissues present in the CT images and convert them into simulator materials. To classify the tissues present, membership functions were created from the retrospective CT scans according to the type of tissue (adipose, muscle, trabecular or cortical bone, and lung tissue). After conversion of the biological tissue thicknesses into equivalent material thicknesses (acrylic simulating soft tissues, aluminum simulating bone tissues, and air for the lung), four different homogeneous phantoms were obtained: (S) 5 cm of acrylic, 0.14 cm of aluminum and 1.8 cm of air; (M) 8.7 cm of acrylic, 0.2 cm of aluminum and 2.4 cm of air; (L) 10.6 cm of acrylic, 0.27 cm of aluminum and 3.1 cm of air; and (G) 14.8 cm of acrylic, 0.33 cm of aluminum and 3.8 cm of air. The developed canine homogeneous phantom is a practical tool that will be employed in future works to optimize veterinary X-ray procedures.
Keywords: radiation protection, phantom, veterinary radiology, computed radiography
Procedia PDF Downloads 417
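The tissue-classification step described in this abstract lends itself to a short illustration. The sketch below, in Python rather than the authors' Matlab, classifies CT voxels into tissue classes and converts the resulting thicknesses into equivalent phantom materials; the crisp Hounsfield-unit ranges and the thickness-equivalence factors are placeholder assumptions (the original work derives fuzzy membership functions and attenuation-based equivalences from the retrospective scans).

```python
import numpy as np

# Illustrative HU ranges (crisp thresholds; the original work uses fuzzy
# membership functions fitted to the retrospective CT database).
HU_RANGES = {
    "lung":    (-1000, -500),
    "adipose": (-150,  -30),
    "muscle":  (10,    80),
    "bone":    (200,   3000),
}

# Placeholder thickness-equivalence factors (cm of simulator material per cm
# of tissue); the real factors come from matching attenuation coefficients.
EQUIVALENCE = {"lung": ("air", 1.0), "adipose": ("acrylic", 0.9),
               "muscle": ("acrylic", 1.0), "bone": ("aluminum", 0.5)}

def phantom_thicknesses(ct_slice: np.ndarray, voxel_cm: float) -> dict:
    """Convert one CT slice (HU values) into equivalent material thicknesses."""
    result = {"acrylic": 0.0, "aluminum": 0.0, "air": 0.0}
    n_columns = ct_slice.shape[1]
    for tissue, (lo, hi) in HU_RANGES.items():
        voxels = np.logical_and(ct_slice >= lo, ct_slice <= hi).sum()
        mean_thickness_cm = voxels * voxel_cm / n_columns   # mean path length per column
        material, factor = EQUIVALENCE[tissue]
        result[material] += mean_thickness_cm * factor
    return result
```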
79 Effect of Several Soil Amendments on Water Quality in Mine Soils: Leaching Columns
Authors: Carmela Monterroso, Marc Romero-Estonllo, Carlos Pascual, Beatriz Rodríguez-Garrido
Abstract:
The mobilization of heavy metals from polluted soils causes their transfer to natural waters, with consequences for ecosystems and human health. Phytostabilization techniques are applied to reduce this mobility, through the establishment of a vegetal cover and the application of soil amendments. In this work, the capacity of different organic amendments to improve water quality and reduce the mobility of metals in mine-tailings was evaluated. A field pilot test was carried out with leaching columns installed on an old Cu mine ore (NW of Spain) which forms part of the PhytoSUDOE network of phytomanaged contaminated field sites (PhytoSUDOE/ Phy2SUDOE Projects (SOE1/P5/E0189 and SOE4/P5/E1021)). Ten columns (1 meter high by 25 cm in diameter) were packed with untreated mine tailings (control) or those treated with organic amendments. Applied amendments were based on different combinations of municipal wastes, bark chippings, biomass fly ash, and nanoparticles like aluminum oxides or ferrihydrite-type iron oxides. During the packing of the columns, rhizon-samplers were installed at different heights (10, 20, and 50 cm) from the top, and pore water samples were obtained by suction. Additionally, in each column, a bottom leachate sample was collected through a valve installed at the bottom of the column. After packing, the columns were sown with grasses. Water samples were analyzed for: pH and redox potential, using combined electrodes; salinity by conductivity meter: bicarbonate by titration, sulfate, nitrate, and chloride, by ion chromatography (Dionex 2000); phosphate by colorimetry with ammonium molybdate/ascorbic acid; Ca, Mg, Fe, Al, Mn, Zn, Cu, Cd, and Pb by flame atomic absorption/emission spectrometry (Perkin Elmer). Porewater and leachate from the control columns (packed with unamended mine tailings) were extremely acidic and had a high concentration of Al, Fe, and Cu. In these columns, no plant development was observed. The application of organic amendments improved soil conditions, which allowed the establishment of a dense cover of grasses in the rest of the columns. The combined effect of soil amendment and plant growth had a positive impact on water quality and reduced mobility of aluminum and heavy metals.Keywords: leaching, organic amendments, phytostabilization, polluted soils
Procedia PDF Downloads 110
78 Virtual Reality and Other Real-Time Visualization Technologies for Architecture Energy Certifications
Authors: Román Rodríguez Echegoyen, Fernando Carlos López Hernández, José Manuel López Ujaque
Abstract:
Interactive management of energy certification ratings has remained on the sidelines of the evolution of virtual reality (VR) despite related advances in architecture in other areas such as BIM and real-time working programs. This research studies to what extent VR software can help the stakeholders to better understand energy efficiency parameters in order to obtain reliable ratings assigned to the parts of the building. To evaluate this hypothesis, the methodology has included the construction of a software prototype. Current energy certification systems do not follow an intuitive data entry system; neither do they provide a simple or visual verification of the technical values included in the certification by manufacturers or other users. This software, by means of real-time visualization and a graphical user interface, proposes different improvements to the current energy certification systems that ease the understanding of how the certification parameters work in a building. Furthermore, the difficulty of using current interfaces, which are not friendly or intuitive for the user, means that untrained users usually get a poor idea of the grounds for certification and how the program works. In addition, the proposed software allows users to add further information, such as financial and CO₂ savings, energy efficiency, and an explanatory analysis of results for the least efficient areas of the building through a new visual mode. The software also helps the user to evaluate whether or not an investment to improve the materials of an installation is worth the cost of the different energy certification parameters. The evaluated prototype (named VEE-IS) shows promising results when it comes to representing in a more intuitive and simple manner the energy rating of the different elements of the building. Users can also personalize all the inputs necessary to create a correct certification, such as floor materials, walls, installations, or other important parameters. Working in real-time through VR allows for efficiently comparing, analyzing, and improving the rated elements, as well as the parameters that we must enter to calculate the final certification. The prototype also allows for visualizing the building in efficiency mode, which lets us move over the building to analyze thermal bridges or other energy efficiency data. This research also finds that the visual representation of energy efficiency certifications makes it easy for the stakeholders to examine improvements progressively, which adds value to the different phases of design and sale.Keywords: energetic certification, virtual reality, augmented reality, sustainability
Procedia PDF Downloads 186
77 Apatite Flotation Using Fruits' Oil as Collector and Sorghum as Depressant
Authors: Elenice Maria Schons Silva, Andre Carlos Silva
Abstract:
The growing demand for raw materials has increased mining activity. The mineral industry faces the challenge of processing more complex ores, with very small particles and low grades, together with constant pressure to reduce production costs and environmental impacts. Froth flotation deserves special attention among the concentration methods for mineral processing. Besides its great selectivity for different minerals, flotation is a highly efficient method for processing fine particles. The process is based on the surface physicochemical properties of the minerals, and separation is only possible with the aid of chemicals such as collectors, frothers, modifiers, and depressants. In order to use sustainable and eco-friendly reagents, oils extracted from three different vegetable species (pequi pulp, macauba nut and pulp, and Jatropha curcas) were studied and tested as apatite collectors. Since the oils are not soluble in water, alkaline hydrolysis (saponification) was necessary before their contact with the minerals. The saponification was performed at room temperature. The tests with the new collectors were carried out at pH 9, and Flotigam 5806, a synthetic mix of fatty acids manufactured by Clariant and industrially adopted as an apatite collector, was used as the benchmark. In order to find a feasible replacement for cornstarch, the flour and starch of a graniferous variety of sorghum were tested as depressants. Apatite samples were used in the flotation tests. XRF (X-ray fluorescence), XRD (X-ray diffraction), and SEM/EDS (Scanning Electron Microscopy with Energy Dispersive Spectroscopy) were used to characterize the apatite samples. Zeta potential measurements were performed in the pH range from 3.5 to 12.5. A commercial cornstarch was used as the depressant benchmark. Four depressant dosages and pH values were tested. A statistical test was used to verify the influence of pH, dosage, and starch type on the mineral recoveries. For dosages equal to or higher than 7.5 mg/L, pequi oil recovered almost all apatite particles. On one hand, macauba pulp oil showed excellent results for all dosages, with more than 90% apatite recovery; on the other hand, with the nut oil, the highest recovery found was around 84%. Jatropha curcas oil was the second best oil tested, and more than 90% of the apatite particles were recovered at a dosage of 7.5 mg/L. Regarding the depressant, the lowest apatite recovery with sorghum starch was found for a dosage of 1,200 g/t and pH 11, resulting in a recovery of 1.99%. The apatite recovery under the same conditions was 1.40% for sorghum flour (approximately 30% lower). When compared with cornstarch under the same conditions, sorghum flour produced an apatite recovery 91% lower.
Keywords: collectors, depressants, flotation, mineral processing
Procedia PDF Downloads 152
76 Enhancement of Mass Transport and Separations of Species in a Electroosmotic Flow by Distinct Oscillatory Signals
Authors: Carlos Teodoro, Oscar Bautista
Abstract:
In this work, we analyze theoretically the mass transport in a time-periodic electroosmotic flow through a parallel flat plate microchannel under different periodic functions of the applied external electric field. The microchannel connects two reservoirs having different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations that allow determining the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations, where the Debye-Hückel approximation is considered (the zeta potential is less than 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters appear which control the mass transport phenomenon. These parameters are an angular Reynolds number, the Schmidt and Péclet numbers, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, the electric potential is first determined from the Poisson-Boltzmann equation, which allows determining the electric force for various periodic functions of the external electric field expressed as Fourier series. In particular, three different excitation waveforms of the external electric field are assumed: a) sawtooth, b) step, and c) an irregular periodic function. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the obtained velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were done considering several binary systems where two dilute species are transported in the presence of a carrier. It is observed that there are different angular frequencies of the imposed external electric signal at which the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies are called crossover frequencies and are obtained graphically at the intersection when the total mass transport is plotted against the imposed frequency. The crossover frequencies differ depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel is strongly dependent on the modulation frequency of the applied alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation
Procedia PDF Downloads 216
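For reference, the linearized (Debye-Hückel) Poisson-Boltzmann problem and one common choice of the dimensionless groups named in this abstract are sketched below; the exact nondimensionalization and characteristic scales used by the authors may differ.

```latex
% Debye-Huckel (linearized) Poisson-Boltzmann equation between parallel plates,
% with y measured from the channel centreline and H the half-height:
\frac{d^{2}\psi}{dy^{2}} = \kappa^{2}\psi ,
\qquad
\psi(y) = \zeta \, \frac{\cosh(\kappa y)}{\cosh(\kappa H)} ,
\qquad
\bar{\kappa} = \kappa H \quad \text{(electrokinetic parameter)} .

% One common set of dimensionless groups for time-periodic electroosmotic flow,
% with the Helmholtz-Smoluchowski velocity as the characteristic scale:
R_{\omega} = \frac{\omega H^{2}}{\nu}, \qquad
Sc = \frac{\nu}{D}, \qquad
Pe = R_{\omega}\, Sc = \frac{\omega H^{2}}{D}, \qquad
u_{HS} = -\,\frac{\varepsilon \zeta E_{0}}{\mu}.
```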
75 The Desirable Construction of Urbanity in Spaces for Public Use
Authors: Giselly Barros Rodrigues, Carlos Leite de Souza
Abstract:
In recent years, there has been a great discussion about urbanism, the right to the city, the search for the public space and the occupation and appropriation of people in the spaces of the city. This movement happens all over the world and also in the great Brazilian metropolises. The more human-friendly city - the desirable construction of urbanity - as well as the encouragement of walking or bicycling to the detriment of cars is one of the major issues addressed by urban planners and challenges in the process of reviewing regulatory frameworks. The fact is that even if there are public spaces or space for public use in private areas - it is essential that there be, besides a project focused on the people and the use of space, a good management not to generate excess of control and consequently the segregation between different ethnicities, classes or creed. With the insertion of the Strategic Master Plan of Sao Paulo (2014), there is great incentive for them to implement - in the private spaces - of mixed uses and active facades (Services and commerce in the basement of buildings), these incentives will generate a city for people in the medium and long term. This research seeks to discuss the extent to which these spaces are democratic, what their perceptions are in relation to the space of public use in private areas and why this perception may be the one that was originally idealized. For this study, we carried out bibliographic reviews where applied research were carried out in three case studies listed in Sao Paulo. Questionnaires were also applied to the actors who gave answers regarding their perceptions and how they were approached in the places analyzed. After analyzing the material, it was verified that in the three case studies analyzed, sitting on the floor is prohibited. In the two places in Paulista Avenue (Cetenco Plaza and Square of Mall Cidade Sao Paulo) there was no problem whatsoever in relation to the clothes or attitudes of the actors in the streets of Paulista Avenue in Sao Paulo city. Different from what happened in the Itaim neighborhood (Brascan Century Plaza), with more conservative characteristics, where the actors were heavily watched by security and observed by others due to their clothes and attitudes in that area. The city of Sao Paulo is slowly changing, people are increasingly looking for places of quality in public use in their daily lives. The Strategic Master Plan of Sao Paulo (2014) and the Legislation approved in 2016 envision a city more humane and people-oriented in the future. It is up to the private sector, the public, and society to work together so that this glimpse becomes an abundant reality in every city, generating quality of life and urbanity for all.Keywords: urbanity, space for public use, appropriation of space, segregation
Procedia PDF Downloads 237
74 Informational Habits and Ideology as Predictors for Political Efficacy: A Survey Study of the Brazilian Political Context
Authors: Pedro Cardoso Alves, Ana Lucia Galinkin, José Carlos Ribeiro
Abstract:
Political participation can be a somewhat tricky subject to define, in no small part due to the constant changes in the concept, the fruit of efforts to include new forms of participatory behavior that go beyond traditional institutional channels. With the advent of the internet and mobile technologies, defining political participation has become an even more complicated endeavor, given the breadth of politicized behaviors that are expressed through these mediums, be it in the very organization of social movements, in the propagation of politicized texts, videos and images, or in the micropolitical behaviors that are expressed in daily interaction. In fact, the very frontiers that delimit physical and digital spaces have become ever more diluted due to technological advancements, leading to a hybrid existence that is simultaneously physical and digital, no longer limited, as it once was, by the temporal constraints of classic communications. Moving away from those institutionalized actions of traditional political behavior, an idea of constant and fluid participation, which occurs in our daily lives through conversations, posts, tweets and other digital forms of expression, is discussed. This discussion focuses on the factors that precede more direct forms of political participation, interpreting the relation between informational habits, ideology, and political efficacy. Though some of these informational habits can be considered political participation by some authors, a distinction is made to establish a logical flow of behaviors leading to participation; that is, one must gather and process information before acting on it. To reach this objective, a quantitative survey is currently being applied in Brazilian social media, evaluating feelings of political efficacy, social and economic issue-based ideological stances, and informational habits pertaining to collection, fact-checking, and the diversity of sources and ideological positions present in the participant's political information network. The measure being used for informational habits relies strongly on a mix of information literacy and political sophistication concepts, bringing a more up-to-date understanding of information and knowledge production and processing in contemporary hybrid (physical-digital) environments. Though data is still being collected, preliminary analysis points towards a strong correlation between informational habits and political efficacy, while ideology shows a weaker influence over efficacy. Moreover, social ideology and economic ideology seem to be strongly correlated in the sample; such intermingling between social and economic ideals is generally considered a red flag for political polarization.
Keywords: political efficacy, ideology, information literacy, cyberpolitics
Procedia PDF Downloads 234
73 Rapid Atmospheric Pressure Photoionization-Mass Spectrometry (APPI-MS) Method for the Detection of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans in Real Environmental Samples Collected within the Vicinity of Industrial Incinerators
Authors: M. Amo, A. Alvaro, A. Astudillo, R. Mc Culloch, J. C. del Castillo, M. Gómez, J. M. Martín
Abstract:
Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) comprise a range of highly toxic compounds that may exist as particulates within the air or accumulate within water supplies, soil, or vegetation. They may be created naturally within the environment as a product of forest fires or volcanic eruptions. It is only since the industrial revolution, however, that it has become necessary to closely monitor their generation as a byproduct of manufacturing/combustion processes, in an effort to mitigate widespread contamination events. The environmental concentrations of these toxins are expected to be extremely low; therefore, highly sensitive and accurate methods are required for their determination. Since ionization of non-polar compounds through electrospray and APCI is difficult and inefficient, we evaluate the performance of a novel low-flow Atmospheric Pressure Photoionization (APPI) source for the trace detection of various dioxins and furans using rapid Mass Spectrometry workflows. Air, soil and biota (vegetable matter) samples were collected monthly during one year from various locations within the vicinity of an industrial incinerator in Spain. Analytes were extracted using Soxhlet extraction in toluene and concentrated by rotary evaporation and nitrogen flow. Various ionization methods, such as electrospray (ES) and atmospheric pressure chemical ionization (APCI), were evaluated; however, only the low-flow APPI source was capable of providing the performance, in terms of sensitivity, required for detecting all targeted analytes. In total, 10 analytes including 2,3,7,8-tetrachlorodibenzodioxin (TCDD) were detected and characterized using the APPI-MS method. Both PCDDs and PCDFs were detected most efficiently in negative ionization mode. The most abundant ion always corresponded to the loss of a chlorine and the addition of an oxygen, yielding [M-Cl+O]- ions. MRM methods were created in order to provide selectivity for each analyte. No chromatographic separation was employed; however, matrix effects were determined to have a negligible impact on analyte signals. Triple quadrupole mass spectrometry was chosen because of its unique potential for high sensitivity and selectivity. The mass spectrometer used was a Sciex QTRAP 3200 working in negative Multiple Reaction Monitoring (MRM) mode. Typical mass detection limits were determined to be near the 1-pg level. The APPI-MS2 technology applied to the detection of PCDD/Fs allows fast and reliable atmospheric analysis, considerably reducing operational times and costs with respect to other available technologies. In addition, the limit of detection can be easily improved using a more sensitive mass spectrometer, since the background in the analysis channel is very low. The APPI developed by SEADM allows ionization of polar and non-polar compounds with high efficiency and repeatability.
Keywords: atmospheric pressure photoionization-mass spectrometry (APPI-MS), dioxin, furan, incinerator
Procedia PDF Downloads 208
72 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance
Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' declared an alert this year due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in Machine Learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved downloading 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal ones for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction. The ML models compared are logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify different aspects that represent unique opportunities for a more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning
Procedia PDF Downloads 30
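A minimal sketch of the kind of comparison described in this abstract is given below using scikit-learn: sequences are converted into k-mer count features and several classifiers are scored by cross-validation. The toy sequences, labels and k value are illustrative assumptions, not the study's actual data or pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Toy labelled gene sequences (1 = virulence/AMR-associated, 0 = not); placeholders only.
sequences = ["ATGGCTAAAGTTGGCACTGA", "ATGCCGTTTAAGGCATCCGA", "ATGGCTAAAGTCGGCACCGA",
             "ATGTTTGGCCCATACGATGA", "ATGTTCGGCCCTTACGACGA", "ATGCCGTTCAAAGCATCAGA"]
labels = [1, 0, 1, 0, 0, 1]

# k-mer encoding (k = 4 assumed): each sequence becomes a bag of overlapping 4-mers.
def kmers(seq, k=4):
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

X = CountVectorizer().fit_transform([kmers(s) for s in sequences])

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "support vector machine": SVC(),
    "neural network": MLPClassifier(max_iter=2000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, labels, cv=3)  # 3-fold CV on the toy data
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```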
71 Rheological and Sensory Attributes of Dough and Crackers Including Amaranth Flour (Amaranthus spp.)
Authors: Claudia Cabezas-Zabala, Jairo Lindarte-Artunduaga, Carlos Mario Zuluaga-Dominguez
Abstract:
Amaranth is an emerging pseudocereal rich in such essential nutrients as protein and dietary fiber, which was employed as an ingredient in the formulation of crackers to evaluate the rheological performance and sensory acceptability of the obtained food. A completely randomized factorial design was used with two factors: (A) ratio of wheat and amaranth flour used in the preparation of the dough, in proportion 90:10 and 80:20 (% w/w) and (B) two levels of inulin addition of 8.4% and 16.7 %, having two control doughs made from amaranth and wheat flour, respectively. Initially, the functional properties of the formulations mentioned were measured, showing no significant differences in the water absorption capacity (WAC) and swelling power (SP), having mean values between 1.66 and 1.81 g/g for WAC and between 1.75 and 1.86 g/g for SP, respectively. The amaranth flour had the highest water holding capacity (WHR) of 8.41 ± 0.15 g/g and emulsifying activity (EA) of 74.63 ± 1.89 g/g. Moreover, the rheological behavior, measured through the use of farinograph, extensograph, Mixolab, and falling index, showed that the formulation containing 20% of amaranth flour and 7.16% of inulin had a rheological behavior similar to the control produced exclusively with wheat flour, being the former, the one selected for the preparation of crackers. For this formulation, the farinograph showed a mixing tolerance index of 11 UB, indicating a strong and cohesive dough; likewise, the Mixolab showed dough reaches stability at 6.47 min, indicating a good resistance to mixing. On the other hand, the extensograph exhibited a dough resistance of 637 UB, as well as extensibility of 13.4 mm, which corresponds to a strong dough capable of resisting the laminate. Finally, the falling index was 318 s, which indicates the crumb will retain enough air to enhance the crispness of a characteristic cracker. Finally, a sensory consumer test did not show significant differences in the evaluation of aroma between the control and the selected formulation, while this latter had a significantly lower rating in flavor. However, a purchase intention of 70 % was observed among the population surveyed. The results obtained in this work give perspectives for the industrial use of amaranth in baked goods. Additionally, amaranth has been a product typically linked to indigenous populations in the Andean South American countries; therefore, the search for diversification and alternatives of use for this pseudocereal has an impact on the social and economic conditions of such communities. The technological versatility and nutritional quality of amaranth is an advantage for consumers, favoring the consumption of healthy products with important contributions of dietary fiber and protein.Keywords: amaranth, crackers, rheology, pseudocereals, kneaded products
Procedia PDF Downloads 118
70 Experimental Investigation of Hydrogen Addition in the Intake Air of Compressed Engines Running on Biodiesel Blend
Authors: Hendrick Maxil Zárate Rocha, Ricardo da Silva Pereira, Manoel Fernandes Martins Nogueira, Carlos R. Pereira Belchior, Maria Emilia de Lima Tostes
Abstract:
This study investigates experimentally the effects of hydrogen addition in the intake manifold of a diesel generator operating with a 7% biodiesel-diesel oil blend (B7). An experimental apparatus was set up to conduct performance and emissions tests in a single-cylinder, air-cooled diesel engine. This setup consisted of a generator set connected to a wire-wound resistor load bank that was used to vary engine load. In addition, a flowmeter was used to determine the hydrogen volumetric flow rate, and a digital anemometer coupled with an air box was used to measure the air flow rate. Furthermore, a digital precision electronic scale was used to measure engine fuel consumption, and a gas analyzer was used to determine exhaust gas composition and exhaust gas temperature. A thermocouple was installed near the exhaust collector to measure cylinder temperature. In-cylinder pressure was measured using an AVL Indumicro data acquisition system with a piezoelectric pressure sensor. An AVL optical encoder was installed on the crankshaft and synchronized with the in-cylinder pressure in real time. The experimental procedure consisted of injecting hydrogen into the engine intake manifold at different mass concentrations of 2, 6, 8 and 10% of total fuel mass (B7 + hydrogen), which represented energy fractions of 5, 15, 20 and 24% of total fuel energy, respectively. Due to the hydrogen addition, the total amount of fuel energy introduced increased, and the generator's fuel injection governor prevented any increase in engine speed. Several conclusions can be stated from the test results. A reduction in specific fuel consumption was noted as the hydrogen concentration increased. Likewise, carbon dioxide (CO2), carbon monoxide (CO) and unburned hydrocarbon (HC) emissions decreased as the hydrogen concentration increased. On the other hand, nitrogen oxide (NOx) emissions increased because average temperatures inside the cylinder were higher. There was also an increase in peak cylinder pressure and in the heat release rate inside the cylinder, since the fuel ignition delay was smaller due to the increased hydrogen content. All this indicates that hydrogen promotes faster combustion and higher heat release rates and can be an important additive to all kinds of fuels used in diesel generators.
Keywords: diesel engine, hydrogen, dual fuel, combustion analysis, performance, emissions
Procedia PDF Downloads 350
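The correspondence between the hydrogen mass fractions and the energy fractions quoted in this abstract follows from the lower heating values of the two fuels. The short sketch below reproduces that conversion using typical LHVs (about 120 MJ/kg for hydrogen and 42.8 MJ/kg for the B7 blend), which are assumed here rather than taken from the paper.

```python
LHV_H2 = 120.0   # MJ/kg, typical lower heating value of hydrogen (assumed)
LHV_B7 = 42.8    # MJ/kg, typical LHV of the diesel/biodiesel blend (assumed)

def energy_fraction(h2_mass_fraction: float) -> float:
    """Share of the total fuel energy supplied by hydrogen."""
    e_h2 = h2_mass_fraction * LHV_H2
    e_b7 = (1.0 - h2_mass_fraction) * LHV_B7
    return e_h2 / (e_h2 + e_b7)

for w in (0.02, 0.06, 0.08, 0.10):
    print(f"{w:.0%} H2 by mass -> {energy_fraction(w):.0%} of total fuel energy")
# Prints roughly 5%, 15%, 20% and 24%, matching the fractions quoted in the abstract.
```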
69 Understanding the Underutilization of Electroconvulsive Therapy in Children and Adolescents
Authors: Carlos M. Goncalves, Luisa Duarte, Teresa Cartaxo
Abstract:
The aim of this work was to understand the reasons behind the underutilization of electroconvulsive therapy (ECT) in the younger population and raise possible solutions. We conducted a non-systematic review of literature throughout a search on PubMed, using the terms ‘children’, ‘adolescents’ and ‘electroconvulsive’, ‘therapy’. Candidate articles written in languages other than English were excluded. Articles were selected according to title and/or abstract’s content relevance, resulting in a total of 5 articles. ECT is a recognized effective treatment in adults for several psychiatric conditions. As in adults, ECT in children and adolescents is proven most beneficial in the treatment of severe mood disorders, catatonia, and, to a lesser extent, schizophrenia. ECT in adults has also been used to treat autism’s self-injurious behaviours, Tourette’s syndrome and resistant first-episode schizophrenia disorder. Despite growing evidence on its safety and effectiveness in children and adolescents, like those found in adults, ECT remains a controversial and underused treatment in patients this age, even when it is clearly indicated. There are various possible reasons to this; limited awareness among professionals (lack of knowledge and experience among child psychiatrists), stigmatic public opinion (despite positive feedback from patients and families, there is an unfavourable and inaccurate representation in the media, contributing to a negative public opinion), legal restrictions and ethical controversies (restrictive regulations such as a minimum age for administration), lack of randomized trials (the currently available studies are retrospective, with small size samples, and most of the publications are either case reports or case series). This shows the need to raise awareness and knowledge, not only for mental health professionals, but also to the general population, through the media, regarding indications, methods and safety of ECT in order to provide reliable information to the patient and families. Large-scale longitudinal studies are also useful to further demonstrate the efficacy and safety of ECT and can aid in the formulation of algorithms and guidelines as without these changes, the availability of ECT to the younger population will remain restricted by regulations and social stigma. In conclusion, these results highlight that lack of adequate knowledge and accurate information are the most important factors behind the underutilization of ECT in younger population. Mental healthcare professionals occupy a cornerstone position; if data is given by a well-informed healthcare professional instead of the media, general population (including patients and their families) will probably regard the procedure in a more favourable way. So, the starting point should be to improve health care professional’s knowledge and experience on this choice of treatment.Keywords: adolescents, children, electroconvulsive, therapy
Procedia PDF Downloads 124
68 Assessment of Pedestrian Comfort in a Portuguese City Using Computational Fluid Dynamics Modelling and Wind Tunnel
Authors: Bruno Vicente, Sandra Rafael, Vera Rodrigues, Sandra Sorte, Sara Silva, Ana Isabel Miranda, Carlos Borrego
Abstract:
Wind comfort for pedestrians is an important condition in urban areas. In Portugal, a country with 900 km of coastline, the wind direction is predominantly from north-northwest, with an average speed of 2.3 m·s⁻¹ (at 2 m height). As a result, a number of city authorities have been requesting studies of pedestrian wind comfort for new urban areas/buildings, as well as measures to mitigate wind discomfort issues related to existing structures. This work covers the evaluation of the efficiency of a set of measures to reduce the wind speed in an outdoor auditorium (open space) located in a coastal Portuguese urban area. These measures include the construction of barriers, placed upstream and downstream of the auditorium, and the planting of trees, placed upstream of the auditorium. The auditorium is constructed in the form of a porch, aligned with the north direction, which drives the wind flow within the auditorium, promoting channelling effects and increasing its speed, causing discomfort to the users of this structure. To perform the wind comfort assessment, two approaches were used: i) a set of experiments using a wind tunnel (physical approach), with a representative mock-up of the study area; ii) application of the CFD (Computational Fluid Dynamics) model VADIS (numerical approach). Both approaches were used to simulate the baseline scenario and the scenarios considering the set of measures. The physical approach was conducted through a quantitative method, using a hot-wire anemometer, and through a qualitative analysis (visualizations), using laser technology and a fog machine. Both the numerical and physical approaches were performed for three different velocities (2, 4 and 6 m·s⁻¹) and two different directions (north-northwest and south), corresponding to the prevailing wind speed and direction of the study area. The numerical results show an effective reduction (with a maximum value of 80%) of the wind speed inside the auditorium through the application of the proposed measures. A wind speed reduction in the range of 20% to 40% was obtained around the audience area for a wind direction from north-northwest. For southern winds, in the audience zone, the wind speed was reduced by 60% to 80%. Despite that, for southern winds, the design of the barriers generated additional hot spots (high wind speed), namely at the entrance to the auditorium. Thus, a change in the location of the entrance would minimize these effects. The results obtained in the wind tunnel compared well with the numerical data, also revealing the high efficiency of the proposed measures (for both wind directions).
Keywords: urban microclimate, pedestrian comfort, numerical modelling, wind tunnel experiments
Procedia PDF Downloads 230
67 Coping Strategies of Female English Teachers and Housewives to Face the Challenges Associated to the COVID-19 Pandemic Lockdown
Authors: Lisseth Rojas Barreto, Carlos Muñoz Hernández
Abstract:
The COVID-19 pandemic led to many abrupt changes, including a prolonged lockdown, which brought about work and personal challenges to the population worldwide. Among the most affected populations are women who are workers and housewives at the same time, and especially those who are also parenting. These women were faced with the challenge to perform their usual varied roles during the lockdown from the same physical space, which inevitably had strong repercussions for each of them. This paper will present some results of a research study whose main objective was to examine the possible effects that the COVID-19 pandemic lockdown may have caused in the work, social, family, and personal environments of female English teachers who are also housewives and, by extension in the teaching and learning processes that they lead. Participants included five female English language teachers of a public foreign language school, they are all married, and two of them have children. Similarly, we examined some of the coping strategies these teachers used to tackle the pandemic-related challenges in their different roles, especially those used for their language teaching role; coping strategies are understood as a repertoire of behaviors in response to incidents that can be stressful for the subject, possible challenging events or situations that involve emotions with behaviors and decision-making of people which are used in order to find a meaning or positive result (Lazarus &Folkman, 1986) Following a qualitative-case study design, we gathered the data through a survey and a focus group interview with the participant teachers who work at a public language school in southern Colombia. Preliminary findings indicate that the circumstances that emerged as a result of the pandemic lockdown affected the participants in different ways, including financial, personal, family, health, and work-related issues. Among the strategies that participants found valuable to deal with the novel circumstances, we can highlight the reorganization of the household and work tasks and the increased awareness of time management for the household, work, and leisure. Additionally, we were able to evidence that the participants faced the circumstances with a positive view. Finally, in order to cope with their teaching duties, some participants acknowledged their lack of computer or technology literacy in order to deliver their classes online, which made them find support from their students or more knowledgeable peers to cope with it. Others indicated that they used strategies such as self-learning in order to get acquainted and be able to use the different technological tools and web-based platforms available.Keywords: coping strategies, language teaching, female teachers, pandemic lockdown
Procedia PDF Downloads 106
66 Sustainable Wood Harvesting from Juniperus procera Trees Managed under a Participatory Forest Management Scheme in Ethiopia
Authors: Mindaye Teshome, Evaldo Muñoz Braz, Carlos M. M. Eleto Torres, Patricia Mattos
Abstract:
Sustainable forest management planning requires up-to-date information on the structure, standing volume, biomass, and growth rate of trees in a given forest. This kind of information is lacking for many forests in Ethiopia. The objective of this study was to quantify the population structure, diameter growth rate, and standing volume of wood from Juniperus procera trees in the Chilimo forest. A total of 163 sample plots were set up in the forest to collect the relevant vegetation data. Growth ring measurements were conducted on stem disc samples collected from 12 J. procera trees. Diameter and height measurements were recorded from a total of 1399 individual trees with dbh ≥ 2 cm. The growth rate, maximum current and mean annual increments, minimum logging diameter, and cutting cycle were estimated, and alternative cutting cycles were established. Using these data, the harvestable volume of wood was projected by combining four minimum logging diameters with five cutting cycles, following the stand table projection method. The results show that J. procera trees have an average density of 183 stems ha⁻¹, a total basal area of 12.1 m² ha⁻¹, and a standing volume of 98.9 m³ ha⁻¹. The mean annual diameter growth ranges between 0.50 and 0.65 cm year⁻¹, with an overall mean of 0.59 cm year⁻¹. The J. procera population followed a reverse J-shaped diameter distribution pattern. The maximum current annual increment in volume (CAI) occurred at around 49 years, when trees reached 30 cm in diameter. Trees showed the maximum mean annual increment in volume (MAI) at around 91 years, with a diameter of 50 cm. The simulation analysis revealed that a 40 cm MLD and a 15-year cutting cycle are the best minimum logging diameter and cutting cycle. This combination showed the largest potential harvestable volume of wood, the largest volume increments, and a 35% recovery of the initially harvested volume. It is concluded that the forest is well stocked and has a large harvestable volume of wood from J. procera trees. This will enable the country to partly meet the national wood demand through domestic wood production. The use of the current population structure and diameter growth data from tree-ring analysis enables a reliable prediction of the harvestable volume of wood. The developed model provides an indication of the productivity of the J. procera tree population and enables policymakers to develop specific management criteria for wood harvesting.
Keywords: logging, growth model, cutting cycle, minimum logging diameter
Procedia PDF Downloads 88
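The current and mean annual increments used in this abstract to select the optimum logging age are simple functions of the cumulative volume curve. The sketch below illustrates how they can be derived from a fitted growth function; the Chapman-Richards form and its parameters are arbitrary placeholders, not values fitted for J. procera.

```python
import numpy as np

# Placeholder Chapman-Richards volume growth curve, V(t) in m^3 per tree.
A, k, p = 3.0, 0.03, 2.5          # assumed parameters, not fitted values

def volume(t):
    return A * (1.0 - np.exp(-k * t)) ** p

ages = np.arange(1, 151)          # stand ages in years
V = volume(ages)

cai = np.gradient(V, ages)        # current annual increment, dV/dt
mai = V / ages                    # mean annual increment, V(t)/t

print("age of maximum CAI:", ages[np.argmax(cai)])
print("age of maximum MAI:", ages[np.argmax(mai)],
      "(CAI = MAI at this age, the classical rotation criterion)")
```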
65 Negative Environmental Impacts on Marine Seismic Survey Activities
Authors: Katherine Del Carmen Camacho Zorogastua, Victor Hugo Gallo Ramos, Jhon Walter Gomez Lora
Abstract:
Marine hydrocarbon (oil and natural gas) exploration activities are carried out using 2D, 3D and 4D seismic prospecting techniques, in which sound waves are emitted from a seismic vessel every few seconds, depending on the array of air compressors; the waves cross the rock layers at the bottom of the sea and are reflected back to the water surface. Hydrophones receive and record the reflected energy signals for cross-sectional mapping of the lithological profile in order to identify possible areas where hydrocarbon deposits may have formed. However, these activities produce several significant negative environmental impacts on the marine ecosystem and on the social and economic sectors. Therefore, the objective of the research is to publicize the negative impacts, and the environmental measures that must be carried out during the development of these activities to prevent and mitigate impacts on water quality, on the population involved (fishermen) and on the marine biota (e.g., cetaceans and fish), which are the most vulnerable. The research contains technical environmental aspects based on bibliographic sources: environmental studies approved by the Peruvian authority, research articles, undergraduate and postgraduate theses, books, guides, and manuals from Spain, Australia, Canada, Brazil, and Mexico. It describes the negative impacts on the environment and the population (fishing sector); the environmental prevention, mitigation, recovery and compensation measures that must be properly implemented; and cases of marine species strandings worldwide, for which international experiences from Spain, Madagascar, Mexico, Ecuador, Uruguay, and Peru were referenced. Negative impacts on marine fauna, seawater quality, and the socioeconomic sector (fishermen) were identified. The omission of, or inadequate, biological monitoring of mammals could lead to alterations in their ability to communicate, feed, and move, resulting in their stranding and death. In fish, the surveys can cause lethal physical and physiological damage and changes in behavior. Inadequate wastewater treatment and waste management could increase the organic load and oily waste in the seawater, harming marine flora and fauna. The possible displacement of marine resources (fish) affects the economic sector, as fishermen carry out their fishing activity for consumption or sale. Finally, it is concluded from the experiences gathered from Spain, Madagascar, Mexico, Ecuador, Uruguay, and Peru that there is a cause-and-effect relationship between the inadequate development of seismic exploration activities (cause) and marine species strandings (effect), since over the years stranded or dead marine mammals have been detected on seashores in areas of seismic hydrocarbon acquisition. In this regard, it is recommended to establish technical procedures, guidelines, and protocols for the monitoring of marine species in order to contribute to the conservation of hydrobiological resources.
Keywords: 3D seismic prospecting, cetaceans, significant environmental impacts, prevention, mitigation, recovery, environmental compensation
Procedia PDF Downloads 185
64 Exploratory Study to Obtain a Biolubricant Base from Transesterified Oils of Animal Fats (Tallow)
Authors: Carlos Alfredo Camargo Vila, Fredy Augusto Avellaneda Vargas, Debora Alcida Nabarlatz
Abstract:
Due to the current need to implement environmentally friendly technologies, the possibility of using renewable raw materials to produce bioproducts such as biofuels or, in this case, biolubricant bases from residual oils (tallow) originating from the bovine industry has been studied. Therefore, it is hypothesized that, through the study and control of the operating variables involved in the reverse transesterification method, a high-performance biolubricant base can be obtained on a laboratory scale using animal fats from the bovine industry as raw materials, as an alternative for material recovery and environmental benefit. To implement this process, esterification of the crude tallow oil must be carried out first, which allows the acidity index (> 1 mg KOH/g oil) to be decreased by means of acid catalysis with sulfuric acid and methanol (7.5:1 methanol:tallow molar ratio, 1.75% w/w catalyst) at 60°C for 150 minutes. Once this conditioning is complete, biodiesel is obtained from the improved tallow, for which an experimental design for the transesterification method is implemented, evaluating the effects of the process variables, such as the methanol:improved tallow molar ratio and the catalyst (KOH) percentage, on the methyl ester content (% FAME). The highest FAME content (92.5%) was obtained with a 7.5:1 methanol:improved tallow ratio and 0.75% catalyst at 60°C for 120 minutes. Although the % FAME of the biodiesel produced does not make it suitable for commercialization, it is sufficient (> 90%) for use as a raw material for obtaining biolubricant bases. Finally, once the biodiesel is obtained, an experimental design is carried out to obtain biolubricant bases using the reverse transesterification method, which allows the study of the effects of the biodiesel:TMP (trimethylolpropane) molar ratio and the percentage of catalyst on viscosity and yield as response variables. As a result, a biolubricant base is obtained that meets the ISO VG 32 requirements (classification for industrial lubricants according to ASTM D 2422) for viscosity and viscosity index for commercial lubricant bases, using a 4:1 biodiesel:TMP molar ratio and 0.51% catalyst at 120°C and a pressure of 50 mbar for 180 minutes. It should be highlighted that the product obtained consists of two phases, one liquid and one solid; the first was the object of study, while the classification and possible applications of the second remain unknown. Therefore, it is recommended to carry out more in-depth studies to characterize both phases, as well as to improve the production method by optimizing the process variables and thus achieve better results.
Keywords: biolubricant base, bovine tallow, renewable resources, reverse transesterification
Procedia PDF Downloads 115
63 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado
Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha
Abstract:
The metabolism of streams is an indicator of ecosystem disturbance due to the influences of the catchment on the structure of the water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed at evaluating the metabolism in streams located in the Brazilian savannah, the Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water during one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR) and Canchim (CA). Every two months, water temperature, pH and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Also, canopy cover percentages are estimated in situ with a spherical densitometer. Stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one-station method described in the literature. Three sampling campaigns were carried out in October and December 2015 and February 2016 (the next will be in April, June and August 2016). The results from the first two periods are already available. The mean water temperatures in the streams were 20.0 ± 0.8°C (Oct) and 20.7 ± 0.5°C (Dec). In general, electrical conductivity values were low (ES: 20.5 ± 3.5 µS/cm; BR: 5.5 ± 0.7 µS/cm; CA: 33 ± 1.4 µS/cm). The mean pH values were 5.0 (BR), 5.7 (ES) and 6.4 (CA). The mean concentrations of total phosphorus were 8.0 µg/L (BR), 66.6 µg/L (ES) and 51.5 µg/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 µg/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) as compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 ± 6 L/s (ES), 11.4 ± 3 L/s (BR) and 2.4 ± 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR) and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded the gross primary production, with negative net primary production). The GPP varied from 0 to 0.4 g/m².d (in Oct and Dec), and R varied from 0.9 to 22.7 g/m².d (Oct) and from 0.9 to 7 g/m².d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of these ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of the metabolism in pristine streams can help define natural reference conditions of trophic state.
Keywords: low-order streams, metabolism, net primary production, trophic state
Procedia PDF Downloads 258
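The one-station method mentioned in this abstract treats the diel DO signal as a balance of photosynthesis, respiration and reaeration. A minimal sketch of that mass balance is given below; the forward-Euler formulation and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def simulate_do(par, gpp_daily, er_daily, k2, do_sat, do0, depth_m, dt_d):
    """One-station DO mass balance (forward Euler, uniform time step dt_d in days):
    dDO/dt = GPP(t)/z - ER/z + k2*(DOsat - DO)
    GPP is distributed over the day in proportion to PAR; ER is constant.
    Units: GPP, ER in g O2 m^-2 d^-1; k2 in d^-1; depth z in m; DO in g m^-3."""
    gpp_inst = gpp_daily * par / (par.sum() * dt_d)   # instantaneous GPP rate
    do = np.empty(len(par))
    do[0] = do0
    for i in range(1, len(par)):
        ddo_dt = (gpp_inst[i] - er_daily) / depth_m + k2 * (do_sat - do[i - 1])
        do[i] = do[i - 1] + ddo_dt * dt_d
    return do

# Example: 10-minute logging over one day, with assumed parameter values.
dt = 10 / (60 * 24)                                        # time step in days
t = np.arange(0, 1, dt)
par = np.clip(np.sin((t - 0.25) * 2 * np.pi), 0, None)     # idealized light curve
do = simulate_do(par, gpp_daily=0.4, er_daily=7.0, k2=20.0,
                 do_sat=8.0, do0=7.5, depth_m=0.3, dt_d=dt)
```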
62 Thermoregulatory Responses of Holstein Cows Exposed to Intense Heat Stress
Authors: Rodrigo De A. Ferrazza, Henry D. M. Garcia, Viviana H. V. Aristizabal, Camilla De S. Nogueira, Cecilia J. Verissimo, Jose Roberto Sartori, Roberto Sartori, Joao Carlos P. Ferreira
Abstract:
Environmental factors adversely influence sustainability in livestock production systems. Dairy herds are the most affected by heat stress among livestock industries. This clearly calls for the development of new strategies for mitigating heat, which should be based on the physiological and metabolic adaptations of the animal. In this study, we incorporated the effect of climate variables and heat exposure time on the thermoregulatory responses in order to clarify the adaptive mechanisms for bovine heat dissipation under intense thermal stress induced experimentally in a climate chamber. Non-lactating Holstein cows were contemporaneously and randomly assigned to thermoneutral (TN; n=12) or heat stress (HS; n=12) treatments for 16 days. Vaginal temperature (VT) was measured every 15 min with a microprocessor-controlled data logger (HOBO®, Onset Computer Corporation, Bourne, MA, USA) attached to a modified vaginal controlled internal drug release insert (Sincrogest®, Ourofino, Brazil). Rectal temperature (RT), respiratory rate (RR) and heart rate (HR) were measured twice a day (0700 and 1500h), and dry matter intake (DMI) was estimated daily. The ambient temperature and air relative humidity were 25.9±0.2°C and 73.0±0.8%, respectively, for TN, and 36.3±0.3°C and 60.9±0.9%, respectively, for HS. The respiratory rate of HS cows increased immediately after exposure to heat and was higher (76.02±1.70 bpm; P<0.001) than that of TN cows (39.70±0.71 bpm), followed by a rise in RT (39.87±0.07°C for HS versus 38.56±0.03°C for TN; P<0.001) and VT (39.82±0.10°C for HS versus 38.26±0.03°C for TN; P<0.001). A diurnal pattern was detected, with higher (P<0.01) temperatures in the afternoon than in the morning, and this effect was aggravated in HS cows. There was a decrease (P<0.05) in HR for HS cows (62.13±0.99 bpm) compared to TN (66.23±0.79 bpm), but the magnitude of the differences was not the same over time. From the third day, there was a decrease in DMI for HS cows in an attempt to maintain homeothermy, while TN cows increased DMI (8.27±0.33 kg d⁻¹ for HS versus 14.03±0.29 kg d⁻¹ for TN; P<0.001). Regression analysis showed that RT and RR best reflected the response of cows to changes in the Temperature Humidity Index, and that the effect of the previous day's climate variables on the physiological parameters and DMI was more important than that of the current day, with ambient temperature being the most important factor. Comparison between acute (0 to 3 days) and chronic (13 to 16 days) exposure to heat stress showed a decrease in the slope of the regression equations for RR and DMI, suggesting an adaptive adjustment, although with no change for RT. In conclusion, intense heat stress exerted a strong influence on the thermoregulatory mechanisms, but the acclimation process was only partial.
Keywords: acclimation, bovine, climate chamber, hyperthermia, thermoregulation
Procedia PDF Downloads 218
61 Innovation Outputs from Higher Education Institutions: A Case Study of the University of Waterloo, Canada
Authors: Wendy De Gomez
Abstract:
The University of Waterloo is situated in central Canada in the Province of Ontario, one hour from the metropolitan city of Toronto. For over 30 years, it has held Canada's top spot as the most innovative university and has been consistently ranked in the top 25 computer science and top 50 engineering schools in the world. Waterloo benefits from the federal government's over 100 domestic innovation policies, which have assisted in the country's 15th-place global ranking in the World Intellectual Property Organization's (WIPO) 2022 Global Innovation Index. Yet undoubtedly, the University of Waterloo's unique characteristics are what propel its innovative creativity forward. This paper will provide a contextual definition of innovation in higher education and then demonstrate the five operational attributes that contribute to the University of Waterloo's innovative reputation. The methodology is based on statistical analyses obtained from ranking bodies such as the QS World University Rankings, a secondary literature review related to higher education innovation in Canada, and case studies that exhibit the operationalization of the attributes outlined below. The first attribute is geography. Specifically, the paper investigates the network structure effect of the Toronto-Waterloo high-tech corridor and the resultant industrial relationships built there. The second attribute is University Policy 73 - Intellectual Property Rights. This creator-owned policy grants all ownership to the creator/inventor regardless of the use of University of Waterloo property or funding. Essentially, through the incentivization of IP ownership by all researchers, further commercialization and entrepreneurship are fostered. Third, this IP policy works hand in hand with world-renowned business incubators such as the Accelerator Centre, in the dedicated research and technology park, and Velocity, a 14-year-old facility that equips and guides founders to build and scale companies. Communitech, a 25-year-old provincially backed facility in the region, also works closely with the University of Waterloo to build strong teams, access capital, and commercialize products. Fourth, Waterloo's co-operative education program contributes 31% of all co-op participants to the Canadian economy. Home to the world's largest co-operative education program, the university's data show that over 7,000 employers from around the world recruit Waterloo students for short- and long-term placements, directly contributing to the students' ability to learn and optimize essential employment skills by the time they graduate. Finally, the students themselves at Waterloo are exceptional. The entrance average ranges from the low 80s to the mid-90s depending on the program. In computer, electrical, mechanical, mechatronics, and systems design engineering, to have a 66% chance of acceptance, the applicant's average must be 95% or above. Taken singly, none of these five attributes could account for the university's outstanding track record of innovative creativity, but when bundled up into a 1,000-acre, 100-building main campus with 6 academic faculties, 40,000+ students, and over 1,300 world-class faculty, the recipe for success becomes quite evident.
Keywords: IP policy, higher education, economy, innovation
Procedia PDF Downloads 70
60 Cuban's Supply Chains Development Model: Qualitative and Quantitative Impact on Final Consumers
Authors: Teresita Lopez Joy, Jose A. Acevedo Suarez, Martha I. Gomez Acosta, Ana Julia Acevedo Urquiaga
Abstract:
Current trends in business competitiveness indicate the need to manage businesses as supply chains and not in isolation. The use of strategies aimed at the maximum satisfaction of customers in a network, based on inter-company cooperation, contributes to obtaining successful joint results. In the Cuban economic context, the development of productive linkages to achieve integrated management of supply chains is considered a key aspect. In order to achieve this jump, it is necessary to develop acting capabilities in the entities that make up the chains, through a systematic procedure that allows arriving at a management model in consonance with the environment. The objective of the research focuses on designing a model and procedure for the development of integrated management of supply chains in economic entities. The results obtained are the Model and the Procedure for the Development of Supply Chains Integrated Management (MP-SCIM). The Model is based on the development of logistics in the network actors, joint work between companies, collaborative planning and the monitoring of a main indicator aligned with the end customers. The application Procedure starts from the well-founded need for development in a supply chain and focuses on training entrepreneurs as doers. Characterization and diagnosis are carried out in order to later define the design of the network and the relationships between the companies. Feedback is taken into account as a method of updating the conditions and of focusing the objectives on the final customers. The MP-SCIM is the result of systematic work with a supply chain approach in companies that have consolidated themselves as coordinators of their networks. The cases of the edible oil chain and the explosives-for-construction chain show the most remarkable advances, since they have applied this approach for more than 5 years and maintain it as a general strategy of successful development. The edible oil trading company experienced a jump in sales. In 2006, the company started the analysis in order to define the supply chain, apply diagnosis techniques, define problems and implement solutions. The involvement of management and the progressive formation of performance capacities in the personnel allowed the application of tools according to the context. The company that coordinates the explosives chain for the construction sector shows adequate training, with independence and timeliness in the face of different situations and variations in its business environment. The appropriation of tools and techniques for the analysis and implementation of proposals is a characteristic feature of this case. The coordinating entity applies integrated supply chain management to its decisions, based on the timely training of the action capabilities necessary for each situation. Other case studies and applications that validate these tools are also detailed in this paper; they highlight the results of generalization in the quantitative and qualitative improvements achieved for the final clients. These cases are: teaching literature in universities, agricultural products of local scope and medicine supply chains.
Keywords: integrated management, logistic system, supply chain management, tactical-operative planning
Procedia PDF Downloads 153
59 The Democracy of Love and Suffering in the Erotic Epigrams of Meleager
Authors: Carlos A. Martins de Jesus
Abstract:
The Greek Anthology, first put together in the tenth century AD, gathers in two separate books a large number of epigrams devoted to love and its consequences, both heterosexual (book V) and homosexual (book XII) in nature. While some poets wrote epigrams of only one kind (that is the case of Strato (II cent. BC), the organizer of a widespread garland of homosexual epigrams), several others composed in both categories, often using the same topics of love and suffering. Using Plato's theorization of two different kinds of Eros (Symp. 180d-182a), the popular (pandemos) and the celestial (ouranios), homoerotic epigrammatic love is more often associated with the first one, while heterosexual poetry tends to be connected to a higher form of love. This paper focuses on the epigrammatic production of a single first-century BC poet, Meleager, aiming to identify the similarities and differences in how he sings both kinds of love. From Meleager, the Greek Anthology (a garland whose origins have been traced back to the poet's own garland) preserves more than sixty heterosexual and forty-eight homosexual epigrams, an important and unprecedented number of poems, sufficient to trace a complete profile of his way of singing love. Meleager's poetry deals with personal experience and emotions, frequently with love and the unhappiness that usually comes from it. Most of the time he describes himself not as an active and engaged lover, but as one struck by the beauty of a woman or boy, i.e., in a stage prior to erotic consummation. His epigrams represent the unreal and fantastic (literally speaking) world of the lover, in which imagery and wordplay are used to convey emotion in the epigrams of both kinds. Elsewhere Meleager surprises the reader by offering a surrealist or dreamlike landscape where everyday adventures are transcribed into elaborate metaphors for erotic feeling. For instance, in 12.81, the lovers are shipwrecked, and as soon as they have disembarked, they are promptly kidnapped by a figure who is both Eros and a beautiful boy. Particularly in the homosexual poems collected in Book XII (and it is worth asking why this is so), mythology also plays an important role, namely in the figure and scene of Ganymede's abduction by Zeus to his royal court (12.70, 94). While mostly refusing the Hellenistic model of the dramatic love epigram, in which a small everyday scene is portrayed (5.182 being a clear exception to this near-rule), Meleager focuses on the tumultuous inner life of his (poetic) lovers, in the realm of a subject who feels love and pain far beyond his or her erotic preferences. In relation to loving and suffering (mostly suffering, it has to be said), Meleager's love is therefore completely democratic. There is no real place in his epigrams for the traditional association mentioned above between homoeroticism and a carnal, erotic, pornographic love, with heterosexual love as the more even and pure, so to speak.
Keywords: epigram, erotic epigram, Greek Anthology, Meleager
Procedia PDF Downloads 254
58 Orange Leaves and Rice Straw on Methane Emission and Milk Production in Murciano-Granadina Dairy Goat Diet
Authors: Tamara Romero, Manuel Romero-Huelva, Jose V. Segarra, Jose Castro, Carlos Fernandez
Abstract:
Many foods resulting from processing and manufacturing end up as waste, most of which is burned, dumped into landfills or used as compost, which leads to wasted resources and environmental problems due to unsuitable disposal. Using residues of the crop and food processing industries to feed livestock has the advantage of obviating the need for costly waste management programs. The main residues generated by citrus cultivation and rice cropping are pruning waste and rice straw, respectively. Within Spain, the Valencian Community is one of the world's oldest citrus and rice production areas. The objective of this experiment was to find out the effects of including orange leaves and rice straw as ingredients in the concentrate diets of goats on milk production and methane (CH₄) emissions. Ten Murciano-Granadina dairy goats (45 kg of body weight, on average) in mid-lactation were selected in a crossover design experiment, where each goat received two treatments in 2 periods. Both groups were fed 1.7 kg of a pelleted mixed ration; one group (n=5) was a control (C) and the other group (n=5) received orange leaves and rice straw (OR). The forage was alfalfa hay, and it was the same for the two groups (1 kg of alfalfa was offered per goat and day). The diets were formulated to meet the requirements of caprine livestock during the lactation period. The goats were allocated to individual metabolism cages. After 14 days of adaptation, feed intake and milk yield were recorded daily over a 5-day period. Physico-chemical parameters and somatic cell count in milk samples were determined. Then, gas exchange measurements were recorded individually by an open-circuit indirect calorimetry system using a head box. The data were analyzed by a mixed model with diet and digestibility as fixed effects and goat as a random effect. No differences were found for dry matter intake (2.23 kg/d, on average). Higher milk yield was found for the C diet than for OR (2.3 vs. 2.1 kg/goat and day, respectively), and greater milk fat content was observed for OR than for C (6.5 vs. 5.5%, respectively). The cheese extract was also greater in OR than in C (10.7 vs. 9.6%). Goats fed the OR diet produced significantly fewer CH₄ emissions than those fed the C diet (27 vs. 30 g/d, respectively). These preliminary results (LIFE Project LOWCARBON FEED LIFE/CCM/ES/000088) suggested that the use of these waste by-products was effective in reducing CH₄ emission without a detrimental effect on milk yield.
Keywords: agricultural waste, goat, milk production, methane emission
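The abstract describes the statistical analysis only briefly (a mixed model with diet and digestibility as fixed effects and goat as a random effect). The following is a minimal sketch of that kind of model using statsmodels in Python; the data frame, column names and simulated values are illustrative assumptions, not the study's data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative crossover data: 10 goats, each measured on both diets (C and OR).
rng = np.random.default_rng(0)
goats = [f"g{i}" for i in range(1, 11)]
rows = []
for period, diet in enumerate(["C", "OR"], start=1):
    for g in goats:
        ch4 = 30.0 - 3.0 * (diet == "OR") + rng.normal(0, 1)   # simulated CH4, g/d
        rows.append({"goat": g, "period": period, "diet": diet,
                     "digestibility": rng.uniform(0.65, 0.72), "ch4_g_d": ch4})
df = pd.DataFrame(rows)

# Mixed model: diet and digestibility as fixed effects, goat as a random intercept.
model = smf.mixedlm("ch4_g_d ~ diet + digestibility", data=df, groups=df["goat"])
print(model.fit().summary())
```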
Procedia PDF Downloads 148
57 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos
Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling
Abstract:
Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person/object, they will move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head towards the monitor screen compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli, in a counterbalanced order, to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± SEM, standard error of the mean) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in the boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, hearing one song (counterbalanced) as audio only and the other song as a music video. Movement was measured by video tracking using Kinovea 0.8, based on recordings from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-values format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83; P = 0.0066, Wilcoxon signed-rank test (WSRT); Cohen's d = 0.658; N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) compared to the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT). However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT). If anything, during the audio-only condition, they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) compared to the audio-only condition (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and their heads were held 7 mm higher (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above the floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally-made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to control the head towards a more central and upright viewing position and to suppress head fidgeting.
Keywords: boredom, engagement, music videos, posture, proxemics
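The study's paired comparisons were analyzed in Matlab with Wilcoxon signed-rank tests; the sketch below shows an equivalent analysis in Python with SciPy, including an effect size on the paired differences. The numeric values are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired per-participant head speeds (mm/s) under the two conditions
# (illustrative numbers only).
audio_only = np.array([0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.44, 0.68])
music_video = np.array([0.31, 0.28, 0.35, 0.25, 0.33, 0.30, 0.22, 0.36])

# Two-sided Wilcoxon signed-rank test on the paired differences.
stat, p_value = wilcoxon(audio_only, music_video)

# Cohen's d computed on the paired differences, as an effect-size companion.
diff = audio_only - music_video
cohens_d = diff.mean() / diff.std(ddof=1)
print(f"W = {stat:.1f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```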
Procedia PDF Downloads 167
56 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals
Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen
Abstract:
Background: Computer Science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres have a growing number and complexity of technological devices, and the population consumes and demands services derived from technology. Nursing education plans have also included competencies related to new technologies, and courses about them are even offered to health professionals. However, nurses still limit their performance to the use and evaluation of products previously built. Objective: To develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended-learning teaching with a group of graduate nurses through official training within a Master's Degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers from 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. The assessment was carried out by a written test consisting of the preparation of an IEEE 830 Standard Specification document, in which the subject chosen by the student had to be a problem in the area of care. Results: The sample is made up of 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing and one a degree in Computer Engineering. Two students had a nursing specialty obtained through residency and two through equivalent recognition by an exceptional route. Except for the engineer, no subject had previously received training in this regard. The whole sample enrolled in the course received the classroom teaching session, had access to the teaching material through a virtual area and attended at least one tutorial. The maximum number of tutorials was three, amounting to one hour in total. Among the material available for consultation was an example of a document drawn up based on the IEEE Standard on an issue not related to care. The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the characteristics of the document and the degree of comprehension shown in the elaboration of the problem and solution; the nurses assessed the relevance of the chosen problem statement, the foundation, originality and correctness of the proposed solution, and the validity of the application for clinical practice in care. The results gave an average grade of 8.1 out of 10 points, with a range between 6 and 10. The selected topics rarely coincided among the students. Examples of the care areas selected are care plans, family and community health, delivery care, administration and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence and the duty to participate in the process of construction of technological tools that are going to impact the care of people, families and the community.
Keywords: care, learning, nursing, technology
Procedia PDF Downloads 136
55 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection
Authors: S. Delgado, C. Cerrada, R. S. Gómez
Abstract:
This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations that carry no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
Keywords: voxelization, GPU acceleration, computer graphics, compute shaders
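To make the scan-line idea concrete, the sketch below voxelizes a single triangle on the CPU by sweeping parallel, equidistant lines (here chosen parallel to the edge v1-v2) and sampling each line at sub-voxel steps; a simplified adjacency check inserts an extra line when two consecutive lines share no 26-adjacent voxels. This is only a Python illustration under assumed parameters (step sizes, line spacing, the crude gap repair), not the authors' GLSL compute shader or their Gap Detection algorithm.

```python
import numpy as np

def _scanline_voxels(a, b, voxel_size):
    """Voxels touched when sampling the segment a->b at sub-voxel steps."""
    n = max(2, int(np.ceil(np.linalg.norm(b - a) / (0.5 * voxel_size))) + 1)
    ts = np.linspace(0.0, 1.0, n)
    pts = a[None, :] + ts[:, None] * (b - a)[None, :]
    return set(map(tuple, np.floor(pts / voxel_size).astype(int)))

def voxelize_triangle(v0, v1, v2, voxel_size):
    """Surface-voxelize one triangle with parallel, equidistant scan lines."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    height = max(np.linalg.norm(v1 - v0), np.linalg.norm(v2 - v0))
    n_lines = max(2, int(np.ceil(height / (0.5 * voxel_size))) + 1)

    def line_at(t):
        # Both endpoints slide along the edges from v0, so every line is
        # parallel to the edge v1->v2 and the lines are equidistant in t.
        return _scanline_voxels(v0 + t * (v1 - v0), v0 + t * (v2 - v0), voxel_size)

    def adjacent(s1, s2):
        return any(max(abs(x - u), abs(y - v), abs(z - w)) <= 1
                   for (x, y, z) in s1 for (u, v, w) in s2)

    voxels, prev, prev_t = set(), None, 0.0
    for t in np.linspace(0.0, 1.0, n_lines):
        cur = line_at(t)
        if prev is not None and not adjacent(prev, cur):
            cur |= line_at(0.5 * (prev_t + t))   # crude gap repair: one extra line
        voxels |= cur
        prev, prev_t = cur, t
    return voxels

# Example: voxelize one triangle at voxel size 0.25.
tri = ([0.0, 0.0, 0.0], [1.0, 0.0, 0.3], [0.0, 1.0, 0.6])
print(len(voxelize_triangle(*tri, voxel_size=0.25)), "voxels")
```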
Procedia PDF Downloads 72
54 Thermoplastic-Intensive Battery Trays for Optimum Electric Vehicle Battery Pack Performance
Authors: Dinesh Munjurulimana, Anil Tiwari, Tingwen Li, Carlos Pereira, Sreekanth Pannala, John Waters
Abstract:
With the rapid transition to electric vehicles (EVs) across the globe, car manufacturers are in need of integrated and lightweight solutions for the battery packs of these vehicles. An integral part of a battery pack is the battery tray, which constitutes a significant portion of the pack's overall weight. Based on the functional requirements, cost targets, and packaging space available, a range of materials, from metals to composites and plastics, is often used to develop these battery trays. This paper considers the design and development of integrated thermoplastic-intensive battery trays, using the available packaging space from a representative EV battery pack. Multiple concepts are presented as a proposed alternative, integrating several connected systems, such as cooling plates and underbody impact protection parts, of a multi-piece incumbent battery pack. The resulting digital prototype was evaluated for several mechanical performance measures, such as mechanical shock, drop, crush resistance, modal analysis, and torsional stiffness. The performance of this alternative design is then compared with the incumbent solution. In addition, insights are gleaned into how these novel approaches can be optimized to meet or exceed the performance of incumbent designs. The preliminary manufacturing feasibility of the optimal solution using injection molding and other commonly used manufacturing methods for thermoplastics is briefly explained. Numerical and analytical evaluations are then performed to show a representative Pareto front of cost vs. volume of the production parts. The proposed solution is observed to offer weight savings of up to 40% at the component level and the elimination of up to two systems in the battery pack of a typical battery EV, while offering the potential to meet the required performance measures highlighted above. These conceptual solutions are also observed to potentially offer secondary benefits, such as improved thermal and electrical isolation and the ability to achieve complex geometrical features, thus demonstrating the ability to use the complete packaging space available in the vehicle platform considered. The detailed study presented in this paper serves as a valuable reference for researchers across the globe working on the development of EV battery packs, especially those with an interest in employing alternate solutions as part of a mixed-material system to help capture untapped opportunities to optimize performance and meet critical application requirements.
Keywords: thermoplastics, lightweighting, part integration, electric vehicle battery packs
Procedia PDF Downloads 205
53 A Clinical Cutoff to Identify Metabolically Unhealthy Obese and Normal-Weight Phenotype in Young Adults
Authors: Lívia Pinheiro Carvalho, Luciana Di Thommazo-Luporini, Rafael Luís Luporini, José Carlos Bonjorno Junior, Renata Pedrolongo Basso Vanelli, Manoel Carneiro de Oliveira Junior, Rodolfo de Paula Vieira, Renata Trimer, Renata G. Mendes, Mylène Aubertin-Leheudre, Audrey Borghi-Silva
Abstract:
Rationale: Cardiorespiratory fitness (CRF) and functional capacity in young obese and normal-weight people are associated with metabolic and cardiovascular diseases and mortality. However, it remains unclear whether the metabolically healthy (MH) or at-risk (AR) phenotype influences cardiorespiratory fitness, not only in vulnerable populations such as obese adults but also in normal-weight people. The HOMA insulin resistance index (HI) and the leptin-adiponectin ratio (LA) are strong markers for characterizing those phenotypes, which we hypothesized to be associated with physical fitness. We also hypothesized that an easy and feasible exercise test could identify a subpopulation at risk of developing metabolic and related disorders. Methods: Thirty-nine sedentary men and women (20-45y; 18.5
Keywords: aerobic capacity, exercise, fitness, metabolism, obesity, 6MST
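The methods text above is truncated in the source, so only the two markers it names can be illustrated. The sketch below computes them using the standard HOMA formulation (fasting insulin in µU/mL multiplied by fasting glucose in mmol/L, divided by 22.5) and a simple leptin-to-adiponectin quotient; the function names, units and example values are assumptions for illustration, not the study's procedures or data.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Standard HOMA insulin-resistance index (the 'HI' marker in the abstract)."""
    return (fasting_glucose_mmol_l * fasting_insulin_uU_ml) / 22.5

def leptin_adiponectin_ratio(leptin_ng_ml: float, adiponectin_ug_ml: float) -> float:
    """Leptin-adiponectin ratio (the 'LA' marker); units depend on the assays used."""
    return leptin_ng_ml / adiponectin_ug_ml

# Illustrative subject, not study data: glucose 5.1 mmol/L, insulin 12 uU/mL,
# leptin 18 ng/mL, adiponectin 7 ug/mL.
hi = homa_ir(5.1, 12.0)
la = leptin_adiponectin_ratio(18.0, 7.0)
print(f"HOMA-IR = {hi:.2f}, L/A ratio = {la:.2f}")
```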
Procedia PDF Downloads 353
52 Augusto De Campos Translator: The Role of Translation in Brazilian Concrete Poetry Project
Authors: Juliana C. Salvadori, Jose Carlos Felix
Abstract:
This paper aims at discussing the role literary translation has played in the Brazilian Concrete Poetry Movement, an aesthetic, critical and pedagogical project which conceived translation as poiesis, i.e., as both creative and critical work in which the potency (dynamic) of the literary work is unfolded in the interpretive and critical act (energeia) that the translating practice demands. We argue that translation, for the concrete poets, is conceived within the framework provided by the reinterpretation, or deglutition, of Oswald de Andrade's anthropophagy, a carefully selected feast from which the poets pick and model their Paideuma. As a case study, we propose to approach and analyze two of Augusto de Campos's long-term translation projects: the translation of Emily Dickinson's and E. E. Cummings's works for Brazilian readers. Augusto de Campos is a renowned poet, translator, critic and one of the founding members of the Brazilian Concrete Poetry movement. Since the 1950s he has produced a consistent body of poetry translated from English-speaking poets, in which the translator has explored creative translation processes: transcreation, as the concrete poets have named it. Campos's translation project regarding E. E. Cummings's poetry spans forty years: it begins in 1956 with 10 poems and unfolds in 4 works: 20 poem(a)s, 40 poem(a)s, Poem(a)s, re-edited in 2011. His translations of Dickinson's poetry are published in two works: O Anticrítico (1986), in which he translated 10 poems, and Emily Dickinson Não sou Ninguém (2008), in which the poet-translator added 35 more translated poems. Both projects feature bilingual editions: contrary to common sense, Campos's translations aim to be read as such: the target readers, to fully enjoy the experience, must be proficient readers of English and also acquainted with the poets in translation. Campos expects us to perform translation criticism, as Antoine Berman has proposed, by assessing the choices he, as both translator and poet, has presented in order to privilege aesthetic information (verse lines, word games, etc.). For readers not proficient in English, his translations play the pedagogical role of educating and preparing them to read both the translated poets' works and concrete poetry works; the detailed essays and prefaces in which the translator emphasizes the selection of works translated and the strategies adopted enlighten his project as a translator: for Cummings, it has led to the obliteration of the more traditional and lyrical/romantic examples of his poetry while highlighting the more experimental aspects and poems; for Dickinson, his project has highlighted the more hermetic traits of her poems. In this work, we analyze Campos's contribution to the domestic canons of both poets in the Brazilian literary system.
Keywords: translation criticism, Augusto de Campos, E. E. Cummings, Emily Dickinson
Procedia PDF Downloads 295
51 Exploring Antimicrobial Resistance in the Lung Microbial Community Using Unsupervised Machine Learning
Authors: Camilo Cerda Sarabia, Fernanda Bravo Cornejo, Diego Santibanez Oyarce, Hugo Osses Prado, Esteban Gómez Terán, Belén Diaz Diaz, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Antimicrobial resistance (AMR) represents a significant and rapidly escalating global health threat. Projections estimate that by 2050, AMR infections could claim up to 10 million lives annually. Respiratory infections, in particular, pose a severe risk not only to individual patients but also to the broader public health system. Despite the alarming rise in resistant respiratory infections, AMR within the lung microbiome (microbial community) remains underexplored and poorly characterized. The lungs, as a complex and dynamic microbial environment, host diverse communities of microorganisms whose interactions and resistance mechanisms are not fully understood. Unlike studies that focus on individual genomes, analyzing the entire microbiome provides a comprehensive perspective on microbial interactions, resistance gene transfer, and community dynamics, which are crucial for understanding AMR. However, this holistic approach introduces significant computational challenges and exposes the limitations of traditional analytical methods, such as the difficulty of identifying AMR. Machine learning has emerged as a powerful tool to overcome these challenges, offering the ability to analyze complex genomic data and uncover novel insights into AMR that might be overlooked by conventional approaches. This study investigates microbial resistance within the lung microbiome using unsupervised machine learning approaches to uncover resistance patterns and potential clinical associations. We downloaded and selected lung microbiome data from HumanMetagenomeDB based on metadata characteristics such as relevant clinical information, patient demographics, environmental factors, and sample collection methods. The metadata was further complemented by details on antibiotic usage, disease status, and other relevant descriptions. The sequencing data underwent stringent quality control, followed by functional profiling focused on identifying resistance genes through specialized databases such as the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. Subsequent analyses employed unsupervised machine learning techniques to unravel the structure and diversity of resistomes in the microbial community. Among the methods employed were clustering techniques such as K-Means and hierarchical clustering, which enabled the identification of sample groups based on their resistance gene profiles. The work was implemented in Python, leveraging a range of libraries such as Biopython for biological sequence manipulation, NumPy for numerical operations, scikit-learn for machine learning, Matplotlib for data visualization and Pandas for data manipulation. The findings from this study provide insights into the distribution and dynamics of antimicrobial resistance within the lung microbiome. By leveraging unsupervised machine learning, we identified novel resistance patterns and potential drivers within the microbial community.
Keywords: antibiotic resistance, microbial community, unsupervised machine learning, AMR gene sequences
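The abstract names the unsupervised step (K-Means and hierarchical clustering of per-sample resistance-gene profiles, implemented in Python with scikit-learn, NumPy and Pandas). The sketch below illustrates that step only; the resistome matrix, gene names and cluster counts are simulated assumptions, not the study's data or pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering

# Illustrative resistome matrix: rows are lung-microbiome samples, columns are
# abundances of AMR genes (e.g. as annotated against CARD); simulated data.
rng = np.random.default_rng(42)
genes = ["mexA", "mexB", "oprM", "ermB", "tetM", "blaTEM"]
profiles = pd.DataFrame(rng.gamma(shape=2.0, scale=1.0, size=(20, len(genes))),
                        columns=genes,
                        index=[f"sample_{i:02d}" for i in range(20)])

# Standardize gene abundances so no single gene dominates the distance metric.
X = StandardScaler().fit_transform(profiles)

# K-Means partitioning of samples by resistance-gene profile.
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Agglomerative (hierarchical) clustering with Ward linkage, for comparison.
hier_labels = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X)

print(pd.DataFrame({"kmeans": kmeans_labels, "hierarchical": hier_labels},
                   index=profiles.index))
```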
Procedia PDF Downloads 23