Search results for: Jaime León
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 122

62 Investigation on the Capacitive Deionization of Functionalized Carbon Nanotubes (F-CNTs) and Silver-Decorated F-CNTs for Water Softening

Authors: Khrizelle Angelique Sablan, Rizalinda De Leon, Jaeyoung Lee, Joey Ocon

Abstract:

The impending water shortage drives the search for alternative sources of water. One of the possible solutions is desalination of seawater. There are numerous processes by which it can be done, one of which is capacitive deionization, a relatively new technique for water desalination that utilizes the electric double layer for ion adsorption. Carbon-based materials are commonly used as electrodes for capacitive deionization. In this study, carbon nanotubes (CNTs) were treated in a mixture of nitric and sulfuric acid, and silver decoration was also performed to incorporate antimicrobial action. The acid-treated carbon nanotubes (f-CNTs) and silver-decorated f-CNTs (Ag@f-CNTs) were used as electrode materials for seawater deionization and compared with pristine and acid-treated CNTs. The synthesized materials were characterized using TEM, EDS, XRD, XPS, and BET. The electrochemical performance was evaluated using cyclic voltammetry, and the deionization performance was tested on a single cell with water containing 64 mg/L NaCl. The results showed that the synthesized Ag@f-CNT-10 H outperformed the pristine and acid-treated CNTs, with a maximum ion removal efficiency of 50.22% and a corresponding adsorption capacity of 3.21 mg/g. It also showed antimicrobial activity against E. coli. However, the material lacks stability, as the efficiency decreases with repeated usage of the electrode.
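The two performance figures quoted above follow from standard capacitive deionization definitions, sketched below. Only the 64 mg/L feed comes from the abstract; the final concentration, cell volume, and electrode mass are hypothetical values for illustration.

```python
def removal_efficiency(c0, cf):
    """Ion removal efficiency (%) from initial and final concentrations (mg/L)."""
    return (c0 - cf) / c0 * 100.0

def adsorption_capacity(c0, cf, volume_l, electrode_mass_g):
    """Salt adsorption capacity: mg of salt removed per g of electrode material."""
    return (c0 - cf) * volume_l / electrode_mass_g

# Hypothetical run consistent with the abstract's 64 mg/L NaCl feed
c0 = 64.0                 # initial concentration, mg/L
cf = c0 * (1 - 0.5022)    # final concentration at 50.22% removal
print(round(removal_efficiency(c0, cf), 2))  # 50.22
```

With a measured final concentration, cell volume, and electrode mass, the same two functions reproduce both figures of merit reported for Ag@f-CNT electrodes.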

Keywords: capacitive deionization, carbon nanotubes, desalination, acid functionalization, silver

Procedia PDF Downloads 202
61 Implementation of Lean Manufacturing in Some Companies in Colombia: A Case Study

Authors: Natalia Marulanda, Henry González, Gonzalo León, Alejandro Hincapié

Abstract:

Continuous improvement tools are the result of a set of studies that developed theories and methodologies. These methodologies enable organizations to increase their levels of efficiency, effectiveness, and productivity. From them, the lean manufacturing philosophy was developed, based on the optimization of resources, waste disposal, and the generation of value in products and services. Lean has been adopted massively worldwide, but Colombian companies have implemented it only incipiently. Therefore, the purpose of this article is to identify the impacts generated by the implementation of lean manufacturing tools in five companies located in Colombia, in the Medellín metropolitan area. It also seeks to compare the results obtained from the implementation of the lean philosophy and the Theory of Constraints (TOC). The methodology is both qualitative and quantitative, based on case-study interviews with the leaders of the processes that used lean tools. The tools most used by the studied companies are 5S (100%) and TPM (80%); the least used is synchronous production (20%). The main reason for implementing lean was supply chain management (83.3%). Between the application of lean and TOC, no significant differences in impact were found in terms of methodology, areas of application, staff initiatives, supply chain management, planning, and training.

Keywords: business strategy, lean manufacturing, theory of constraints, supply chain

Procedia PDF Downloads 321
60 Synthesis of Modified Cellulose for the Capture of Uranyl Ions from Aqueous Solutions

Authors: Claudia Vergara, Oscar Valdes, Jaime Tapia, Leonardo Santos

Abstract:

Poly(amidoamine) (PAMAM) dendrimers are a class of materials introduced by D. Tomalia. Modifications of the PAMAM dendrimer with several functional groups have attracted attention because of new interesting properties and new applications in many fields, such as chemistry, physics, biology, and medicine. In the last few years, the use of dendrimers in environmental applications has also increased due to pollution concerns. In this contribution, we report the synthesis of three new PAMAM derivatives modified with the amino acid asparagine and supported on cellulose: PG0-Asn (PAMAM-asparagine), PG0-Asn-Trt (with a trityl group), and PG0-Asn-Boc-Trt (with a tert-butyloxycarbonyl group). The functionalization of the generation 0 PAMAM dendrimer was carried out by an amidation reaction using an EDC/HOBt protocol. In a second step, the functionalized dendrimer was covalently anchored to the cellulose surface and used to study the capture of uranyl ions from aqueous solution by fluorescence spectroscopy. The structure and purity of the desired products were confirmed by conventional techniques such as FT-IR, MALDI, elemental analysis, and ESI-MS. Batch experiments were carried out to determine the affinity of the dendrimer for uranyl ions in aqueous solution. The optimal conditions for uranyl capture were first obtained: the optimum pH for removal was 6, the contact time was 4 hours, the initial uranyl concentration was 100 ppm, and the amount of adsorbent was 2.5 mg. PAMAM significantly increased the capture of uranyl ions with respect to cellulose as the starting substrate, reaching 94.8% capture (PG0), followed by 91.2% for PG0-Asn-Trt, 70.3% for PG0-Asn, and 24.2% for PG0-Asn-Boc-Trt. These results show that the PAMAM dendrimer is a good option to remove uranyl ions from aqueous solutions.

Keywords: asparagine, cellulose, PAMAM dendrimer, uranyl ions

Procedia PDF Downloads 114
59 Mixed Effects Models for Short-Term Load Forecasting for the Spanish Regions: Castilla-Leon, Castilla-La Mancha and Andalucia

Authors: C. Senabre, S. Valero, M. Lopez, E. Velasco, M. Sanchez

Abstract:

This paper focuses on an application of linear mixed models to short-term load forecasting. The challenge of this research is to improve a currently working model at the Spanish Transmission System Operator, programmed by us and based on linear autoregressive techniques and neural networks. That system currently forecasts each of the regions within the Spanish grid separately, even though the behavior of the load in each region is affected by the same factors in a similar way. The load forecasting system developed in this work has been verified using real data from a utility. As a starting point, several regions were integrated into a single linear mixed model so that information can be shared across regions: the model first learns the general behavior present in all regions and then identifies the individual deviation of each region. The technique can be especially useful when modeling the effect of special days with scarce information from the past. The three most relevant regions of the system have been used to test the model, focusing on special days, and the model improved the performance of both currently working models used as benchmarks. A range of comparisons with different forecasting models has been conducted, and the forecasting results demonstrate the superiority of the proposed methodology.
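The pooling idea described above can be sketched with a linear mixed model: a fixed effect shared by all regions plus a random intercept per region. The data below are synthetic (not the authors' dataset), with a made-up temperature-load relationship standing in for the real drivers.

```python
# Minimal sketch of a linear mixed model for regional load (synthetic data):
# a shared fixed effect of temperature plus a random intercept per region.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
regions = ["Castilla-Leon", "Castilla-La Mancha", "Andalucia"]
rows = []
for r in regions:
    temp = rng.uniform(5, 35, 200)            # daily temperature, deg C
    offset = rng.normal(0, 50)                # region-specific deviation
    load = 1000 + 12 * temp + offset + rng.normal(0, 20, 200)
    rows.append(pd.DataFrame({"region": r, "temp": temp, "load": load}))
df = pd.concat(rows, ignore_index=True)

# Fit: fixed effect of temperature, random intercept grouped by region
model = smf.mixedlm("load ~ temp", df, groups=df["region"])
result = model.fit()
print(result.params["temp"])  # shared temperature slope, close to 12
```

The fitted fixed effect recovers the behavior common to all regions, while the per-region random intercepts capture each region's deviation, which is the two-stage learning the abstract describes.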

Keywords: short-term load forecasting, mixed effects models, neural networks

Procedia PDF Downloads 162
58 Towards the Enhancement of Thermoelectric Properties by Controlling the Thermoelectrical Nature of Grain Boundaries in Polycrystalline Materials

Authors: Angel Fabian Mijangos, Jaime Alvarez Quintana

Abstract:

Waste heat occurs in many areas of daily life because the world's energy consumption is inefficient. In general, generating 1 watt of power requires about 3 watts of energy input and involves dumping into the environment the equivalent of about 2 watts of power in the form of heat. Therefore, an attractive and sustainable contribution to the energy problem would be the development of highly efficient thermoelectric devices that could help to recover this waste heat. This work presents the influence on the thermoelectric properties of metallic, semiconducting, and dielectric nanoparticles added into the grain boundaries of polycrystalline antimony (Sb) and bismuth (Bi) matrices, in order to obtain p- and n-type thermoelectric materials, respectively, by hot pressing methods. Results show that the thermoelectric properties are significantly affected by the electrical and thermal nature of the nanoparticles as well as by their concentration. Moreover, by optimizing the amount of nanoparticles on the grain boundaries, an oscillatory behavior of ZT as a function of the concentration of the nanoscale constituents appears. This effect is due to an energy filtering mechanism, which modulates charge transport in the system and thereby the thermoelectric properties. Accordingly, a maximum ZT can be reached through the addition of the appropriate amount of nanoparticles into the grain boundary region; in this case, improvements in ZT of up to three orders of magnitude are reached in both systems compared with the respective reference samples. This approach paves the way to pursue high-performance thermoelectric materials in a simple way and opens a new route towards the enhancement of the thermoelectric figure of merit.
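The figure of merit discussed above is the standard dimensionless ZT = S²σT/κ, combining the Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and thermal conductivity κ. A minimal sketch, with illustrative material values that are not from the paper:

```python
def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_v_per_k**2 * sigma_s_per_m * temp_k / kappa_w_per_mk

# Illustrative Bi2Te3-like values at room temperature (not from the paper):
# S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 300 K
zt = figure_of_merit(200e-6, 1e5, 1.5, 300.0)
print(round(zt, 2))  # 0.8
```

The energy filtering described above acts on this expression by raising S (scattering low-energy carriers at the boundaries) while ideally limiting the penalty in σ, which is why ZT oscillates with nanoparticle concentration rather than growing monotonically.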

Keywords: energy filtering, grain boundaries, thermoelectric, nanostructured materials

Procedia PDF Downloads 227
57 Academic Influence of Social Network Sites on the Collegiate Performance of Technical College Students

Authors: Jameson McFarlane, Thorne J. McFarlane, Leon Bernard

Abstract:

Social network sites (SNS) are an emerging phenomenon that is here to stay. The popularity and ubiquity of SNS technology are undeniable, and because most SNS are free and easy to use, people from all walks of life and of almost any age are attracted to it. College-age students are by far the largest segment of the population using SNS. Since most SNS have been adapted for mobile devices, students not only use this technology while studying or working on labs or projects; a substantial number have been found to use SNS even while listening to lectures. This study found that SNS use has a significant negative impact on the grade point average of college students, particularly in the first semester. However, this negative impact is greatly diminished by the end of the third semester, partly because the students have adjusted satisfactorily to the challenges of college or because they have learned how to manage their time adequately. It was established that the kinds of activities students engage in during SNS use are the leading factor affecting academic performance. Of those activities, using SNS during a lecture or while studying is the foremost contributor to lower academic performance. This is due to a “cognitive” or “information” bottleneck, a condition in which students find it very difficult to multitask or to switch between resources, leading to inefficiency in information retention and, thus, in educational performance.

Keywords: social network sites, social network analysis, regression coefficient, psychological engagement

Procedia PDF Downloads 160
56 Characterization of Shiga Toxin Escherichia coli Recovered from a Beef Processing Facility within Southern Ontario and Comparative Performance of Molecular Diagnostic Platforms

Authors: Jessica C. Bannon, Cleso M. Jordao Jr., Mohammad Melebari, Carlos Leon-Velarde, Roger Johnson, Keith Warriner

Abstract:

There has been an increased incidence of non-O157 Shiga toxin Escherichia coli (STEC), with six serotypes (the Top 6) implicated in causing haemolytic uremic syndrome (HUS). Beef has been suggested to be a significant vehicle for non-O157 STEC, although conclusive evidence has yet to be obtained. The following study aimed to determine the prevalence of the Top 6 non-O157 STEC in beef processing using three different diagnostic platforms and then characterize the recovered isolates. Hide, carcass, and environmental swab samples (n = 60) were collected from a beef processing facility over a 12-month period. Enriched samples were screened using the BioControl GDS, BAX, or PALLgene molecular diagnostic tests. Presumptive non-O157 STEC-positive samples were confirmed using conventional PCR and serology. STEC was detected by GDS (55% positive), BAX (85% positive), and PALLgene (93%). However, during confirmation testing, only 8 of the 60 samples (13%) were found to harbour STEC. Interestingly, the presence of virulence factors in the recovered isolates was unstable; they were readily lost during subsequent sub-culturing. There is a low prevalence of Top 6 non-O157 STEC associated with beef, although other serotypes are encountered. Moreover, the instability of the virulence factors in recovered strains calls their clinical relevance into question.

Keywords: beef, food microbiology, shiga toxin, STEC

Procedia PDF Downloads 438
55 Applying the Extreme-Based Teaching Model in Post-Secondary Online Classroom Setting: A Field Experiment

Authors: Leon Pan

Abstract:

The first programming course within post-secondary education has long been recognized as a challenging endeavor for both educators and students alike. Historically, these courses have exhibited high failure rates and a notable number of dropouts. Instructors often lament students' lack of effort in their coursework, and students often express frustration that the teaching methods employed are not effective. Drawing inspiration from the successful principles of Extreme Programming, this study introduces an approach, the Extreme-based teaching model, aimed at enhancing the teaching of introductory programming courses. To empirically determine the effectiveness of the model, a comparison was made between a section taught using the Extreme-based model and another utilizing traditional teaching methods. Notably, the Extreme-based teaching class required students to work collaboratively on projects while also demanding continuous assessment and performance enhancement within groups. This paper details the application of the Extreme-based model within the post-secondary online classroom context and presents results that emphasize its effectiveness in improving the teaching and learning experience: the model led to a significant increase of 13.46 points in the weighted total average and a 10% reduction in the failure rate.

Keywords: extreme-based teaching model, innovative pedagogical methods, project-based learning, team-based learning

Procedia PDF Downloads 33
54 Autonomous Kuka Youbot Navigation Based on Machine Learning and Path Planning

Authors: Carlos Gordon, Patricio Encalada, Henry Lema, Diego Leon, Dennis Chicaiza

Abstract:

The following work presents a proposal for autonomous navigation of mobile robots, implemented on an omnidirectional Kuka Youbot. We have integrated the Robot Operating System (ROS) with machine learning algorithms, using two ROS distributions: ROS Hydro and ROS Kinetic. ROS Hydro manages the nodes for odometry, kinematics, and path planning, with statistical and probabilistic, global and local algorithms based on Adaptive Monte Carlo Localization (AMCL) and Dijkstra's algorithm. Meanwhile, ROS Kinetic is responsible for the detection block for dynamic objects that may lie on points of the planned trajectory, obstructing the path of the Kuka Youbot. Detection is managed by an artificial vision module with a trained neural network based on the Single Shot MultiBox Detector (SSD), where the main dynamic objects targeted for detection are human beings and domestic animals, among other objects. When objects are detected, the system modifies the trajectory or waits for the dynamic obstacle to clear. Finally, the obstacles are avoided in the planned trajectory, and the Kuka Youbot can reach its goal thanks to the machine learning algorithms.
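The global planning step named above can be illustrated with a minimal stand-in: Dijkstra's algorithm on a 4-connected occupancy grid (1 = obstacle). This is an illustrative sketch, not the ROS navigation stack itself, and the tiny grid is invented for the example.

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path on a 4-connected grid; cells with value 1 are obstacles."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the goal to recover the path
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

In the system described above, the SSD detections would mark additional cells as occupied, after which replanning with the same algorithm yields the modified trajectory around the dynamic obstacle.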

Keywords: autonomous navigation, machine learning, path planning, robotic operative system, open source computer vision library

Procedia PDF Downloads 150
53 Outcomes in New-Onset Diabetic Foot Ulcers Stratified by Etiology

Authors: Pedro Gomes, Lia Ferreira, Sofia Garcia, Jaime Babulal, Luís Costa, Luís Castelo, José Muras, Isabel Gonçalves, Rui Carvalho

Abstract:

Introduction: Foot ulcers and their complications are an important cause of morbidity and mortality in diabetes. Objectives: The present study aims to evaluate outcomes, in terms of need for hospitalization, amputation, healing time, and mortality, in patients with new-onset diabetic foot ulcers, in subgroups stratified by etiology. Methods: A retrospective study based on the clinical assessment of patients presenting with new ulcers to a multidisciplinary diabetic foot consult during 2012. Outcomes were determined until September 2014 from hospital registers. Baseline clinical examination was done to classify ulcers as neuropathic, ischemic, or neuroischemic. Results: 487 patients with new diabetic foot ulcers were observed; 36%, 15%, and 49% of patients had neuropathic, ischemic, and neuroischemic ulcers, respectively. For analysis, patients were classified as having a predominantly neuropathic (36%) or ischemic foot (64%). The mean age was significantly higher in the group with ischemic foot (70±12 vs 63±12 years; p < 0.001), as was the duration of diabetes (18±10 vs 16±10 years; p < 0.05). A history of previous amputation was also significantly more frequent in this group (24.7% vs 15.6%; p < 0.05). The evolution of ischemic ulcers was significantly worse, with a greater need for hospitalization (27.2% vs 18%; p < 0.05) and amputation (11.5% vs 3.6%; p < 0.05), mainly major amputation (3% vs 0%; p < 0.001), and a longer mean healing time (151 vs 89 days; p < 0.05). The mortality rate at 18 months was also significantly higher in the ischemic foot group (7.3% vs 1.8%; p < 0.05). Conclusions: All types of diabetic foot ulcers are associated with high morbidity and mortality; however, the presence of arterial disease confers a poor prognosis. The diabetic foot can be successfully treated only by a multidisciplinary team, which can provide more comprehensive and integrated care.

Keywords: diabetes, foot ulcers, etiology, outcome

Procedia PDF Downloads 404
52 Electrospun Fibers Made from Biopolymers (Cellulose Acetate/Chitosan) for Metals Recovery

Authors: Mauricio Gómez, Esmeralda López, Ian Becar, Jaime Pizarro, Paula A. Zapata

Abstract:

A biodegradable material with adsorption capacity for metal ions is developed for intended use in mining tailings, mitigating their environmental impact with an economic return. Two types of fibers were produced by electrospinning: (1) a cellulose acetate (CA) matrix and (2) a cellulose acetate/chitosan (CA/CH) matrix, evaluating the effect of CH in CA on its physicochemical properties. Through diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS), the incorporation of chitosan into the matrix was identified, observing the band of the amino group at 1500-1600 cm-1. By scanning electron microscopy (SEM), Hg porosimetry, and the CO2 isotherm at 273 K, intrafiber microporosity and interfiber macroporosity were identified, with an increase in the distribution of macropores for the CA/CH fibers. In the tensile test, CH in the matrix produced more ductile and tenacious behavior, with the elongation at break increasing by 33% with the other parameters constant. Thermal analysis by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) showed that the incorporation of chitosan produces higher retention of water molecules due to its amino groups (-NH2), but there is a decrease in the specific heat and in the thermoplastic properties of the matrix, since the glass transition temperature and softening temperature disappear. The optimum pH for the CA and CA/CH fibers was studied in a batch system. In the adsorption study, the isotherm model that best fit the experimental results was the Sips model, and the kinetics followed a pseudo-second-order model.
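Fitting the Sips isotherm named above, q = qmax·(K·C)ⁿ / (1 + (K·C)ⁿ), can be sketched as below. The data are synthetic (not the authors' measurements) and the "true" parameters are assumed for illustration only.

```python
# Minimal sketch of fitting a Sips isotherm to synthetic adsorption data
import numpy as np
from scipy.optimize import curve_fit

def sips(c, qmax, k, n):
    """Sips isotherm: adsorbed amount q as a function of concentration c."""
    return qmax * (k * c)**n / (1 + (k * c)**n)

c = np.linspace(1, 200, 30)                # equilibrium concentration, mg/L
q_true = sips(c, 80.0, 0.05, 1.2)          # assumed "true" parameters
q_obs = q_true + np.random.default_rng(1).normal(0, 0.5, c.size)

popt, _ = curve_fit(sips, c, q_obs, p0=[50.0, 0.01, 1.0])
print(popt)  # recovered (qmax, K, n), close to the assumed (80, 0.05, 1.2)
```

The same pattern, with a different model function, fits the pseudo-second-order kinetic expression to uptake-versus-time data.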

Keywords: environmental materials, wastewater treatment, electrospun fibers, biopolymers (cellulose acetate/chitosan), metals recovery

Procedia PDF Downloads 52
51 Valorization of the Waste Generated in Building Energy-Efficiency Rehabilitation Works as Raw Materials for Gypsum Composites

Authors: Paola Villoria Saez, Mercedes Del Rio Merino, Jaime Santacruz Astorqui, Cesar Porras Amores

Abstract:

In construction, the circular economy covers the whole cycle of building: from production and consumption to waste management and the market for secondary raw materials. The circular economy will contribute to 'closing the loop' of construction product lifecycles through greater recycling and re-use, helping to build a market for reused construction materials salvaged from demolition sites, boosting global competitiveness, and fostering sustainable economic growth. In this context, this paper presents the latest research of the 'Waste to Resources (W2R)' project funded by the Spanish Government, which seeks new solutions to improve energy efficiency in buildings by developing new building materials and products that are less expensive, more durable, of higher quality, and more environmentally friendly. This project differs from others in that its main objective is to reduce to almost zero the Construction and Demolition Waste (CDW) generated in building rehabilitation works. To achieve this objective, the group is exploring new ways of recycling CDW as raw material for new conglomerate materials; with these new materials, construction elements that reduce building energy consumption will be proposed. In this paper, the results obtained in the project are presented: several tests were performed on gypsum samples containing different percentages of the CDW generated in Spanish building retrofitting works, the results were analyzed, and one of the gypsum composites is highlighted and discussed. Acknowledgements: This research was supported by the Spanish State Secretariat for Research, Development and Innovation of the Ministry of Economy and Competitiveness under the 'Waste 2 Resources' project (BIA2013-43061-R).

Keywords: building waste, CDW, gypsum, recycling, resources

Procedia PDF Downloads 304
50 Use of Geosynthetics as Reinforcement Elements in Unpaved Tertiary Roads

Authors: Vivian A. Galindo, Maria C. Galvis, Jaime R. Obando, Alvaro Guarin

Abstract:

In Colombia, most of the national tertiary road network consists of unpaved roads with a granular rolling surface. These roads are very important for guaranteeing the mobility of people, products, and inputs of the agricultural sector from the most remote areas to urban centers; however, little attention has been paid to alternatives for avoiding the deterioration that occurs shortly after they enter service. In recent years, geosynthetics have been used satisfactorily to reinforce unpaved roads on soft soils, with geotextiles and geogrids being the most widely used. The interaction between the geogrid and the aggregate minimizes the lateral movement of the aggregate particles and increases the load capacity of the material, which leads to a better distribution of vertical stresses and consequently reduces vertical deformations in the subgrade. Accordingly, this research examined the mechanical behavior of the granular material used in unpaved roads, with and without the presence of geogrids, through laboratory tests with the loaded wheel tester (LWT). For comparison purposes, the reinforcement and traffic conditions to which this type of material can be subjected in practice were simulated. In total, four types of geogrids were tested with the granular material, giving five test sets: the four reinforced assemblies and the non-reinforced control sample. The number of load cycles and the rut depth supported by each test body showed the influence of the reinforcement properties on the mechanical behavior of the assembly, with significant increases in the number of load cycles for the reinforced specimens relative to those without reinforcement.

Keywords: geosynthetics, load wheel tester LWT, tertiary roads, unpaved road, vertical deformation

Procedia PDF Downloads 216
49 Effects of Surface Roughness on a Unimorph Piezoelectric Micro-Electro-Mechanical Systems Vibrational Energy Harvester Using Finite Element Method Modeling

Authors: Jean Marriz M. Manzano, Marc D. Rosales, Magdaleno R. Vasquez Jr., Maria Theresa G. De Leon

Abstract:

This paper discusses the effects of surface roughness on a cantilever beam vibrational energy harvester. A silicon sample was fabricated using MEMS fabrication processes. When etching silicon using deep reactive ion etching (DRIE) at large etch depths, rougher surfaces are observed as a result of an increased response in process pressure, coil power, and helium backside cooling readings. To account for the effects of surface roughness on the characteristics of the cantilever beam, finite element method (FEM) modeling was performed using actual roughness data from the fabricated samples. It was found that when etching about 550 um of silicon, the root mean square roughness parameter, Sq, varies by 1 to 3 um (at 100 um thickness) across a 6-inch wafer. Given this Sq variation, FEM simulations predict an 8 to 148 Hz shift in the resonant frequency while having no significant effect on the output power. The significant shift in the resonant frequency implies that surface roughness from fabrication processes must be carefully considered when designing energy harvesters.
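Why a few micrometers of roughness-induced thinning shifts the resonance can be seen from the Euler-Bernoulli estimate of a rectangular cantilever's first mode, where frequency is proportional to thickness. The beam dimensions and the amount of thinning below are assumed for illustration; the paper's harvesters (and their 8-148 Hz shifts) come from full FEM models, not this closed form.

```python
import math

def cantilever_f1(length_m, thickness_m, youngs_pa, density_kg_m3):
    """First resonant frequency (Hz) of a rectangular Euler-Bernoulli cantilever.
    f1 = (1.875^2 / 2*pi) * sqrt(E*I / (rho*A*L^4)), which for a rectangular
    cross-section reduces to (1.875^2 / 2*pi) * (t / L^2) * sqrt(E / (12*rho)).
    """
    return (1.875**2 / (2 * math.pi)) * (thickness_m / length_m**2) \
        * math.sqrt(youngs_pa / (12 * density_kg_m3))

# Hypothetical silicon beam: 3 mm long, nominally 100 um thick
E, rho = 169e9, 2329.0                        # silicon modulus (Pa), density (kg/m^3)
f_nominal = cantilever_f1(3e-3, 100e-6, E, rho)
f_rough = cantilever_f1(3e-3, 97e-6, E, rho)  # 3 um lost to a rough etch
print(round(f_nominal - f_rough, 1))          # frequency shift, Hz
```

Because f1 scales linearly with thickness, a 3 um loss on a 100 um beam shifts the resonance by 3% regardless of the other dimensions, which is why thickness control across the wafer matters.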

Keywords: deep reactive ion etching, finite element method, microelectromechanical systems, multiphysics analysis, surface roughness, vibrational energy harvester

Procedia PDF Downloads 101
48 Experimental Investigations on the Mechanical Properties of Spiny (Kawayan Tinik) Bamboo Layers

Authors: Ma. Doreen E. Candelaria, Ma. Louise Margaret A. Ramos, Jaime Y. Hernandez Jr.

Abstract:

Bamboo has been introduced as a possible alternative to some construction materials. Its potential use in the field of engineering, however, is still not widely practiced due to insufficient engineering knowledge of the material's properties and characteristics. Although there are studies proving its advantages, they are still not enough to say that bamboo can sustain and provide the strength and capacity required of common structures. In line with this, a more detailed analysis was made to observe the layered structure of the bamboo, particularly the species Kawayan Tinik. The main intent of this research is to provide the necessary experiments to determine the tensile strength of dried bamboo samples. The tests cover tensile strength parallel to the fibers, with samples taken at internodes only. Throughout the experiments, the methods suggested by the International Organization for Standardization (ISO) were followed. The specimens were tested using a 3366 Instron universal testing machine, with the rate of loading set to 0.6 mm/min. The dried bamboo samples recorded layered tensile strengths as high as 600 MPa. Along the culm's length and across its cross section, higher tensile strengths were observed at the top part and in the outer layers. Overall, the top part recorded the highest tensile strength per layer, with its outer layers reaching 600 MPa, while the recorded tensile strengths of its middle and inner layers were approximately 450 MPa and 180 MPa, respectively. From this variation across the cross section, it may be concluded that tensile strength increases towards the outer periphery of the bamboo.
With these preliminary investigations on the layered tensile strength of bamboo, it is highly recommended to conduct experimental investigations on the layered compressive strength properties as well, and to evaluate the perpendicular layered tensile strength of the material.
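The layered strengths reported above come down to the usual coupon calculation: ultimate tensile strength is the peak load from the testing machine divided by the specimen's cross-sectional area. The coupon dimensions and load below are hypothetical, chosen only to be consistent with the ~600 MPa outer-layer figure.

```python
def tensile_strength_mpa(max_load_n, width_mm, thickness_mm):
    """Ultimate tensile strength (MPa) of a flat coupon: peak load / cross-section.
    1 MPa = 1 N/mm^2, so newtons and millimetres give MPa directly."""
    return max_load_n / (width_mm * thickness_mm)

# Hypothetical outer-layer coupon: 3 kN peak load, 10 mm x 0.5 mm section
print(tensile_strength_mpa(3000.0, 10.0, 0.5))  # 600.0
```

Running the same calculation per layer (outer, middle, inner coupons cut from the same culm) reproduces the kind of through-thickness strength gradient the abstract reports.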

Keywords: bamboo strength, layered strength tests, strength test, tensile test

Procedia PDF Downloads 384
47 The Future of Hospitals: A Systematic Review in the Field of Architectural Design with a Disruptive Research and Development Approach

Authors: María Araya Léon, Ainoa Abella, Aura Murillo, Ricardo Guasch, Laura Clèries

Abstract:

Objectives: This article examines scientific theory framed within the term 'hospitals of the future' from a multidisciplinary and cross-sectional perspective, seeking to understand the connection that the various cross-sectional areas studied have with architectural spaces, and to determine the future outlook of the works examined and how they can be classified into the categories need/solution, evolution/revolution, collective/individual, and preventive/corrective. Background: The changes currently taking place within the context of healthcare demonstrate how important these projects are and the need for companies to face future changes. Method: A systematic review was carried out, focused on what the hospitals of the future will be like in relation to the elements that form part of their use, design, and architectural space experience, using the WOS database from 2016 to 2019. Results: The large number of works about sensing & big data and the scarce amount related to the area of materials are worth highlighting; furthermore, no growth in future-oriented work is envisaged over time. Regarding the classifications, the articles reviewed address evolutionary and collective solutions more, while preventive and corrective solutions were found at a similar level. Conclusions: Although our research focused on the future of hospitals, there is little evidence representing this approach. We also detected that, given the relevance of research on how the built environment influences human health and well-being, such studies should be promoted within the context of healthcare.

Keywords: hospitals, future, architectural space, disruptive approach

Procedia PDF Downloads 59
46 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy: complex reactions occur simultaneously, there is a large number of kinetic parameters involved, and sometimes the chemical and physical phenomena of mixtures involving polymers are poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws, and are therefore useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number average molecular weight, and gravimetric average molecular weight; this process is associated with nonlinear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling, and their performance is compared: support vector machines, k-nearest neighbor, and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
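The combination of adaptive sampling with a comparison of off-the-shelf regressors can be sketched as below. The one-dimensional target and the midpoint-insertion sampler are simplifications invented for the example (the paper's sampler and data are not reproduced here), and large margin nearest neighbor regression is omitted since it is the authors' original algorithm.

```python
# Minimal sketch: variance-guided adaptive sampling plus a comparison of
# three regressors named in the abstract, on a synthetic 1-D target with a
# sharp rise standing in for the gel effect.
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def f(x):
    """Stand-in response curve with a steep transition around x = 0.5."""
    return 1 / (1 + np.exp(-10 * (x - 0.5)))

def adaptive_sample(n_total, n_init=10):
    """Start uniform, then repeatedly insert a midpoint inside the interval
    whose endpoints differ most in response (highest local variation)."""
    x = list(np.linspace(0, 1, n_init))
    while len(x) < n_total:
        x.sort()
        gaps = [abs(f(b) - f(a)) for a, b in zip(x, x[1:])]
        i = int(np.argmax(gaps))
        x.append((x[i] + x[i + 1]) / 2)
    return np.array(sorted(x))

x_train = adaptive_sample(40).reshape(-1, 1)
y_train = f(x_train).ravel()
x_test = np.linspace(0, 1, 200).reshape(-1, 1)

for name, model in [("SVR", SVR()),
                    ("kNN", KNeighborsRegressor(n_neighbors=3)),
                    ("RF", RandomForestRegressor(random_state=0))]:
    model.fit(x_train, y_train)
    err = mean_absolute_error(f(x_test).ravel(), model.predict(x_test))
    print(f"{name}: MAE = {err:.4f}")
```

The sampler concentrates training points in the steep region, which is exactly where a uniformly sampled model would otherwise have its largest errors.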

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 271
45 The Quasar 3C 47: Extreme Population B Jetted Source with Double-Peaked Profile

Authors: Shimeles Terefe Mengistue, Paola Marziani, Ascensión del Olmo, Jaime Perea, Mirjana Pović

Abstract:

The theory that rotating accretion disks are responsible for the broad emission-line profiles in quasars is frequently put forth; however, the presence of an accretion disk (AD) in active galactic nuclei (AGN) has had only limited and indirect observational support. In order to evaluate the extent to which the AD is a source of the broad Balmer lines and high-ionization UV lines in radio-loud (RL) AGN, we focused on an extremely jetted RL quasar, 3C 47, that clearly shows a double-peaked profile. This work presents its optical spectra and UV observations from the HST/FOS covering the rest-frame spectral range from 2000 to 7000 Å. The fits of the low-ionization lines Hbeta, Halpha and Mg II λ2800 show profiles that are in very good agreement with a relativistic Keplerian AD model. The profiles of the prototypical high-ionization lines can also be modeled by the contribution of the AD, with additional components due to outflows and emission from the innermost part of the narrow-line regions (NLRs). A good fit of the resulting double-peaked profiles was found, and the main parameters of the disk were determined using the Hbeta, Halpha and Mg II λ2800 lines: the inner and outer radii (both in units of gravitational radii), the inclination to the line of sight, the emissivity index and the local broadening parameter. In addition, the accretion parameters, the supermassive black hole mass and the Eddington ratio, are also determined. This work indicates that the line profile of 3C 47 provides some of the most convincing direct evidence for the presence of a rotating AD in AGN, with the broad, double-peaked profiles originating from the AD that surrounds the supermassive black hole.
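As a toy illustration of why disk emission yields a double-peaked line (a stand-in only; the study fits a relativistic Keplerian disk model, not the two-Gaussian sketch below, and all numbers here are made up), the approaching and receding sides of a rotating disk contribute blueshifted and redshifted components whose sum shows two maxima:

```python
import math

def gauss(x, mu, sigma):
    """Unnormalized Gaussian component."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Velocity grid in km/s; blue wing (approaching side) and slightly weaker
# red wing (receding side) as two Doppler-shifted components
v = [i * 100.0 - 10000.0 for i in range(201)]
profile = [gauss(x, -4000.0, 1500.0) + 0.8 * gauss(x, 4500.0, 1500.0) for x in v]

# A double-peaked profile has exactly two interior local maxima
peaks = sum(1 for i in range(1, len(v) - 1)
            if profile[i] > profile[i - 1] and profile[i] > profile[i + 1])
```

In the physical model, the peak separation and the blue/red asymmetry encode the disk inclination and radii that the abstract lists among the fitted parameters.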

Keywords: active galactic nuclei, quasars, emission lines, double-peaked, supermassive black hole

Procedia PDF Downloads 47
44 Second Language Development with an Intercultural Approach: A Pilot Program Applied to Higher Education Students from an Escuela Normal in Atequiza, Mexico

Authors: Frida C. Jaime Franco, C. Paulina Navarro Núñez, R. Jacob Sánchez Nájera

Abstract:

The importance of developing multi-language abilities in our global society is noteworthy. However, the necessity, interest, and awareness of the significance of developing another language, apart from the mother tongue, are not the same in all contexts as they are in multicultural communities, especially in rural higher education institutions immersed in small communities. Creating opportunities for digital interaction between learners from Mexico and partners abroad provides scaffolding not only toward language skills development but also toward intercultural communicative competence (ICC). This study leads us to consider the best approach for applying a program of ICC integrated into the practice of EFL. Analyzing the roots of the language makes it possible to pursue the main objective of learning another language, communicating with a functional purpose, as well as attaching social practices to the learning process, giving functionality and significance to the target language. Hence, the collateral impact of collaborative learning aims to contribute to a better global understanding as well as to awareness of one's own and other cultures through intercultural communication. When students communicate in the target language through online collaboration on long-distance communication platforms, language is used as a tool of interaction to broaden students' perspectives, achieving substantial improvement with the help of their differences. This process should consider the application of the target language in the inquiry of sociocultural information, expecting the learners to integrate communicative skills to handle cultural differentiation while they apply the knowledge of their target language in a real scenario of communication, albeit through virtual resources.

Keywords: collaborative learning, communicative approach, culture, interaction, interculturalism, target language, virtual partnership

Procedia PDF Downloads 104
43 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. Better measurements would be possible by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-conditions data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels to accept or reject the incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system which integrates into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 276
42 An Overview of Technology Availability to Support Remote Decentralized Clinical Trials

Authors: Simone Huber, Bianca Schnalzer, Baptiste Alcalde, Sten Hanke, Lampros Mpaltadoros, Thanos G. Stavropoulos, Spiros Nikolopoulos, Ioannis Kompatsiaris, Lina Pérez-Breva, Vallivana Rodrigo-Casares, Jaime Fons-Martínez, Jeroen de Bruin

Abstract:

Developing new medicines and health solutions and improving patient health currently rely on the successful execution of clinical trials, which generate relevant safety and efficacy data. For their success, recruitment and retention of participants are among the most challenging aspects of protocol adherence. The main barriers include: i) lack of awareness of clinical trials; ii) long distance from the clinical site; iii) the burden on participants, including the duration and number of clinical visits; and iv) high dropout rates. Most of these aspects could be addressed with a new paradigm, namely Remote Decentralized Clinical Trials (RDCTs). Furthermore, the COVID-19 pandemic has highlighted additional advantages and challenges for RDCTs in practice, such as allowing participants to join trials from home without depending on site visits. Nevertheless, RDCTs should follow the process and quality assurance of conventional clinical trials, which involve several processes. For each part of the trial, the Building Blocks, existing software, and technologies were assessed through a systematic search. The technology needed to perform RDCTs is widely available and validated but is still segmented and developed in silos, as different software solutions address different parts of the trial at various levels. The current paper analyzes the availability of technology to perform RDCTs, identifying gaps and providing an overview of the Basic Building Blocks and functionalities that need to be covered to support the described processes.

Keywords: architectures and frameworks for health informatics systems, clinical trials, information and communications technology, remote decentralized clinical trials, technology availability

Procedia PDF Downloads 181
41 Development of a Matlab® Program for the Bi-Dimensional Truss Analysis Using the Stiffness Matrix Method

Authors: Angel G. De Leon Hernandez

Abstract:

A structure is defined as a physical system or, in certain cases, an arrangement of connected elements capable of bearing certain loads. Structures are present in every part of daily life, e.g., in the design of buildings, vehicles and mechanisms. The main goal of a structure designer is to develop a secure, aesthetic and maintainable system, considering the constraints imposed in every case. With the advances in technology during the last decades, the capability of solving engineering problems has increased enormously. Nowadays, computers play a critical role in structural analysis; unfortunately, for university students most of this software is inaccessible due to the high complexity and cost it represents, even when the software manufacturers offer student versions. This is exactly why the idea of developing a more accessible and easy-to-use computing tool arose. This program is designed as a tool for university students enrolled in courses related to structural analysis and design, as a complementary instrument to achieve a better understanding of this area and to avoid tedious calculations. The program can also be useful for graduated engineers in the field of structural design and analysis. A graphical user interface is included to make the program even simpler to operate and to clarify the information requested and the results obtained. The present document includes the theoretical basics on which the program relies to solve the structural analysis, the logical path followed to develop the program, the theoretical results, a discussion of those results and their validation.
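The stiffness matrix method the program implements can be sketched in a few lines (a Python illustration of the general method, not the Matlab® program itself; the geometry, section properties and load below are made up): assemble each bar's 4x4 stiffness matrix in global coordinates, sum into the global matrix, reduce by the fixed degrees of freedom, and solve for the free displacements.

```python
import math

def bar_stiffness(E, A, x1, y1, x2, y2):
    """4x4 global-coordinate stiffness matrix of a 2D truss (bar) element."""
    L = math.hypot(x2 - x1, y2 - y1)
    c, s = (x2 - x1) / L, (y2 - y1) / L
    k = E * A / L
    cc, ss, cs = c * c, s * s, c * s
    m = [[cc, cs, -cc, -cs],
         [cs, ss, -cs, -ss],
         [-cc, -cs, cc, cs],
         [-cs, -ss, cs, ss]]
    return [[k * v for v in row] for row in m]

# Two-bar truss: nodes 0 (0,0) and 1 (2,0) pinned, node 2 (1,1) loaded downward
nodes = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
elements = [(0, 2), (1, 2)]
E, A = 200e9, 1e-4            # steel, 1 cm^2 section (illustrative values)

ndof = 2 * len(nodes)
K = [[0.0] * ndof for _ in range(ndof)]
for (i, j) in elements:       # assemble the global stiffness matrix
    ke = bar_stiffness(E, A, *nodes[i], *nodes[j])
    dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
    for a in range(4):
        for b in range(4):
            K[dofs[a]][dofs[b]] += ke[a][b]

# Boundary conditions: only node 2 (dofs 4, 5) is free; load Fy = -1000 N
free = [4, 5]
Kr = [[K[r][c] for c in free] for r in free]
F = [0.0, -1000.0]
# Solve the reduced 2x2 system by Cramer's rule
det = Kr[0][0] * Kr[1][1] - Kr[0][1] * Kr[1][0]
ux = (F[0] * Kr[1][1] - F[1] * Kr[0][1]) / det
uy = (F[1] * Kr[0][0] - F[0] * Kr[1][0]) / det
```

By symmetry the loaded node moves straight down (ux = 0), which is a quick sanity check of the assembly.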

Keywords: stiffness matrix method, structural analysis, Matlab® applications, programming

Procedia PDF Downloads 97
40 Scoping Review of Biological Age Measurement Composed of Biomarkers

Authors: Diego Alejandro Espíndola-Fernández, Ana María Posada-Cano, Dagnóvar Aristizábal-Ocampo, Jaime Alberto Gallo-Villegas

Abstract:

Background: With the increase in life expectancy, aging has been the subject of frequent research, and multiple strategies have been proposed to quantify the advance of the years based on the known physiology of human senescence. For several decades, attempts have been made to characterize these changes through the concept of biological age, which aims to integrate, in a measure of time, structural or functional variation through biomarkers, in comparison with simple chronological age. The objective of this scoping review is to deepen the updated concept of biological age measurement composed of biomarkers in the general population and to summarize recent evidence in order to identify gaps and priorities for future research. Methods: A scoping review was conducted according to the five-phase methodology developed by Arksey and O'Malley through a search of five bibliographic databases up to February 2021. Original articles were included, with no time or language limit, that described biological age composed of at least two biomarkers in people over 18 years of age. Results: 674 articles were identified, of which 105 were evaluated for eligibility and 65 were included with information on the measurement of biological age composed of biomarkers. Articles from 1974 onward and of 15 nationalities were found, mostly observational studies, in which clinical or paraclinical biomarkers were used, and 11 different methods for the calculation of composite biological age were reported. The outcomes reported were the relationship with the same measured biomarkers, specified risk factors, comorbidities, physical or cognitive functionality, and mortality. Conclusions: The concept of biological age composed of biomarkers has evolved since the 1970s, and multiple methods for its quantification have been described through the combination of different clinical and paraclinical variables from observational studies. Future research should consider the population characteristics and the choice of biomarkers against the proposed outcomes to improve the understanding of aging variables and to direct effective strategies for a proper approach.

Keywords: biological age, biological aging, aging, senescence, biomarker

Procedia PDF Downloads 161
39 Integrating Best Practices for Construction Waste in Quality Management Systems

Authors: Paola Villoria Sáez, Mercedes Del Río Merino, Jaime Santa Cruz Astorqui, Antonio Rodríguez Sánchez

Abstract:

The Spanish construction industry generates large volumes of waste. However, despite the legislative improvements introduced for construction and demolition waste (CDW), the construction waste recycling rate remains well below that of other European countries and also below the target set for 2020. This situation can be due to many difficulties, e.g., the difficulty of on-site segregation or of estimating in advance the total amount generated. Despite these difficulties, the proper management of CDW must be one of the main aspects considered by construction companies. In this sense, some large national companies are implementing Integrated Management Systems (IMS) covering not only quality and safety aspects but also environmental issues. However, although this is a reality for large construction companies, the vast majority of companies still need to adopt this trend. In short, it is common to find in small and medium enterprises a decentralized management system: a single system for quality management, another for safety management, and a third for environmental management (EMS). In addition, the EMSs currently used address CDW superficially and mainly focus on other environmental concerns such as carbon emissions. Therefore, this research determines and implements a specific best-practice management system for CDW, based on eight procedures, in a Spanish construction company. The main advantages and drawbacks of its implementation are highlighted. The results of this study show that establishing and implementing a CDW management system in building works improves CDW quantification, as the company obtains its own CDW generation ratio. This helps construction stakeholders when developing CDW Management Plans and also helps to achieve a closer adjustment of CDW management costs. Finally, integrating this CDW system with the EMS of the company favors the cohesion of the construction process organization at all stages, establishing responsibilities in the field of waste and providing greater control over the process.

Keywords: construction and demolition waste, waste management, best practices, waste minimization, building, quality management systems

Procedia PDF Downloads 506
38 Effects of pH, Load Capacity and Contact Time in the Sulphate Sorption onto a Functionalized Mesoporous Structure

Authors: Jaime Pizarro, Ximena Castillo

Abstract:

The intensive use of water in agriculture, industry and human consumption, together with increasing pollution, reduces the availability of water for future generations; the challenge is to advance sustainable and low-cost solutions to reuse water and to improve the availability of the resource in quality and quantity. The use of new low-cost materials with sorbent capacity for pollutants is a solution that contributes to the improvement and expansion of water treatment and reuse systems. Fly ash, a residue from the combustion of coal in power plants that is produced in large quantities in newly industrialized countries, contains a high amount of silicon and aluminum oxides, whose properties can be used for the synthesis of mesoporous materials. Properly functionalized, this material yields matrices with high sorption capacity. Mesoporous materials have a large surface area, thermal and mechanical stability, a uniform porous structure, and high sorption and functionalization capacities. The goal of this study was to develop a hexagonal mesoporous siliceous material (HMS) for the adsorption of sulphate from industrial and mining waters. The silica was extracted from fly ash after calcination at 850 °C, followed by the addition of water. The mesoporous structure has a surface area of 282 m2 g-1 and a pore size of 5.7 nm, and was functionalized with ethylenediamine through a self-assembly method. The material was characterized by Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS). The sulphate sorption capacity was evaluated as a function of pH, maximum load capacity and contact time. The maximum sulphate adsorption capacity was 146.1 mg g-1, which is three times higher than that of commercial sorbents. The kinetic data were fitted by a pseudo-second-order model with a high linear regression coefficient at different initial concentrations. The adsorption isotherm that best fitted the experimental data was the Freundlich model.
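The pseudo-second-order fit mentioned above is usually done on the linearized form t/qt = 1/(k·qe²) + t/qe, so a plot of t/qt against t is a straight line giving qe from the slope and k from the intercept. A minimal sketch follows (with synthetic data, not the study's measurements; only qe = 146.1 mg/g is taken from the abstract, and the rate constant k is an assumed illustrative value):

```python
# Pseudo-second-order model: qt = qe^2 * k * t / (1 + qe * k * t)
# Linearized: t/qt = 1/(k*qe^2) + t/qe  ->  slope 1/qe, intercept 1/(k*qe^2)
qe_true, k_true = 146.1, 0.002          # mg/g and g/(mg*min); k is assumed
ts = [5, 10, 20, 40, 60, 90, 120]       # contact times, min
qts = [qe_true**2 * k_true * t / (1 + qe_true * k_true * t) for t in ts]

# Ordinary least squares on (t, t/qt)
xs, ys = ts, [t / q for t, q in zip(ts, qts)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Recover the kinetic parameters from the regression line
qe_fit = 1 / slope
k_fit = 1 / (intercept * qe_fit ** 2)
```

With real data the quality of this line is what the abstract's "high coefficient of linear regression" refers to.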

Keywords: fly ash, mesoporous siliceous, sorption, sulphate

Procedia PDF Downloads 134
37 Eco-Fashion Dyeing of Denim and Knitwear with Particle-Dyes

Authors: Adriana Duarte, Sandra Sampaio, Catia Ferreira, Jaime I. N. R. Gomes

Abstract:

With the fashion for faded, worn garments, the textile industry has moved from indigo and pigments to dyes that are fixed by cationization, with products that can be toxic, and that show this effect after washing down the dye with friction and/or treating with enzymes in a subsequent operation. Increasingly, garments are treated with bleaches such as hypochlorite and permanganate, both toxic substances. An alternative process is presented in this work for both garment and jet dyeing, without pre-cationization, using "particle-dyes" instead. These are hybrid products, made up of an inorganic particle and an organic dye. With standard soluble dyes, it is not possible to prevent diffusion into the interior of the fiber without prior cationization. Only in this way can diffusion be avoided, keeping the center of the fibers undyed so as to produce the faded effect by removing the surface dye and showing the white fiber beneath. With "particle-dyes", prior cationization is avoided. At low application temperatures, the dye does not diffuse completely into the fiber, since it is a particle and not a soluble dye, and can thus give the faded effect. Bleaching can be used but can also be avoided; friction and enzymes can be used just as for other dyes. This fashion brought about new ways of applying reactive dyes through prior cationization of cotton, lowering the salt and temperatures that reactive dyes usually need to react, with the side benefit of a more environmentally friendly process. However, cationization can be problematic outside garment dyeing, such as in jet dyeing, where level dyeings are difficult to obtain. It should also be applied by a pad-fix or pad-batch process due to the low affinity of the pre-cationization products, making the process more expensive and risking unlevelness in processes such as jet dyeing. With particle-dyes, since no pre-cationization is necessary, they can be applied in jet dyeing. The excess dye is fixed by a fixing agent, which fixes the insoluble dye onto the surface of the fibers. With the fixing agent applied, only 1-3 rinses in water at room temperature are necessary, saving water and improving the wash fastness.

Keywords: denim, garment dyeing, worn look, eco-fashion

Procedia PDF Downloads 511
36 Assessing an Instrument Usability: Response Interpolation and Scale Sensitivity

Authors: Betsy Ng, Seng Chee Tan, Choon Lang Quek, Peter Looker, Jaime Koh

Abstract:

The purpose of the present study was to determine which scale rating stands out for an instrument designed to assess student perceptions of various learning environments, namely face-to-face, online and blended. The original instrument had 5-point Likert items (1 = strongly disagree and 5 = strongly agree). Alternate versions were modified with a 6-point Likert scale and a bar-scale rating. Participants, undergraduates at a local university, were involved in usability testing of the instrument in an electronic setting. They were presented with the 5-point, 6-point and percentage-bar (100-point) scale ratings in response to their perceptions of learning environments. The 5-point and 6-point Likert scales were presented as radio-button controls for each number, while the percentage-bar scale was presented as a sliding selection. Among these, the 6-point Likert scale emerged as the best overall. When participants were confronted with the 5-point items, they chose either 3 or 4, suggesting that data loss could occur due to the insensitivity of the instrument. This insensitivity could be due to the discrete options, as evidenced by response interpolation. To avoid the constraint of discrete options, the percentage-bar scale rating was tested, but the participant responses were not well interpolated. The bar scale might have allowed a variety of responses without the constraint of a set of categorical options, but it seemed to reflect a lack of perceived and objective accuracy. The 6-point Likert scale was more likely to reflect a respondent's perceived and objective accuracy, as well as higher sensitivity. This finding supports the conclusion that 6-point Likert items provide a more accurate measure of the participant's evaluation, while the 5-point and bar-scale ratings might not accurately measure the participants' responses. This study highlights the importance of the respondent's perception of accuracy, the respondent's true evaluation, and the scale's ease of use. Implications and limitations of this study are also discussed.

Keywords: usability, interpolation, sensitivity, Likert scales, accuracy

Procedia PDF Downloads 386
35 Numerical Investigation of the Integration of a Micro-Combustor with a Free Piston Stirling Engine in an Energy Recovery System

Authors: Ayodeji Sowale, Athanasios Kolios, Beatriz Fidalgo, Tosin Somorin, Aikaterini Anastasopoulou, Alison Parker, Leon Williams, Ewan McAdam, Sean Tyrrel

Abstract:

Recently, energy recovery systems have been thriving and attracting attention in the power generation sector, due to the demand for cleaner forms of energy that are friendly and safe for the environment. This has created an avenue for cogeneration, where Combined Heat and Power (CHP) technologies have been recognized for their feasibility and use in homes and small-scale businesses. The efficiency of combustors and the advantages of free piston Stirling engines over other conventional engines in terms of output power and efficiency have been observed and considered. This study presents the numerical analysis of a micro-combustor with a free piston Stirling engine in an integrated model of a Nano Membrane Toilet (NMT) unit. The NMT unit will use the micro-combustor to produce waste heat of high energy content from the combustion of human waste, and the heat generated will power the free piston Stirling engine, which will be connected to a linear alternator for electricity production. The thermodynamic influence of the combustor on the free piston Stirling engine was observed, based on the heat transfer from the flue gas to the working gas of the free piston Stirling engine. The results showed that with an input of 25 MJ/kg of faecal matter and a flue gas temperature of 773 K from the micro-combustor, the free piston Stirling engine generates a daily output power of 428 W, at a thermal efficiency of 10.7% and an engine speed of 1800 rpm. An experimental investigation into the integration of the micro-combustor and free piston Stirling engine with the NMT unit is currently underway.
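The reported figures can be checked with a back-of-the-envelope energy balance (an illustration only, not the authors' thermodynamic model): output power equals thermal efficiency times the heat input rate, so 428 W at 10.7% efficiency implies roughly 4 kW of heat delivered to the engine.

```python
# Simple energy-balance check on the reported Stirling engine figures
eta = 0.107            # reported thermal efficiency
p_out = 428.0          # reported output power, W
q_in = p_out / eta     # implied heat input rate from the combustor, W
```

This kind of check is useful for spotting inconsistent efficiency/power pairs before running a full thermodynamic simulation.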

Keywords: free piston stirling engine, micro-combustor, nano membrane toilet, thermodynamics

Procedia PDF Downloads 230
34 Effect of Naphtha in Addition to a Cycle Steam Stimulation Process Reducing the Heavy Oil Viscosity Using a Two-Level Factorial Design

Authors: Nora A. Guerrero, Adan Leon, María I. Sandoval, Romel Perez, Samuel Munoz

Abstract:

The addition of solvents in cyclic steam stimulation is a technique that has shown an impact on the improved recovery of heavy oils. With this technique, it is possible to reduce the steam/oil ratio in the last stages of the process, at which time this ratio increases significantly. The mobility of the improved crude oil increases due to the structural changes of its components, which are in turn reflected in the decrease in density and viscosity. In the present work, the effect of variables such as temperature, time, and weight percentage of naphtha was evaluated using a 2^3 factorial design of experiments. From the results of the analysis of variance (ANOVA) and the Pareto diagram, it was possible to identify the effect on viscosity reduction. The experimental representation of the crude-steam-naphtha interaction was carried out in a batch reactor on a Colombian heavy oil of 12.8° API and 3500 cP. The conditions of temperature, reaction time, and percentage of naphtha were 270-300 °C, 48-66 hours, and 3-9% by weight, respectively. The results showed a decrease in density, with values in the range of 0.9542 to 0.9414 g/cm³, while the viscosity decrease was on the order of 55 to 70%. On the other hand, simulated distillation results, according to ASTM D7169, revealed significant conversions of the 315 °C+ fraction. From nuclear magnetic resonance (NMR), infrared (FTIR) and ultraviolet-visible (UV-VIS) spectroscopy, it was determined that the increase in the yield of the light fractions in the improved crude is due to the breakdown of alkyl chains. The methodology for cyclic steam injection with naphtha and laboratory-scale characterization can be considered a practical tool in improved recovery processes.
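Main effects in a 2^3 factorial design are estimated as the difference between the average response at the high (+1) and low (-1) coded levels of each factor, which is also what a Pareto diagram ranks. A minimal sketch (the response values below are hypothetical viscosity reductions, not the study's data):

```python
from itertools import product

# 2^3 factorial design in coded units for temperature (T), time and naphtha,
# corresponding to the ranges 270-300 C, 48-66 h and 3-9 wt%
runs = list(product([-1, 1], repeat=3))
y = [55, 60, 57, 63, 58, 65, 61, 70]   # hypothetical viscosity reduction (%)

def main_effect(factor):
    """Average response at +1 minus average response at -1 for one factor."""
    hi = [yi for r, yi in zip(runs, y) if r[factor] == 1]
    lo = [yi for r, yi in zip(runs, y) if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(i)
           for i, name in enumerate(["T", "time", "naphtha"])}
```

Ranking the absolute effects (and the two- and three-factor interactions, computed the same way from products of coded columns) reproduces the Pareto-diagram screening the abstract describes.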

Keywords: viscosity reduction, cyclic steam stimulation, factorial design, naphtha

Procedia PDF Downloads 126
33 Exploring the Discrepancy: The Influence of Instagram in Shaping Idealized Lifestyles and Self-Perceptions Among Indian University Students

Authors: Dhriti Kirpalani

Abstract:

The survey aims to explore the impact of Instagram on the perception of lifestyle aspirations (such as social life, fitness, and fashion trends) and on self-perception in relation to an idealized lifestyle. Amidst today's media-saturated environment, university students are constantly exposed to idealized portrayals of lifestyles, often leading to unrealistic expectations and dissatisfaction with their own lives. This study investigates the impact of media on university students' perceptions of their own lifestyle, the discrepancy between their self-perception and the idealized lifestyle, and their mental health. Employing a mixed-methods approach, the study combines quantitative and qualitative data collection methods to understand the issue comprehensively. A literature review was conducted to determine the effects of idealized lifestyle portrayals on Instagram; however, the Indian setting has received less attention. The researchers employ a convenience sampling method among undergraduate students from India. The surveys used for the quantitative analysis are the Negative Social Media Comparison Scale (NSMCS), the Lifestyle Satisfaction Scale (LSS), the Psychological Well-being Scale (PWB), and the Self-Perception Profile for Adolescents (SPPA). The qualitative component includes in-depth interviews to provide deeper insights into participants' experiences and the mechanisms by which media influences their lifestyle aspirations and mental health. As an exploratory study, it is grounded in the social comparison theory described by Leon Festinger. The findings aim to inform interventions that promote realistic expectations about lifestyle, reduce the negative effects of media on university students, and improve their mental health and well-being.

Keywords: declined self-perception, idealized lifestyle, Instagram, Indian university students, social comparison

Procedia PDF Downloads 19