Search results for: convergence process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15705


10785 Large-Area Film Fabrication for Perovskite Solar Cell via Scalable Thermal-Assisted and Meniscus-Guided Bar Coating

Authors: Gizachew Belay Adugna

Abstract:

Scalable and cost-effective device fabrication techniques are urgently needed to commercialize perovskite solar cells (PSCs) as the next photovoltaic (PV) technology. Herein, large-area films of perovskite and hole-transporting materials (HTMs) were developed via a rapid and scalable thermal-assisted bar-coating process in open air. High-quality, large crystalline grains of MAPbI₃ with homogeneous morphology and thickness were obtained on a large-area (10 cm × 10 cm) solution-sheared mp-TiO₂/c-TiO₂/FTO substrate. With 2,2′,7,7′-tetrakis-(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-OMeTAD) as the HTM, devices fabricated from the bar-coated perovskite film achieved an encouraging photovoltaic performance of 19.02%, compared to 17.27% for the small-scale spin-coated film, whereas a higher power conversion efficiency (PCE) of 19.89% with improved device stability was achieved by capping with a fluorinated HTM (HYC-2) as an alternative to the traditional spiro-OMeTAD. The fluorinated HTM exhibited better molecular packing in the HTM film and a deeper HOMO level than its nonfluorinated counterpart; thus, improved hole mobility and overall charge extraction were demonstrated in the device. Furthermore, excellent film processability and an impressive PCE of 18.52% were achieved with the large-area bar-coated HYC-2 prepared sequentially on the perovskite underlayer in the open atmosphere, compared to 17.51% for the bar-coated spiro-OMeTAD/perovskite. This all-solution approach demonstrated the feasibility of high-quality films on large-area substrates for PSCs, a vital step toward industrial-scale PV production.

Keywords: perovskite solar cells, hole transporting materials, up-scaling process, power conversion efficiency

Procedia PDF Downloads 71
10784 Electrospun Conducting Polymer/Graphene Composite Nanofibers for Gas Sensing Applications

Authors: Aliaa M. S. Salem, Soliman I. El-Hout, Amira Gaber, Hassan Nageh

Abstract:

Nowadays, the development of detectors for poisonous gases is an urgent matter for protecting human health and the environment, given that even a minimal amount of a poisonous gas can be fatal. To address these concerns, various inorganic and organic sensing materials have been used. Among these, conducting polymers have served as the active material in gas sensors due to their low cost, easily controllable molding, good electrochemical properties, facile fabrication, inherent physical properties, biocompatibility, and optical properties. Moreover, conducting polymer-based chemical sensors have notable advantages over conventional ones, including structural diversity, facile functionalization, room-temperature operation, and easy fabrication. However, the low selectivity and conductivity of conducting polymers have motivated doping them with various materials, especially graphene, to enhance gas-sensing performance under ambient conditions. A number of approaches have been proposed for producing polymer/graphene nanocomposites, including template-free self-assembly, hard physical template-guided synthesis, chemical and electrochemical methods, and electrospinning. In this work, we aim to prepare a novel gas sensor based on electrospun nanofibers of a conducting polymer/RGO composite for the effective and efficient detection of poisonous gases such as ammonia, in application areas including environmental gas analysis and the chemical, automotive, and medical industries. Moreover, our ultimate objective is to maximize the sensing performance of the prepared sensor and to characterize its recovery properties.

Keywords: electrospinning process, conducting polymer, polyaniline, polypyrrole, polythiophene, graphene oxide, reduced graphene oxide, functionalized reduced graphene oxide, spin coating technique, gas sensors

Procedia PDF Downloads 187
10783 Social and Educational AI for Diversity: Research on Democratic Values to Develop Artificial Intelligence Tools to Guarantee Access for All to Educational Tools and Public Services

Authors: Roberto Feltrero, Sara Osuna-Acedo

Abstract:

Responsible Research and Innovation has to accomplish one fundamental aim: everybody has to share in the benefits of innovation, but innovation also has to be democratic; that is to say, everybody should have the possibility to participate in the decisions of the innovation process. In particular, a democratic and inclusive model of social participation and innovation includes persons with disabilities and people at risk of discrimination. Innovations in Artificial Intelligence for social development have to accomplish the same dual goal: improving equality of access to fields of public interest like education, training and public services, as well as improving civic and democratic participation in the process of developing such innovations for all. This research aims to develop innovations, policies and policy recommendations that apply and disseminate such an artificial intelligence and social model to make educational and administrative processes more accessible. The first step is designing a citizen participation process to engage citizens in the design and use of artificial intelligence tools for public services. This will improve trust in democratic institutions, contributing to the transparency, effectiveness, accountability and legitimacy of public policy-making and allowing people to participate in the development of ethical standards for the use of such technologies. The second step is improving educational tools for lifelong learning with AI models to improve accountability and educational data management. Dissemination, education and social participation will be integrated, measured and evaluated in innovative educational processes to make accessible all the educational technologies and content developed on AI for responsible and social innovation. A particular case will be presented regarding access for all to educational tools and public services.
This accessibility requires cognitive adaptability because legal or administrative language is often very complex, not only for people with cognitive disabilities but also for older people or citizens at risk of educational or social discrimination. Artificial intelligence natural language processing technologies can provide tools to translate legal, administrative, or educational texts into simpler language accessible to everybody. Despite technological advances in language processing and machine learning, this becomes a huge project if the ethical and legal consequences are truly to be respected, because those consequences can only be addressed through civil and democratic engagement in two realms: 1) democratically selecting the texts that need, and can, be translated, and 2) involving citizens, experts and non-experts alike, to produce and validate real examples of legal texts with cognitive adaptations, which feed artificial intelligence algorithms that learn how to translate those texts into simpler, more accessible language adapted to any kind of population.
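The lexical-simplification step sketched in the abstract can be illustrated with a deliberately minimal, rule-based example. The substitution table below is hypothetical and merely stands in for the citizen-validated text pairs the authors describe; in practice a learned model trained on those validated examples would replace it.

```python
# Hypothetical sketch: rule-based lexical simplification of administrative text.
# PLAIN_SUBSTITUTIONS is an illustrative stand-in for citizen-validated
# complex-to-plain pairs, not a real validated resource.
import re

PLAIN_SUBSTITUTIONS = {
    "commence": "start",
    "terminate": "end",
    "remuneration": "pay",
    "prior to": "before",
}

def simplify(text: str) -> str:
    """Replace complex terms with plain-language equivalents (case-insensitive)."""
    for complex_term, plain_term in PLAIN_SUBSTITUTIONS.items():
        text = re.sub(re.escape(complex_term), plain_term, text, flags=re.IGNORECASE)
    return text

print(simplify("Remuneration will commence prior to the review."))
# → "pay will start before the review."
```

A dictionary lookup like this ignores context entirely; the abstract's point is precisely that selecting and validating such substitutions needs democratic participation before any algorithm learns from them.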

Keywords: responsible research and innovation, AI social innovations, cognitive accessibility, public participation

Procedia PDF Downloads 90
10782 Factors Affecting the Academic Performance of In-Service Students in Science Education

Authors: Foster Chilufya

Abstract:

This study sought to determine the factors that affect the academic performance of mature-age students in Science Education at the University of Zambia. It was guided by Maslow’s Hierarchy of Needs, which provided the relationship between achievement motivation and academic performance. A descriptive research design was used, and both qualitative and quantitative methods were employed to collect data from 88 respondents, who were selected through simple random and purposive sampling procedures. Concerning the factors that motivate mature-age students to choose Science Education programs, the following were cited: the need for self-actualization, acquisition of new knowledge, encouragement from friends and family members, good performance at high school and diploma level, love for the sciences, prestige, and the desire to be promoted at their places of work. As regards the factors that affected the academic performance of mature-age students, both negative and positive factors were identified. These included demographic factors such as age and gender; psychological characteristics such as motivation and preparedness to learn, self-set goals, self-esteem, ability, confidence and persistence; prior academic performance at high school and college level; social factors; institutional factors; and the outcomes of the learning process. To address the factors that negatively affect the academic performance of mature-age students, the following measures were identified: encouraging group discussions, encouraging an interactive learning process, providing a conducive learning environment, reviewing the Science Education curriculum and providing adequate learning materials. Based on these findings, it is recommended that the School of Education introduce a program in Science Education specifically for students training to be teachers of science, and additionally introduce majors in Physics Education, Biology Education, Chemistry Education and Mathematics Education relevant to what is taught in high schools.

Keywords: academic, performance, in-service, science

Procedia PDF Downloads 311
10781 Mapping Contested Sites: Permanence of the Temporary, Mouttalos Case Study

Authors: M. Hadjisoteriou, A. Kyriacou Petrou

Abstract:

This paper discusses ideas of social sustainability in urban design and human behavior in multicultural contested sites. It focuses on the potential of re-reading the “site” through mapping, which acts as a research methodology, and discusses the chosen site of Mouttalos, Cyprus as a place of multiple identities. Through a mapping methodology using a bottom-up approach, a process of disassembling emerges that acts as a mechanism to re-examine space and place by searching for the invisible and the non-measurable, understanding the site through its detailed inhabitation patterns. The significance of this study lies in the use of mapping as an active form of thinking rather than a passive process of representation, allowing a new site to be discovered and giving multiple opportunities for adaptive urban strategies and socially engaged design approaches. We discuss these themes through the chosen contested site of Mouttalos, a small Turkish Cypriot neighbourhood in the old centre of Paphos (Ktima), in the southwest of Cyprus. During the political unrest between the Greek and Turkish Cypriot communities in 1963, the area became an enclave for the Turkish Cypriots, excluding any contact with the rest of the area. Following the Turkish invasion of 1974, the residents left their homes, plots and workplaces, resettling in the north of Cyprus, and Greek Cypriot refugees moved into the area. The presence of the Greek Cypriot refugees is still considered a temporary resettlement; the buildings and the residents themselves exist in a state of uncertainty. The site is documented through a series of parallel investigations into its physical conditions and history. The research methodology uses the process of mapping to expose the complex and often invisible layers of information that coexist.
By registering the site through the subjective experiences and everyday stories of inhabitants, a series of cartographic recordings reveals the space between happening and narrative, and especially the space between different cultures and religions. The research put specific emphasis on engaging the public, promoting social interaction, and identifying spatial patterns of occupation by previous inhabitants through social media. The findings exposed three main areas of interest. Firstly, we identified interdependent relationships between permanence and temporality, characterised by elements such as signage through layers of time, past events and periodical street festivals, unfolding memory and belonging. Secondly, issues of co-ownership and occupation were found through particular narratives of exchange between the two communities and through the appropriation of space. Finally, formal and informal inhabitation of space was revealed through the presence of informal shared backyards, alternative paths, porous street edges, and formal and informal landmarks. The importance of these findings lies in shifting the focus from the built infrastructure to the soft network of multiple and complex relations of dependence and autonomy. Proposed interventions for this contested site were informed and led by a new multicultural identity, whose invisible qualities were revealed through the process of mapping, taking on issues of layers of time, formal and informal inhabitation, and the “permanence of the temporary”.

Keywords: contested sites, mapping, social sustainability, temporary urban strategies

Procedia PDF Downloads 421
10780 Trauma in the Unconsoled: A Crisis of the Self

Authors: Assil Ghariri

Abstract:

This article studies the process of rewriting the self through memory in Kazuo Ishiguro’s novel The Unconsoled (1995). It deals with the journey that the protagonist, Mr. Ryder, takes through the unconscious in search of his real self, in which trauma stands as an obstacle. The article uses Carl Jung’s theory of archetypes. Trauma, in this article, is discussed as one of the true obstacles of the unconscious that prevent people from realizing the truth about themselves.

Keywords: Carl Jung, Kazuo Ishiguro, memory, trauma

Procedia PDF Downloads 403
10779 Dairy Wastewater Treatment by Electrochemical and Catalytic Method

Authors: Basanti Ekka, Talis Juhna

Abstract:

Dairy industrial effluents originating from typical processing activities are composed of various organic and inorganic constituents, including proteins, fats, inorganic salts, antibiotics, detergents, sanitizers, pathogenic viruses, bacteria, etc. These contaminants are harmful not only to human beings but also to aquatic flora and fauna. Because the effluents contain such broad classes of contaminants, the specific targeted removal methods available in the literature are not viable solutions on the industrial scale. Therefore, in this ongoing research, a series of coagulation, electrochemical, and catalytic methods will be employed. Bulk coagulation and electrochemical methods can wash off most of the contaminants, but some harmful chemicals may slip through; therefore, catalysts designed and synthesized for the purpose will be employed for the removal of the targeted chemicals. In the context of Latvian dairy industries, work is presently under way on the characterization of dairy effluents by total organic carbon (TOC), Inductively Coupled Plasma Mass Spectrometry (ICP-MS)/Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), High-Performance Liquid Chromatography (HPLC), Gas Chromatography-Mass Spectrometry (GC-MS), and Mass Spectrometry. After careful evaluation of the dairy effluents, a cost-effective natural coagulant will be employed prior to advanced electrochemical technologies such as electrocoagulation and electro-oxidation as a secondary treatment process. Finally, graphene oxide (GO) based hybrid materials will be used for the post-treatment of dairy wastewater, as graphene oxide has been widely applied in fields such as environmental remediation and energy production due to the presence of various oxygen-containing groups. Modified GO will be used as a catalyst for the removal of the contaminants remaining after the electrochemical process.

Keywords: catalysis, dairy wastewater, electrochemical method, graphene oxide

Procedia PDF Downloads 144
10778 Knowledge Management Strategies within a Corporate Environment

Authors: Daniel J. Glauber

Abstract:

Knowledge transfer between personnel could benefit an organization’s competitive advantage in the marketplace through a strategic approach to knowledge management. A lack of information sharing between personnel can create knowledge transfer gaps while restricting decision-making processes; knowledge transfer between personnel can potentially improve information sharing when a knowledge management strategy is implemented. An organization’s capacity to gain more knowledge is aligned with its prior or existing captured knowledge. This case study attempted to understand the overall influence of a knowledge management system (KMS) within the corporate environment and the knowledge exchange between personnel. The significance of this study was to help understand how organizations can improve the Return on Investment (ROI) of a knowledge management strategy within a knowledge-centric organization. A qualitative descriptive case study was the research design selected for this study. Developing a knowledge management strategy acceptable at all levels of the organization requires cooperation in support of a common organizational goal, working with management and executive members to develop a protocol in which knowledge transfer becomes standard practice across multiple tiers of the organization. The knowledge transfer process can be made measurable by focusing on specific elements of the organizational process, including personnel transitions, to help reduce the time required to understand the job. The organization studied in this research acknowledged the need for improved knowledge management activities to help organize, retain, and distribute information throughout the workforce.
Data produced from the study indicate three main themes identified by the participants: information management, organizational culture, and knowledge sharing within the workforce. These themes indicate a possible connection between an organization’s KMS, the organization’s culture, knowledge sharing, and knowledge transfer.

Keywords: knowledge transfer, management, knowledge management strategies, organizational learning, codification

Procedia PDF Downloads 442
10777 Spatial Data Mining: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies; this information is often handled by geographic information systems (GIS) and stored in spatial databases. Classical data mining has revealed a weakness in extracting knowledge from these enormous amounts of data, owing to the particularity of spatial entities, which are characterized by their interdependence (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among its methods we distinguish the monothematic and the thematic. Geo-clustering, one of the main tasks of spatial data mining, belongs to the monothematic methods. It groups similar geospatial entities into the same class and assigns more dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geospatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which applies algorithms designed for the direct treatment of spatial data, and an approach based on pre-processing the spatial data, which applies classic clustering algorithms to the pre-processed data (integrating the spatial relationships).
This pre-processing-based approach is quite complex in several cases, so the search for approximate solutions involves the use of approximation algorithms. Among these, we are interested in dedicated approaches (partitioning and density-based clustering methods) and bee-inspired approaches (a biomimetic approach). Our study proposes a design for this problem that uses different algorithms to automatically detect geospatial neighborhoods in order to implement geo-clustering by pre-processing, and applies the bees algorithm to this problem for the first time in the geospatial field.
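The density-based family of methods the abstract mentions can be illustrated with a minimal DBSCAN-style sketch over 2-D point coordinates, where the spatial neighborhood is pre-computed from a Euclidean distance threshold. This is a generic illustration of density clustering, not the paper's algorithm, and the `eps`/`min_pts` values are arbitrary.

```python
# Illustrative sketch: minimal DBSCAN-style density clustering of 2-D points.
# The neighborhood of each point is pre-computed from a distance threshold
# `eps`, mirroring the "pre-processing of spatial relationships" idea.
from math import dist

def dbscan(points, eps=1.0, min_pts=3):
    """Return one cluster label per point; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1
    # Pre-computed spatial neighborhoods (each point neighbors itself).
    neighbors = [
        [j for j, q in enumerate(points) if dist(p, q) <= eps]
        for p in points
    ]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        if len(neighbors[i]) < min_pts:
            labels[i] = -1           # provisionally noise
            continue
        cluster += 1                 # i is a core point: start a new cluster
        labels[i] = cluster
        frontier = list(neighbors[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors[j]) >= min_pts:
                frontier.extend(neighbors[j])  # expand through core points
    return labels

# Two dense groups plus one isolated outlier.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=3))  # → [0, 0, 0, 1, 1, 1, -1]
```

With real geospatial data the Euclidean threshold would be replaced by whatever neighborhood relation the pre-processing step derives (adjacency, network distance, etc.).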

Keywords: mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 375
10776 Childhood Warscape, Experiences from Children of War Offer Key Design Decisions for Safer Built Environments

Authors: Soleen Karim, Meira Yasin, Rezhin Qader

Abstract:

Children’s books present a colorful life for kids around the world: their current environment or what they could potentially have, namely a home, two loving parents, a playground, and a safe school within a short walk or bus ride. These images are only pages in a donated book for children displaced by war; the environment they live in is significantly different. Displaced children face a temporary lifestyle filled with fear and uncertainty. Children of war associate various structural institutions with trauma and cannot enter such a space, even one intended for their own future development, such as a school. This paper is a collaborative effort between students of the Kennesaw State University architecture department, architectural designers, and a mental health professional to address and link the design challenges and the psychological trauma experienced by children of war. The research process consists of a) interviews with former refugees, b) interviews with current refugee children, c) personal understanding of space through one’s own childhood, and d) a literature review of tested design methods for addressing various traumas. Conclusion: in addressing the built environment for children of war, it is necessary to address mental health and wellbeing through the creation of space that is sensitive to the needs of children. This is achieved by understanding critical design cues that evoke normalcy and safe space through program organization, color, and a symbiosis of synthetic and natural environments. By involving the children suffering from trauma in the design process, aspects of the design are directly enhanced to serve the occupant. Neglecting to involve these participants creates a nonlinear design outcome and fails to serve the needs of the occupants, denying them the same opportunity for learning and growth as other children around the world.

Keywords: activist architecture, childhood education, childhood psychology, adverse childhood experiences

Procedia PDF Downloads 140
10775 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that across human history and literature, derogatory and sexist adjectives are used significantly more frequently when describing females than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and to reflect gender equality. Further, this paper addresses non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. It also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society goes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology and must be taken into account immediately.
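The corpus-level measurement the abstract describes can be sketched crudely by counting which adjectives co-occur with female versus male terms in sentences. The toy corpus and adjective list below are invented for illustration; a real study would use large book corpora and part-of-speech tagging rather than a fixed word set.

```python
# Hedged illustration (toy corpus, not the paper's data): count adjective
# co-occurrence with gendered terms as a crude proxy for corpus-level bias.
from collections import Counter
import re

FEMALE = {"she", "her", "woman", "women"}
MALE = {"he", "his", "man", "men"}
ADJECTIVES = {"hysterical", "shrill", "brilliant", "strong"}  # illustrative list

def adjective_counts(corpus):
    """Count adjective occurrences in sentences mentioning each gender."""
    counts = {"female": Counter(), "male": Counter()}
    for sentence in corpus:
        tokens = set(re.findall(r"[a-z']+", sentence.lower()))
        for gender, markers in (("female", FEMALE), ("male", MALE)):
            if tokens & markers:
                counts[gender].update(tokens & ADJECTIVES)
    return counts

corpus = [
    "She was called hysterical by the press.",
    "He was described as brilliant and strong.",
    "The woman was dismissed as shrill.",
]
print(adjective_counts(corpus))
```

Even this naive count makes the asymmetry visible on a toy example; embedding-based association tests generalize the same idea to distributional semantics.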

Keywords: gendered grammar, misogynistic language, natural language processing, neural networks

Procedia PDF Downloads 120
10774 Blue Hydrogen Production Via Catalytic Aquathermolysis Coupled with Direct Carbon Dioxide Capture Via Adsorption

Authors: Sherif Fakher

Abstract:

Hydrogen has been gaining global attention as a rising contributor to the energy sector. Labeled an energy carrier, hydrogen is used in many industries and can be used to generate electricity via fuel cells. Blue hydrogen involves producing hydrogen from hydrocarbons via processes that emit CO₂; however, the CO₂ is captured and stored, so very little environmental damage occurs during the hydrogen production process. This research investigates the ability to use different catalysts to produce hydrogen from different hydrocarbon sources, including coal, oil, and gas, using a two-step aquathermolysis reaction. It presents the results of experiments conducted to evaluate different catalysts and highlights the main advantages of this process over other blue hydrogen production methods, including methane steam reforming, autothermal reforming, and oxidation. Two methods of hydrogen generation were investigated: partial oxidation and aquathermolysis. For these two reactions, the reaction kinetics, thermodynamics, and medium were all investigated. Experiments were then conducted to test the hydrogen generation potential of both methods. The porous media tested were sandstone, ash, and pozzolanic material; the spent oils used were spent motor oil and spent vegetable oil from cooking. Experiments were conducted at temperatures up to 250 °C and pressures up to 3000 psi. Based on the experimental results, mathematical models were developed to predict the hydrogen generation potential at more severe thermodynamic conditions. Since both partial oxidation and aquathermolysis require relatively high temperatures, it was important to devise a method by which these temperatures could be generated at low cost. This was done by investigating two factors: the porous media used and the reliance on the spent oil.
Of all the porous media used, ash had the highest thermal conductivity. The second step was partial combustion of part of the spent oil to generate the heat needed to reach the high temperatures, which reduced the cost of heat generation significantly. For the partial oxidation reaction, the spent oil was burned in the presence of a limited oxygen concentration to generate carbon monoxide. The main drawback of this process is the need for burning, which generates other harmful and environmentally damaging gases. Aquathermolysis does not rely on burning, which makes it the cleaner alternative; however, it needs much higher temperatures to run the reaction. When comparing the hydrogen generation potential of both methods using gas chromatography, aquathermolysis generated 23% more hydrogen than partial oxidation from the same volume of spent oil. This research introduces the concept of using spent oil for hydrogen production, a promising route to a clean source of energy from a waste product. It can also help reduce the reliance on freshwater for hydrogen generation, diverting freshwater to other, more important applications.

Keywords: blue hydrogen production, catalytic aquathermolysis, direct carbon dioxide capture, CCUS

Procedia PDF Downloads 31
10773 Simulation-Based Validation of Safe Human-Robot-Collaboration

Authors: Titanilla Komenda

Abstract:

Human-machine collaboration is a direct interaction between humans and machines to fulfil specific tasks. These so-called collaborative machines are used without fencing and interact with humans in predefined workspaces. Even though human-machine collaboration enables flexible adaptation to variable degrees of freedom, industrial applications are rarely found. The reason is not a lack of technical progress but rather limitations in the planning processes that must ensure safety for operators. Until now, humans and machines have mainly been considered separately in the planning process, focusing on ergonomics and system performance, respectively. Within human-machine collaboration, these aspects must not be seen in isolation from each other but need to be analysed in interaction. Furthermore, a simulation model is needed that can validate the system performance and ensure the operator's safety at any given time. Following on from this, a holistic simulation model is presented, enabling a simulative representation of collaborative tasks, including both humans and machines. The presented model includes not only a geometry and a motion model of interacting humans and machines but also a numerical behaviour model of humans as well as a Boolean probabilistic sensor model. With this, error scenarios can be simulated, validating system behaviour in unplanned situations. As these models can be defined on the basis of a Failure Mode and Effects Analysis as well as error probabilities, the implementation in a collaborative model is discussed and evaluated with regard to limitations and simulation times. The functionality of the model is shown on industrial applications by comparing simulation results with video data. The analysis shows the impact of considering human factors in the planning process, in contrast to meeting system performance alone. In this sense, an optimisation function is presented that addresses the trade-off between human and machine factors and aids in the successful and safe realisation of collaborative scenarios.
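A probabilistic sensor model combined with FMEA-derived error probabilities lends itself to Monte Carlo evaluation of error scenarios. The sketch below is a minimal, hypothetical illustration of that idea; both probabilities are invented stand-ins, not values from the paper.

```python
# Hypothetical sketch: Monte Carlo estimate of the probability that a human
# enters the shared workspace while the safety sensor misses the detection.
# P_HUMAN_IN_WORKSPACE and P_SENSOR_MISS are illustrative stand-ins for
# FMEA-derived values, not figures from the study.
import random

P_HUMAN_IN_WORKSPACE = 0.05   # per-cycle probability of a human intrusion
P_SENSOR_MISS = 0.01          # Boolean sensor model: missed-detection probability

def estimate_undetected_intrusion(cycles=100_000, seed=42):
    """Fraction of cycles with an undetected intrusion (analytic value 0.0005)."""
    rng = random.Random(seed)
    undetected = 0
    for _ in range(cycles):
        human_present = rng.random() < P_HUMAN_IN_WORKSPACE
        sensor_missed = rng.random() < P_SENSOR_MISS
        if human_present and sensor_missed:
            undetected += 1
    return undetected / cycles

rate = estimate_undetected_intrusion()
print(f"Estimated undetected-intrusion rate: {rate:.5f}")
```

The holistic model described in the abstract replaces these two independent coin flips with coupled geometry, motion, and behaviour models, but the validation principle, sampling error scenarios against failure probabilities, is the same.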

Keywords: human-machine-system, human-robot-collaboration, safety, simulation

Procedia PDF Downloads 361
10772 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River

Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang

Abstract:

The analysis and study of open-channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on the hydrodynamic equations. The overall spatial characteristics of rivers, i.e., their length-to-depth-to-width ratios, generally allow one to disregard processes occurring in the vertical and transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and the extrapolation of various scenarios. The Magdalena River in Colombia is a large river draining the country from south to north over 1,550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water level fluctuation and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander evolves at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a River Surveyor ADCP. Also, in order to characterize the erosion process occurring through the meander, extensive suspended-sediment and river-bed samples were retrieved, as well as soil perforations along the banks. Hence, based on the DEM from the digital ground mapping survey and the field data, a 2DH flow model was prepared using the Iber freeware, which is based on the finite volume method in an unstructured mesh environment. The calibration was carried out by comparing against available historical data from a nearby hydrologic gauging station. Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and its limitations related to the pressure conditions did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings. Notably, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross section, a consequence of severe erosion, has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.
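A minimal sketch of the 1D hydrostatic approximation mentioned above, using Manning's equation for steady uniform flow in a wide rectangular channel; only the 275 m average width and 0.0024 average slope come from the abstract, while the flow depth and roughness coefficient are illustrative assumptions:

```python
# Minimal sketch: steady uniform 1D open-channel flow via Manning's equation.
# Depth and Manning's n below are illustrative placeholders, not survey values.

def manning_discharge(width_m, depth_m, slope, n):
    """Discharge (m^3/s) for a rectangular channel, SI Manning formula."""
    area = width_m * depth_m                    # flow cross-section
    wetted_perimeter = width_m + 2.0 * depth_m  # bed plus two banks
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Average Magdalena figures from the abstract: 275 m width, 0.0024 slope.
q = manning_discharge(width_m=275.0, depth_m=4.0, slope=0.0024, n=0.035)
print(round(q, 1))
```

A 2DH finite-volume model such as Iber solves the depth-averaged shallow-water equations instead, which is why the helical (secondary) flow through the meander remains out of reach.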

Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander

Procedia PDF Downloads 319
10771 Historical Analysis of Two Types of Urbanization Changing Both the Aspect and Identity of a Town in Transylvania, Romania

Authors: Ágota Ladó

Abstract:

Miercurea Ciuc is a town in the historical region of Szeklerland in Transylvania, Romania, with a predominantly Hungarian population (its name in Hungarian being Csíkszereda), whose urban landscape and environment have been shaped dramatically by different perceptions of urbanization throughout history. The town was part of Hungary and the Austro-Hungarian Empire before the First World War. It even gained an important role, becoming in 1876 the seat and administrative center of the historical Csík county. This marks the beginning of the first urbanization process: new administrative buildings, railways, a railway station, a hospital, a Redoute, and new schools were built, and new streets were opened. However, not only the public facilities changed: the center of the town, with its private houses, was also transformed, and new, modern decorative and lifestyle elements appeared. One of the streets in the town center, Kossuth street, was featured on many postcards of the time; a novel even mentioned it as a symbol of modern urbanization. Right after the First World War, the town became part of Romania, and aside from a short interruption (between 1940 and 1944), it still is. The beginning of the second major urbanization process, exactly one hundred years later, is marked by the visit of the communist leader Nicolae Ceaușescu to Miercurea Ciuc on the 6th of October 1976. In the following years, he decided to demolish the old Kossuth street and to construct a new avenue with tall blocks of flats according to the principles of socialist urbanization. No other Transylvanian settlement went through such a systematic abolition of its historical center and urban history during the communist era. Not only the urban landscape has been affected: the collective memory and contemporary identity of the locals are also violated by this recent transformation of the town, as important spaces, buildings, and venues of activities and events simply cannot be localized, and thus understood, by the younger generations.

Keywords: communist era, historical urban landscape, urban identity, urbanization

Procedia PDF Downloads 179
10770 Performance Evaluation of GPS/INS Main Integration Approach

Authors: Othman Maklouf, Ahmed Adwaib

Abstract:

This paper presents a comparative study of the main GPS/INS coupling schemes, namely the loosely coupled and tightly coupled configurations, under several types of situations and operational conditions, in which the data fusion is performed using Kalman filtering. The study also covers the importance of sensor calibration as well as the alignment of the strapdown inertial navigation system. The limitations of inertial navigation systems are investigated.
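A minimal sketch of the loosely coupled scheme described above, where a scalar Kalman filter corrects an INS dead-reckoned position with GPS position fixes; all noise variances and the trajectory are illustrative assumptions, not values from the study:

```python
# Minimal sketch of loosely coupled GPS/INS fusion: a scalar Kalman filter
# propagates position with INS increments (predict) and corrects it with
# GPS fixes (update). Noise values and motion are illustrative assumptions.

def kalman_fuse(ins_increments, gps_fixes, q=0.5, r=4.0):
    """Fuse INS position increments with GPS fixes; returns the estimates."""
    x, p = 0.0, 1.0                       # state estimate and its variance
    estimates = []
    for dx, z in zip(ins_increments, gps_fixes):
        x, p = x + dx, p + q              # predict: dead-reckon with INS
        k = p / (p + r)                   # Kalman gain
        x = x + k * (z - x)               # update with the GPS measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# True motion: 1 m per step. The INS increments carry a 0.1 m/step bias,
# which the (here noise-free) GPS fixes progressively correct.
ins = [1.1] * 10
gps = [float(i + 1) for i in range(10)]
est = kalman_fuse(ins, gps)
print(round(est[-1], 2))
```

A tightly coupled scheme would instead fuse raw pseudoranges inside the same filter; the predict/update split is unchanged.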

Keywords: GPS, INS, Kalman filter, sensor calibration, navigation system

Procedia PDF Downloads 590
10769 Tritium Activities in Romania, Potential Support for Development of ITER Project

Authors: Gheorghe Ionita, Sebastian Brad, Ioan Stefanescu

Abstract:

In any fusion device, tritium plays a key role both as a fuel component and, due to its radioactivity and its easy incorporation as tritiated water (HTO), as a species whose emission must be controlled. As for the ITER project, to reduce the constant potential for tritium emission, a Water Detritiation System (WDS) and an Isotopic Separation System (ISS) will be implemented. At the same time, during the operation of fission CANDU reactors, the tritium content of the heavy water used as moderator and cooling agent increases (due to neutron activation) and has to be reduced, too. In Romania, at the National Institute for Cryogenics and Isotopic Technologies (ICIT Rm-Valcea), there is an Experimental Pilot Plant for Tritium Removal (ExpTRF), with the aim of providing technical data for the design and operation of an industrial plant for the detritiation of heavy water from the CANDU reactors at the Cernavoda NPP. The selected technology is based on the catalyzed isotopic exchange process between deuterium and liquid water (LPCE) combined with the cryogenic distillation process (CD). This paper presents an updated review of activities in the field carried out in Romania after the year 2000, and in particular those related to the development and operation of the Tritium Removal Experimental Pilot Plant. A comparison between the experimental pilot plant and the industrial plant to be implemented at Cernavoda NPP is also presented. The similarities between the experimental pilot plant at ICIT Rm-Valcea and the water detritiation and isotopic separation systems of ITER are also presented and discussed. Many aspects, or 'open issues', relating to the WDS and ISS could be checked and clarified through a special research program developed within ExpTRF. Through these achievements and results, ICIT Rm-Valcea has proved its expertise and capability concerning tritium management, and therefore its competence may be used within the ITER project.

Keywords: ITER project, heavy water detritiation, tritium removal, isotopic exchange

Procedia PDF Downloads 413
10768 Heritage Tourism and the Changing Rural Landscape: Case Study of Cultural Landscape of Honghe Hani Rice Terraces

Authors: Yan Wang, Mathis Stock

Abstract:

The World Heritage Site of the Honghe Hani rice terraces, also a marginal rural region in southern China, is undergoing rapid change because of urbanization and heritage tourism. Influenced by out-migration and changing ways of living in the urbanization process, the place shows a tendency to lose its rice terrace landscape, traditional houses, and other forms of cultural tradition. However, heritage tourism tends to keep the past, valorizing it for tourism purposes, and diversifies rural livelihood strategies. The place stands at these development trajectories, where the same resources are subjected to different uses by different actors. The research seeks to answer the questions of how the site is transformed and co-constructed by different institutions, practices, and actors, and how heritage tourism affects local livelihoods. The research aims to describe the transformation of villages, rice terraces, and cultural traditions, analyze the place-making process, and assess the role of heritage tourism in the local livelihood transition. The research uses a mix of methods including direct observation, participant observation, and interviews; it collects various data in the form of images, words, narratives, and statistics, and analyzes them both qualitatively and quantitatively. Theoretically, it is hoped that the research will reexamine the concept of heritage and the world heritage practice of UNESCO, reveal the conflicts heritage entails in development, and bring more thought from a functional perspective on heritage in relation to rural development. Practically, it is also anticipated that the research could assess the linkage between heritage tourism and local livelihoods and generate concrete suggestions on how tourism could engage locals and improve their livelihoods.

Keywords: cultural landscape, Hani rice terraces, heritage tourism, livelihood strategy, place making, rural development, transformation

Procedia PDF Downloads 231
10767 Enhancing Higher Education Teaching and Learning Processes: Examining How Lecturer Evaluation Make a Difference

Authors: Daniel Asiamah Ameyaw

Abstract:

This research investigates how lecturer evaluation makes a difference in enhancing higher education teaching and learning processes. Two research questions guide the work: first, "What are the perspectives on the difference made by evaluating academic teachers in order to enhance higher education teaching and learning processes?" and second, "What are the implications of the findings for policy and practice?" Data for this research were collected mainly through interviews and partly through document review. Data analysis was conducted within the framework of grounded theory. The findings showed that at the individual lecturer level, lecturer evaluation provides continuous improvement of teaching strategies and serves as a source of data for research on teaching. At the individual student level, it enhances the students' learning process, serves as a source of information for course selection by students, and makes students feel recognised in the educational process. At the institutional level, lecturer evaluation is useful in personnel and management decision making; it assures stakeholders of quality teaching and learning by setting up standards for lecturers; and it enables institutions to identify skill requirements and needs as a basis for organising workshops. At the national level, lecturer evaluation is useful for guaranteeing the competencies of graduates, who then provide the needed manpower of the nation. Besides, resource allocation to higher education institutions is based largely on the quality of the programmes being run by each institution. The researcher concluded that the findings have implications for policy and practice; higher education managers are therefore expected to ensure that policy is implemented as planned by policy-makers so that the objectives can be successfully achieved.

Keywords: academic quality, higher education, lecturer evaluation, teaching and learning processes

Procedia PDF Downloads 143
10766 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient model for classifying batches with a hatchability rate greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbours (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability. This grouping was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeders' age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, yielding the highest coefficient of determination (R²) of 94% and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
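As a hedged illustration of the binary classification step, a tiny k-nearest-neighbours classifier on synthetic egg features; the feature values and labels below are invented placeholders, not the study's data:

```python
# Minimal kNN sketch: label egg batches as high (1, hatchability >= 90%) or
# low (0) from extrinsic features. All training points are synthetic.
import math

def knn_predict(train, query, k=3):
    """train: list of (features, label); returns the majority label of the
    k nearest neighbours by Euclidean distance."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# (egg weight g, moisture loss %, shell thickness mm) -> class label
train = [
    ((55.0, 12.0, 0.35), 1), ((54.0, 11.5, 0.36), 1), ((56.0, 12.5, 0.34), 1),
    ((62.0, 9.0, 0.28), 0), ((63.0, 8.5, 0.27), 0), ((61.0, 9.5, 0.29), 0),
]
print(knn_predict(train, (55.5, 12.2, 0.35)))  # -> 1
```

The study's RF and CART models follow the same fit-then-classify pattern, only with ensemble trees in place of the distance vote.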

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 87
10765 Ensuring Quality in DevOps Culture

Authors: Sagar Jitendra Mahendrakar

Abstract:

Integrating quality assurance (QA) practices into DevOps culture has become increasingly important in modern software development environments. Collaboration, automation, and continuous feedback characterize the seamless integration of development and operations teams in DevOps to achieve rapid and reliable software delivery. In this context, quality assurance plays a key role in ensuring that software products meet the highest standards of quality, performance, and reliability throughout the development life cycle. This abstract explores key principles, challenges, and best practices related to quality assurance in a DevOps culture. It emphasizes the importance of shifting quality left, with quality control integrated into every step of the DevOps process. Automation is the cornerstone of DevOps quality assurance, enabling continuous testing, integration, and deployment and providing rapid feedback for early problem identification and resolution. The abstract also addresses the cultural and organizational challenges of implementing QA within DevOps, emphasizing the need to foster collaboration, break down silos, and nurture a culture of continuous improvement, as well as the importance of toolchain integration and skills development to support effective QA practices within DevOps environments. Overall, this work sits at the intersection of QA and DevOps culture, providing insights into how organizations can use DevOps principles to improve software quality, accelerate delivery, and meet the changing demands of today's dynamic software landscape.

Keywords: quality engineer, devops, automation, tool

Procedia PDF Downloads 58
10764 The Design and Implementation of an Enhanced 2D Mesh Switch

Authors: Manel Langar, Riad Bourguiba, Jaouhar Mouine

Abstract:

In this paper, we propose the design and implementation of an enhanced wormhole virtual-channel on-chip router. It is the heart of a mesh NoC using the XY deterministic routing algorithm. It is characterized by a simple virtual channel allocation strategy that reduces the area and complexity of connections without affecting performance. We implemented our router on a Tezzaron process to validate its performance. This router is a basic element that will later be used to design a 3D mesh NoC.
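A minimal sketch of the XY deterministic routing the router implements: a packet travels along the X dimension until its column matches the destination, then along Y. The coordinate convention and port names here are assumptions for illustration:

```python
# Minimal sketch of XY deterministic routing in a 2D mesh NoC: route fully
# in X first, then in Y. Dimension ordering makes the route deadlock-free.
# Port names (EAST/WEST/NORTH/SOUTH) are an assumed convention.

def xy_route(src, dst):
    """Return the hop-by-hop output-port decisions from src to dst."""
    x, y = src
    dx, dy = dst
    hops = []
    while x != dx:                      # X dimension first
        step = 1 if dx > x else -1
        x += step
        hops.append("EAST" if step == 1 else "WEST")
    while y != dy:                      # then Y dimension
        step = 1 if dy > y else -1
        y += step
        hops.append("NORTH" if step == 1 else "SOUTH")
    return hops

print(xy_route((0, 0), (2, 1)))
```

Because every router makes the same deterministic decision, no routing tables are needed, which is part of what keeps the virtual channel allocation simple.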

Keywords: NoC, mesh, router, 3D NoC

Procedia PDF Downloads 568
10763 Modeling of Glycine Transporters in Mammalian Using the Probability Approach

Authors: K. S. Zaytsev, Y. R. Nartsissov

Abstract:

Glycine is one of the key inhibitory neurotransmitters in the central nervous system (CNS), and glycinergic transmission is highly dependent on its appropriate reuptake from the synaptic cleft. Glycine transporters (GlyT) of types 1 and 2 are the enzymes providing glycine transport back into neuronal and glial cells along with Na⁺ and Cl⁻ co-transport. The distribution and stoichiometry of GlyT1 and GlyT2 differ in their details, and GlyT2 is the more interesting for this research as it reuptakes glycine into neurons, whereas GlyT1 is located in glial cells. During GlyT2 activity, the translocation of the amino acid is accompanied by the consecutive binding of one chloride and three sodium ions (two sodium ions for GlyT1). In the present study, we developed a computer simulator of GlyT2 and GlyT1 activity based on known experimental data for the quantitative estimation of membrane glycine transport. The functioning of a single protein was described using a probability approach in which each enzyme state is considered separately. The resulting scheme of transporter functioning, realized as a sequence of elemental steps, takes each event of substrate association and dissociation into account. Computer experiments using up-to-date kinetic parameters yielded the number of translocated glycine molecules and Na⁺ and Cl⁻ ions per time period. The flexibility of the developed software makes it possible to evaluate the glycine reuptake pattern over time under different internal characteristics of the enzyme's conformational transitions. We investigated the behavior of the system over a wide range of the equilibrium constant (from 0.2 to 100), which has not been determined experimentally. A significant influence of the equilibrium constant on the glycine transfer process is shown in the range from 0.2 to 10. Environmental conditions such as ion and glycine concentrations become decisive when the constant lies outside this range.
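A minimal sketch of the probability approach, assuming the transporter cycle can be written as a discrete-time master equation over the sequential binding states. The state list follows the abstract's GlyT2 stoichiometry (one Cl⁻, then three Na⁺, then glycine translocation), but the forward and backward rates are invented placeholders, not fitted kinetic constants:

```python
# Minimal sketch of the probability approach: each enzyme state carries a
# probability, and the master equation is stepped in discrete time around
# the transport cycle. Rates below are illustrative placeholders.

def step(p, fwd, back):
    """One explicit update of state probabilities along the cyclic scheme."""
    n = len(p)
    new = p[:]
    for i in range(n):
        j = (i + 1) % n                 # the last step re-primes the carrier
        flow = fwd * p[i] - back * p[j]
        new[i] -= flow
        new[j] += flow
    return new

# States: 0 empty, 1 +Cl-, 2 +1 Na+, 3 +2 Na+, 4 +3 Na+, 5 glycine carried
p = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
for _ in range(200):
    p = step(p, fwd=0.2, back=0.05)     # fwd/back ratio plays the role of
                                        # the equilibrium constant studied
print(round(sum(p), 6), round(max(p), 3))
```

Total probability is conserved at every step, and with uniform rates the cycle relaxes to a steady state whose net forward flux is the quantity the simulator reports as glycine molecules translocated per time period.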

Keywords: glycine, inhibitory neurotransmitters, probability approach, single protein functioning

Procedia PDF Downloads 119
10762 A Sustainable Approach for Waste Management: Automotive Waste Transformation into High Value Titanium Nitride Ceramic

Authors: Mohannad Mayyas, Farshid Pahlevani, Veena Sahajwalla

Abstract:

Automotive shredder residue (ASR) is an industrial waste generated during the recycling of end-of-life vehicles. The large and increasing production volumes of ASR and its hazardous content have raised concerns worldwide, leading some countries to impose more restrictions on ASR disposal and encouraging researchers to find efficient solutions for ASR processing. Although a great deal of research work has been carried out, all proposed solutions, to our knowledge, remain commercially and technically unproven. While the volume of waste materials continues to increase, the production of materials from new sustainable sources has become of great importance. Advanced ceramic materials such as nitrides, carbides, and borides are widely used in a variety of applications. Among these ceramics, a great deal of attention has recently been paid to titanium nitride (TiN) owing to its unique characteristics. In our study, we propose a new sustainable approach to ASR management in which TiN nanoparticles with an ideal particle size ranging from 200 to 315 nm can be synthesized as a by-product. In this approach, TiN is thermally synthesized by nitriding a pressed mixture of ASR incorporated with titanium oxide (TiO₂). Results indicated that TiO₂ influences and catalyses the degradation reactions of ASR and helps achieve fast and full decomposition. In addition, the process resulted in a titanium nitride (TiN) ceramic with several unique structures (porous nanostructured, polycrystalline, micro-spherical, and nano-sized structures) that were simply obtained by tuning the ratio of TiO₂ to ASR, and a product with an appreciable TiN content of around 85% was achieved after only one hour of nitridation at 1550 °C.

Keywords: automotive shredder residue, nano-ceramics, waste treatment, titanium nitride, thermal conversion

Procedia PDF Downloads 295
10761 Fabrication of Superhydrophobic Galvanized Steel by Sintering Zinc Nanopowder

Authors: Francisco Javier Montes Ruiz-Cabello, Guillermo Guerrero-Vacas, Sara Bermudez-Romero, Miguel Cabrerizo Vilchez, Miguel Angel Rodriguez-Valverde

Abstract:

Galvanized steel is one of the most widespread metallic materials used in industry. It consists of an iron-based alloy (steel) coated with a zinc layer of variable thickness. The zinc prevents the inner steel from corrosion and staining. Its production is cheaper than stainless steel, which is why it is employed in the construction of large-dimension structures in aeronautics, urban and industrial building, and ski resorts. In all these applications, turning the natural hydrophilicity of the metal surface into superhydrophobicity is particularly interesting and would open up a wide variety of additional functionalities. However, producing a superhydrophobic surface on galvanized steel can be a very difficult task. Superhydrophobic surfaces are characterized by a specific surface texture, which is obtained either by coating the surface with a material that incorporates such texture or by applying one of several roughening methods. Since galvanized steel is already a coated material, the incorporation of a second coating may be undesired. On the other hand, the methods recurrently used to create the surface texture leading to superhydrophobicity in metals are aggressive and may damage their surface. In this work, we used a novel strategy whose goal is to produce superhydrophobic galvanized steel by a two-step, non-aggressive process. The first step creates a hierarchical structure by sintering zinc nanoparticles onto the surface at a temperature slightly below the melting point of zinc. The second is hydrophobization by the deposition of a thick fluoropolymer layer. The wettability of the samples is characterized by tilting-plate and bouncing-drop experiments, while the roughness is analyzed by confocal microscopy. The durability of the produced surfaces was also explored.

Keywords: galvanized steel, superhydrophobic surfaces, sintering nanoparticles, zinc nanopowder

Procedia PDF Downloads 150
10760 Re-Constructing the Research Design: Dealing with Problems and Re-Establishing the Method in User-Centered Research

Authors: Kerem Rızvanoğlu, Serhat Güney, Emre Kızılkaya, Betül Aydoğan, Ayşegül Boyalı, Onurcan Güden

Abstract:

This study addresses the re-construction and implementation of the methodological framework developed to evaluate how locative media applications accompany the urban experiences of international students coming to Istanbul on exchange programs in 2022. The research design was built on a three-stage model. In the first stage, the research team conducted a qualitative questionnaire to gain exploratory data. These data were then used to form three persona groups representing the sample by applying cluster analysis. In the second stage, a semi-structured digital diary study was carried out with a gamified task list on a sample selected from the persona groups. This stage proved the most difficult for obtaining valid data from the participant group. The research team re-evaluated the design of this second phase in order to reach the participants who would perform the given tasks while sharing their momentary city experiences, to ensure a daily data flow over two weeks, and to increase the quality of the obtained data. The final stage, which elaborates on the findings, is the "Walk & Talk", completed with face-to-face, in-depth interviews. The multiple methods used in the research process have been seen to contribute to the depth and data diversity of research conducted in the context of urban experience and locative technologies. In addition, by adapting the research design to the experiences of the users included in the sample, the differences and similarities between the initial research design and the research as applied are shown.
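As a hedged illustration of the persona-forming cluster analysis in the first stage, a one-dimensional k-means sketch; the questionnaire scores, the single-feature setup, and k = 3 are illustrative assumptions, since the study clustered richer questionnaire data:

```python
# Minimal sketch of k-means (Lloyd's algorithm) in one dimension, grouping
# respondents into three persona clusters by a single questionnaire score.
# Scores and initial centers are invented placeholders.

def kmeans_1d(values, centers, iters=20):
    """Iteratively assign values to the nearest center and re-average."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

scores = [1.0, 1.2, 0.9, 5.0, 5.1, 4.8, 9.0, 9.2, 8.9]
centers, groups = kmeans_1d(scores, centers=[0.0, 5.0, 10.0])
print([round(c, 2) for c in centers], [len(g) for g in groups])
```

Each final center stands in for one persona profile used to select the diary-study sample.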

Keywords: digital diary study, gamification, multi-model research, persona analysis, research design for urban experience, user-centered research, “Walk & Talk”

Procedia PDF Downloads 171
10759 From Creativity to Innovation: Tracking Rejected Ideas

Authors: Lisete Barlach, Guilherme Ary Plonski

Abstract:

Innovative ideas are not always synonymous with business opportunities. Any idea can be creative yet not be recognized as a potential project in which money, time, and other resources will be invested. Even in firms that promote and enhance innovation, there are two 'check-points': the first corresponds to the acknowledgment of the idea as creative, and the second to its consideration as a business opportunity. Both the recognition of new business opportunities and the recognition of new ideas involve cognitive and psychological frameworks which provide individuals with a basis for noticing connections between seemingly independent events or trends, as if 'connecting the dots'. It also involves prototypes (representing the most typical member of a certain category) functioning as 'templates' for this recognition. There is a general assumption that these kinds of evaluation processes develop through experience, explaining why expertise plays a central role in the process: the more experienced professionals are, the easier it is for them to identify new opportunities in business. But, paradoxically, an increase in expertise can lead to inflexibility of thought due to the automation of procedures. Besides this, other cognitive biases can also be present, because new ideas or business opportunities generally depend on heuristics rather than on established algorithms. The paper presents a literature review of the Einstellung effect, tracking famous cases of rejected ideas extracted from historical records. It also presents the results of empirical research, with data on rejected ideas gathered from two different environments: projects rejected during the first semester of 2017 at a large incubator center in Sao Paulo, and ideas proposed by employees that were rejected by a well-known business company at its Brazilian headquarters. There is an implicit assumption that the Einstellung effect tends to be increasingly present today, due to time pressure on decision-making and the idea generation process. The analysis discusses desirability, viability, and feasibility as elements that affect decision-making.

Keywords: cognitive biases, Einstellung effect, recognition of business opportunities, rejected ideas

Procedia PDF Downloads 204
10758 The Effect of Metal Transfer Modes on Mechanical Properties of 3CR12 Stainless Steel

Authors: Abdullah Kaymakci, Daniel M. Madyira, Ntokozo Nkwanyana

Abstract:

The effect of metal transfer modes on the mechanical properties of welded 3CR12 stainless steel was investigated. This was achieved by butt welding 10 mm thick plates of 3CR12 while varying the welding positions to obtain different metal transfer modes. The ASME IX: 2010 (Welding and Brazing Qualifications) code was used as a basis for the welding variables. The material and thickness of the base metal were kept constant, together with the filler metal, shielding gas, and joint type. The effect of the metal transfer modes on the microstructure and mechanical properties of the 3CR12 steel was then investigated, as it was hypothesized that the change in welding positions would affect the transfer modes, partly due to the effect of gravity. Microscopic examination revealed that the substrate was characterized by a dual-phase microstructure, that is, alpha phase and beta phase grain structures. The spectroscopic examination results and the ferritic factor calculation showed that the microstructure was expected to be ferritic-martensitic during the air cooling process. The tensile strength and Charpy impact energy were measured to be 498 MPa and 102 J, in line with the mechanical properties given in the material certificate. The heat input to the material was observed to be greater than 1 kJ/mm, which is the limiting factor for grain growth during the welding process. Grain growth was observed in the heat affected zone of the welded materials, and a ferritic-martensitic microstructure was confirmed during the microscopic examination. The grain growth altered the mechanical properties of the test material. The globular downhand mode produced higher mechanical properties than the spray downhand mode, and globular vertical-up produced better mechanical properties than globular vertical-down.

Keywords: welding, metal transfer modes, stainless steel, microstructure, hardness, tensile strength

Procedia PDF Downloads 252
10757 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods

Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo

Abstract:

The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, helped drive the development of computational methods to do so. One such approach is the prediction of the tridimensional structure from the residue chain; however, this has been proven to be an NP-hard problem, due to the complexity of the process, explained by the Levinthal paradox. An alternative solution is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Due to their good results, artificial neural networks have become a standard method for predicting protein secondary structure. Recently published methods that use this technique have, in general, achieved a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, support vector machine prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method: the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). The developed ANN method uses the same training and testing process that Huang used to validate his method, comprising the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can directly compare the statistical results of each method.
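A minimal sketch of the Q3 metric cited above: the fraction of residues whose predicted state (helix H, strand E, or coil C) matches the observed one. The sequences below are toy examples, not CB513 entries:

```python
# Minimal sketch of the Q3 accuracy metric for secondary structure
# prediction: per-residue agreement over the three states H, E, C.

def q3(predicted, observed):
    """Fraction of residues whose predicted state matches the observed one."""
    assert len(predicted) == len(observed)
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

print(q3("HHHECCCH", "HHHECCCC"))  # 7 of 8 residues correct -> 0.875
```

In a three-fold cross-validation as used with CB513, Q3 would be averaged over the held-out folds for each method before comparing ANN against SVM.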

Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines

Procedia PDF Downloads 621
10756 Hybrid Solutions in Physicochemical Processes for the Removal of Turbidity in Andean Reservoirs

Authors: María Cárdenas Gaudry, Gonzalo Ramces Fano Miranda

Abstract:

Sediment removal is very important in the purification of water, not only for reasons of visual perception but also because of its association with odor and taste problems. The Cuchoquesera reservoir, in the Andean region of Ayacucho (Peru) at an altitude of 3,740 meters above sea level, visibly contains suspended particles and organic impurities, indicating water of dubious quality for direct human consumption. In order to quantify the degree of impurity, water quality monitoring was carried out from February to August 2018, with four sampling stations established in the reservoir. The measured parameters were electrical conductivity, total dissolved solids, pH, color, turbidity, and sludge volume. The indicators of the studied parameters exceed the permissible limits, except for electrical conductivity (190 μS/cm) and total dissolved solids (255 mg/L). In this investigation, the best combination and optimal doses of reagents for removing sediments from the waters of the Cuchoquesera reservoir were determined through the physicochemical process of coagulation-flocculation. In order to improve this process during the rainy season, six combinations of reagents were evaluated, made up of three coagulants (ferric chloride, ferrous sulfate, and aluminum sulfate) and two natural flocculants: prickly pear powder (Opuntia ficus-indica) and tara gum (Caesalpinia spinoza). For each combination of reagents, jar tests were carried out following a central composite experimental design (CCED), where the design factors were the doses of coagulant and flocculant and the initial turbidity.
The results of the jar tests were fitted to mathematical models, showing that treating the water of the Cuchoquesera reservoir, with a turbidity of 150 NTU and a color of 137 U Pt-Co, requires 27.9 mg/L of the coagulant aluminum sulfate with 3 mg/L of the natural tara gum flocculant to produce purified water with a turbidity of 1.7 NTU and an apparent color of 3.2 U Pt-Co. The estimated cost of this dose of coagulant and flocculant was 0.22 USD/m³. This shows how "grey-green" technologies can be combined in nature-based solutions for water treatment, in this case to achieve potability, making it more sustainable, especially economically, when green technology is available at the site where the nature-based hybrid solution is applied. This research demonstrates the compatibility of natural coagulants/flocculants with other treatment technologies in an integrated/hybrid treatment process, including the possibility of hybridizing natural coagulants with other types of coagulants.
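As a hedged illustration of fitting jar-test results to a mathematical model, a closed-form simple linear regression of residual turbidity against coagulant dose; the data points below are invented placeholders, and the study's actual models were multivariate fits over the CCED factors (coagulant dose, flocculant dose, initial turbidity):

```python
# Minimal sketch: ordinary least squares on jar-test data, residual
# turbidity (NTU) versus coagulant dose (mg/L). Data are illustrative.

def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

doses = [10.0, 15.0, 20.0, 25.0, 30.0]   # assumed coagulant doses, mg/L
ntu = [60.0, 40.0, 25.0, 12.0, 3.0]      # assumed residual turbidity, NTU
m, b = fit_line(doses, ntu)
print(round(m, 2), round(b, 2))
```

Inverting such a fitted model is what lets the optimal dose (here, the study's 27.9 mg/L of aluminum sulfate) be read off for a target residual turbidity.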

Keywords: prickly pear powder, tara gum, nature-based solutions, aluminum sulfate, jar test, turbidity, coagulation, flocculation

Procedia PDF Downloads 108