Search results for: Mutare urban timber processing factories

138 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density

Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita

Abstract:

Herein, the synthesis and characterization of highly porous chitosan-gelatin scaffolds reinforced with graphene oxide and hydroxyapatite (HAp) and crosslinked with genipin were targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to their intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, owing to its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability to sustain cellular proliferation by promoting a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class cross-linker, while graphene oxide (GO) is viewed as one of the most performant and versatile fillers. The composites, with the natural bone HAp/biopolymer ratio, were obtained by cascaded sonochemical treatments, followed by straightforward casting and freeze-drying. Their structure was characterized by Fourier Transform Infrared Spectroscopy (FTIR) and X-ray Diffraction (XRD), while the overall morphology was investigated by Scanning Electron Microscopy (SEM) and micro-Computer Tomography (µ-CT). Following that, in vitro enzyme degradation was performed to detect the most promising compositions for the development of in vivo assays. Suitable GO dispersion within the biopolymer mix was ascertained, as the nanolayers' specific signals are absent from both the FTIR and XRD spectra, while the characteristic spectral features of the polymers persisted as the GO load increased. Overall, the GO-induced material structuration, crystallinity variations, and chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution within the scaffold follows an auspicious density gradient tuned for hybrid osseous/cartilage architectures, which was mirrored in the mouse model tests. Hence, the synthesis route of a natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705 and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from European Regional Development Fund 'Investments for your future' under the project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to European Regional Development Fund through Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED.

Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite

Procedia PDF Downloads 104
137 Digital Health During a Pandemic: Critical Analysis of the COVID-19 Contact Tracing Apps

Authors: Mohanad Elemary, Imose Itua, Rajeswari B. Matam

Abstract:

Virologists and public health experts have been predicting potential pandemics from coronaviruses for decades. The viruses which caused the SARS and MERS outbreaks and the Nipah virus led to many lost lives, but still, the COVID-19 pandemic caused by the SARS-CoV-2 virus surprised many scientific communities, experts, and governments with its ease of transmission and its pathogenicity. Governments of various countries reacted by locking down entire populations to their homes to combat the devastation caused by the virus, which led to a loss of livelihood and economic hardship for many individuals and organizations. To revive national economies and support their citizens in resuming their lives, governments focused on the development and use of contact tracing apps as a digital way to track and trace exposure. Google and Apple introduced the Exposure Notification System (ENS) framework. Independent organizations and countries also developed different frameworks for contact tracing apps. The efficiency, popularity, and adoption rate of these various apps have differed across countries. In this paper, we present a critical analysis of the different contact tracing apps with respect to their efficiency, adoption rate, and general perception, and of the governmental strategies and policies which led to the development of the applications. Among the European countries, each followed an individualistic approach to the same problem, resulting in different realizations of a similarly functioning application with differing results of use and acceptance. The study conducted an extensive review of existing literature, policies, and reports across multiple disciplines, from which a framework was developed and then validated through interviews with six key stakeholders in the field, including founders and executives in digital health startups and corporates as well as experts from international organizations like the World Health Organization. A framework of best practices and tactics is the result of this research. The framework looks at three main questions regarding contact tracing apps: how to develop them, how to deploy them, and how to regulate them. The findings are based on the best practices applied by governments across multiple countries, the mistakes they made, and the best practices applied in similar situations in the business world. The findings include multiple strategies for the development milestone, regarding establishing frameworks for cooperation with the private sector and how to design the features and user experience of the app for a transparent, effective, and rapidly adaptable app. For the deployment section, several tactics are discussed regarding communication messages, marketing campaigns, persuasive psychology, and initial deployment scale strategies. The paper also discusses the data privacy dilemma and how to build a more sustainable system of health-related data processing and utilization. This is done through principles-based regulations specific to health data that allow it to be used for the public good. This framework offers insights into strategies and tactics that could be implemented as protocols for future public health crises and emergencies, whether global or regional.

Keywords: contact tracing apps, COVID-19, digital health applications, exposure notification system

Procedia PDF Downloads 135
136 Additional Opportunities of Forensic Medical Identification of Dead Bodies of Unknown Persons

Authors: Saule Mussabekova

Abstract:

A number of chemical elements that are widespread in nature are seldom found in humans, and vice versa. This is a peculiarity of how the body accumulates elements and uses them selectively, regardless of widely varying parameters of the external environment. Microelemental analysis of human hair, particularly from dead bodies, is a new step in the development of modern forensic medicine, which needs reliable criteria for identifying a person. Given the long-standing technogenic pressure on large industrial cities and the region-specific, multi-factor toxic effects of many industrial enterprises, it is important to assess the relevance and role of human hair studies in evaluating the degree of deposition of specific pollutants. Hair is a highly sensitive biological indicator that makes it possible to assess the ecological situation and to regionalize large territories by geochemical methods. In addition, monitoring the concentrations of chemical elements across the regions of Kazakhstan makes it possible to use these data in the forensic medical identification of the dead bodies of unknown persons. Methods based on determining the chemical composition of hair, with subsequent computer processing, allowed the data obtained to be compared with average values for sex and age, and causally significant deviations to be revealed. This makes it possible to preliminarily infer the person's region of residence, concentrating police search efforts for people who are unaccounted for, and to carry out purposeful legal actions for further identification by creating a more optimal and strictly individual personal identification scheme. Hair is the most suitable material for forensic research, as it can be stored long-term without time limitations or special equipment. Moreover, quantitative microelement analysis correlates well with the level of environmental pollution, reflects occupational diseases, and helps with pinpoint accuracy not only to determine a person's region of temporary residence but also to establish the regions of their migration. Peculiarities of the elemental composition of human hair were established, regardless of age and sex, for persons residing in particular territories of Kazakhstan. Data on the average content of 29 chemical elements in the hair of the population in different regions of Kazakhstan were systematized. Concentration coefficients of the studied elements in hair, relative to the average values across the region, were calculated for each region. Groups of regions with a specific spectrum of elements were identified; these elements accumulate in hair in quantities exceeding the average indexes. Our results showed significant differences in the concentrations of chemical elements among the studied groups and demonstrated that the population of Kazakhstan is exposed to different toxic substances, depending on the atmospheric emissions of the industrial enterprises dominating each region. The research showed that the elemental composition of the hair of people residing in different regions of Kazakhstan reflects the technogenic spectrum of elements.
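
The concentration coefficient described above is a simple ratio of the measured value to a regional average; the sketch below illustrates it in Python with hypothetical element values, since the study's regional reference data for Kazakhstan are not reproduced here.

```python
# Minimal sketch of the concentration-coefficient computation; all values
# are hypothetical illustrations, not the study's reference data.
hair_sample = {"Zn": 185.0, "Pb": 4.2, "Cu": 11.5}    # µg/g, measured in hair
regional_mean = {"Zn": 170.0, "Pb": 1.1, "Cu": 10.8}  # µg/g, regional averages

# Concentration coefficient: measured value relative to the regional average.
coefficients = {el: hair_sample[el] / regional_mean[el] for el in hair_sample}

# Elements accumulated well above the regional norm hint at the likely region
# of residence; the 2x threshold here is an illustrative choice.
flagged = {el: k for el, k in coefficients.items() if k > 2.0}
print(coefficients)  # e.g. Pb comes out near 3.8x the regional average
print(flagged)
```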

Keywords: analysis of elemental composition of hair, forensic medical research of hair, identification of unknown dead bodies, microelements

Procedia PDF Downloads 142
135 Relationship Between Brain Entropy Patterns Estimated by Resting State fMRI and Child Behaviour

Authors: Sonia Boscenco, Zihan Wang, Euclides José de Mendoça Filho, João Paulo Hoppe, Irina Pokhvisneva, Geoffrey B.C. Hall, Michael J. Meaney, Patricia Pelufo Silveira

Abstract:

Entropy can be described as a measure of the number of states of a system, and when used in the context of physiological time-based signals, it serves as a measure of complexity. In functional connectivity data, entropy can account for the moment-to-moment variability that is neglected in traditional functional magnetic resonance imaging (fMRI) analyses. While brain fMRI resting state entropy has been associated with some pathological conditions like schizophrenia, no investigations have explored the association between brain entropy measures and individual differences in child behavior in healthy children. We describe a novel exploratory approach to evaluate brain fMRI resting state data in two child cohorts, MAVAN (N = 54, 4.5 years, 48% males) and GUSTO (N = 206, 4.5 years, 48% males), and its associations with child behavior, which can be used in future research in the context of child exposures and long-term health. Following rs-fMRI data pre-processing and Shannon entropy calculation across 32 network regions of interest to acquire 496 unique functional connections, partial correlation coefficient analysis adjusted for sex was performed to identify associations between entropy data and Strengths and Difficulties Questionnaire domains in MAVAN and Child Behavior Checklist domains in GUSTO. Significance was set at p < 0.01, and we found eight significant associations in GUSTO. Negative associations were found between oppositional defiant problems and two frontoparietal-cerebellar posterior connections (r = -0.212, p = 0.006 and r = -0.200, p = 0.009). Positive associations were identified between somatic complaints and four default mode connections: salience insula (r = 0.202, p < 0.01), dorsal attention intraparietal sulcus (r = 0.231, p = 0.003), language inferior frontal gyrus (r = 0.207, p = 0.008), and language posterior superior temporal gyrus (r = 0.210, p = 0.008). Positive associations were also found between the insula-frontoparietal connection and attention deficit/hyperactivity problems (r = 0.200, p < 0.01), and between the insula-default mode connection and pervasive developmental problems (r = 0.210, p = 0.007). In MAVAN, ten significant associations were identified. Two positive associations were found with prosocial scores: the salience prefrontal cortex-dorsal attention connection (r = 0.474, p = 0.005) and the salience supramarginal gyrus-dorsal attention intraparietal sulcus connection (r = 0.447, p = 0.008). The insula-prefrontal connection was negatively associated with peer problems (r = -0.437, p < 0.01). Conduct problems were negatively associated with six separate connections, including the left salience insula-right salience insula (r = -0.449, p = 0.008), left salience insula-right salience supramarginal gyrus (r = -0.512, p = 0.002), default mode-visual network (r = -0.444, p = 0.009), dorsal attention-language network (r = -0.490, p = 0.003), and default mode-posterior parietal cortex (r = -0.546, p = 0.001) connections. Entropy measures of resting state functional connectivity can be used to identify individual differences in brain function that are correlated with variation in behavioral problems in healthy children. Further studies applying this marker in the context of environmental exposures are warranted.
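
As a rough illustration of the entropy pipeline described above, the Python sketch below computes histogram-based Shannon entropy over sliding-window correlations for all C(32, 2) = 496 region pairs. The windowing scheme, bin count, and simulated signals are assumptions for demonstration, not the study's exact preprocessing.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_rois, n_tp = 32, 200
ts = rng.standard_normal((n_rois, n_tp))  # stand-in for preprocessed ROI time series

def shannon_entropy(x, bins=16):
    """Histogram-based Shannon entropy of a 1-D signal, in bits."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

pairs = list(combinations(range(n_rois), 2))
assert len(pairs) == 496  # C(32, 2) unique functional connections

# Sliding-window correlation per ROI pair, then the entropy of that series.
win, step = 30, 5
entropy = {}
for i, j in pairs:
    corr = [np.corrcoef(ts[i, s:s + win], ts[j, s:s + win])[0, 1]
            for s in range(0, n_tp - win, step)]
    entropy[(i, j)] = shannon_entropy(np.asarray(corr))
```

Partial correlations adjusted for sex, as in the study, can then be computed by regressing the covariate out of both the entropy values and the behavior scores and correlating the residuals.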

Keywords: child behaviour, functional connectivity, imaging, Shannon entropy

Procedia PDF Downloads 202
134 Bio-Inspired Information Complexity Management: From Ant Colony to Construction Firm

Authors: Hamza Saeed, Khurram Iqbal Ahmad Khan

Abstract:

Effective information management is crucial for any construction project and its success. The primary areas of information generation are the construction site and the design office. Different types of information are required at different stages of construction, involving various stakeholders and creating complexity. There is a need for effective management of information flows to reduce the uncertainty that creates this complexity. Nature provides a unique perspective on dealing with complexity, in particular, information complexity. System dynamics methodology provides modeling and simulation tools and techniques to address complexity. Nature has been dealing with complex systems since its creation 4.5 billion years ago. It has perfected its system through evolution, resilience towards sudden changes, and the extinction of unadaptable and outdated species that are no longer fit for the environment. Nature has been accommodating changing factors and handling complexity forever, and humans have started to look at their natural counterparts for inspiration and solutions to their problems. This brings forth the possibility of using a biomimetics approach to improve the management practices used in the construction sector. Ants inhabit diverse habitats: Cataglyphis and Pogonomyrmex live in deserts, leafcutter ants reside in rainforests, and pharaoh ants are native to urban developments of tropical areas. Detailed studies have been done on fifty species out of the fourteen thousand discovered. They provide the opportunity to study the interactions that generate collective behavior in diverse environments. Animals evolve to better adapt to their environment. The collective behavior of ants emerges from feedback through interactions among individuals, based on a combination of three basic factors: the patchiness of resources in time and space; operating cost; and environmental stability and the threat of rupture. If resources appear in patches through time and space, the response is accelerating and non-linear, and if resources are scattered, the response follows a linear pattern. If the acquisition of energy through food is faster than the energy spent to get it, the default is to continue with an activity unless it is halted for some reason; if the energy spent is higher than the energy gained, the default changes to staying put unless activated. Finally, if the environment is stable and the threat of rupture is low, the activation and amplification rate is slow but steady; otherwise, it is fast and sporadic. To further study these effects and to eliminate environmental bias, the behavior of four different ant species was studied, namely red harvester ants (Pogonomyrmex barbatus), Argentine ants (Linepithema humile), turtle ants (Cephalotes goniodontus), and leafcutter ants (genus Atta). This study aims to improve the information system in the construction sector by providing a guideline inspired by nature with a systems-thinking approach, using system dynamics as a tool. Identified factors and their interdependencies were analyzed in the form of a causal loop diagram (CLD), and construction industry professionals were interviewed based on the developed CLD, which was validated by their responses. These factors and interdependencies in the natural system correspond with those in man-made systems, providing a guideline for the effective use and flow of information.
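
The three feedback factors described above can be read as a simple decision rule. The toy Python sketch below is one hedged interpretation of that rule; the functional forms and constants are illustrative assumptions, not parameters from the ant-colony studies the authors draw on.

```python
# Illustrative translation of the three feedback factors into a toy activation
# rule; forms and constants are assumptions made for demonstration only.
def foraging_response(encounters, patchy, net_energy_gain, stable_env):
    """Return a foraging-activity level given recent food encounters."""
    # Factor 1: patchy resources -> accelerating (non-linear) response;
    # scattered resources -> linear response.
    drive = encounters ** 2 if patchy else encounters
    # Factor 2: if energy spent exceeds energy gained, default to staying put.
    if net_energy_gain <= 0:
        drive *= 0.1
    # Factor 3: stable environment -> slow but steady amplification;
    # unstable environment -> fast, sporadic amplification.
    gain = 0.2 if stable_env else 1.0
    return gain * drive

print(foraging_response(encounters=4, patchy=True, net_energy_gain=1.0, stable_env=True))    # 3.2
print(foraging_response(encounters=4, patchy=False, net_energy_gain=-0.5, stable_env=False))  # 0.4
```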

Keywords: biomimetics, complex systems, construction management, information management, system dynamics

Procedia PDF Downloads 137
133 Membrane Technologies for Obtaining Bioactive Fractions from Blood Main Protein: An Exploratory Study for Industrial Application

Authors: Fatima Arrutia, Francisco Amador Riera

Abstract:

The meat industry generates large volumes of blood as a result of meat processing. Several industrial procedures have been implemented to treat this by-product, but they are focused on the production of low-value products, and in many cases, blood is simply discarded as waste. In addition to economic interests, there is an environmental concern due to bloodborne pathogens and other chemical contaminants found in blood. Consequently, there is a dire need to find extensive uses for blood that are both applicable at industrial scale and able to yield high value-added products. Blood has been recognized as an important source of protein, and the main blood serum protein in mammals is serum albumin. One of the top trends in the food market is functional foods. Among them, bioactive peptides can be obtained from protein sources by microbiological fermentation or enzymatic and chemical hydrolysis. Bioactive peptides are short amino acid sequences that can have a positive impact on health when administered. The main drawback of bioactive peptide production is the high cost of the isolation, purification, and characterization techniques (such as chromatography and mass spectrometry), which makes scale-up unaffordable. Membrane technologies, on the other hand, are very suitable for industrial application because they scale up easily and are low-cost compared to other traditional separation methods. In this work, the possibility of obtaining bioactive peptide fractions from serum albumin by means of a simple two-step procedure (hydrolysis and membrane filtration) was evaluated, as an exploratory study for possible industrial application. The methodology was, firstly, a tryptic hydrolysis of serum albumin in order to release the peptides from the protein. The protein was previously subjected to a thermal treatment in order to enhance enzyme cleavage and thus the peptide yield. The obtained hydrolysate was then filtered through a nanofiltration/ultrafiltration flat rig at three different pH values with two different membrane materials, so as to compare membrane performance. The corresponding permeates were analyzed by liquid chromatography-tandem mass spectrometry to obtain the peptide sequences present in each permeate. Finally, different concentrations of every permeate were evaluated for their in vitro antihypertensive and antioxidant activities through ACE-inhibition and DPPH radical scavenging tests. The hydrolysis process with the prior thermal treatment allowed a degree of hydrolysis of 49.66% of the maximum possible to be achieved. It was found that peptides were best transmitted to the permeate stream at pH values that corresponded to their isoelectric points. The best selectivity between peptide groups was achieved at basic pH values. Differences in peptide content were found between membranes and also between pH values for the same membrane. The antioxidant activity of all permeates was high compared with the control only at the highest dose. However, antihypertensive activity was best at intermediate concentrations, rather than at higher or lower doses. Therefore, despite the differences between them, all permeates were promising with regard to antihypertensive and antioxidant properties.
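
For reference, the degree of hydrolysis (DH) reported above is conventionally defined as the fraction of peptide bonds cleaved; the formulation below assumes the authors follow this standard definition, which the abstract does not spell out.

```latex
% Degree of hydrolysis as commonly defined for protein hydrolysates;
% the study reports DH as a percentage of the maximum achievable value.
\[
\mathrm{DH}\,(\%) = \frac{h}{h_{\mathrm{tot}}} \times 100
\]
% where $h$ is the number of peptide bonds cleaved during hydrolysis and
% $h_{\mathrm{tot}}$ is the total number of peptide bonds in the substrate.
```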

Keywords: bioactive peptides, bovine serum albumin, hydrolysis, membrane filtration

Procedia PDF Downloads 199
132 Method of Nursing Education: History Review

Authors: Cristina Maria Mendoza Sanchez, Maria Angeles Navarro Perán

Abstract:

Introduction: Nursing as a profession, from its initial formation and throughout its development in practice, has been built and identified mainly around its technical competence and professionalization within the positivist approach of the 19th century, which provides a conception of disease based on the biomedical paradigm, where the care provided focuses more on physiological processes and the disease than on the suffering person understood as a whole. The main issue in need of study here is a review of the nursing profession's history to understand what the profession was like before the 19th century. It is unclear whether there were organizations or people with knowledge about looking after others, or whether many people simply survived by chance. Holistic care, in which the appearance of disease directly affects all of a person's dimensions (physical, emotional, cognitive, social, and spiritual), is not a concept of the 21st century; it has been common practice, most probably since established life in this world, with the final purpose of covering all these perspectives through quality care. Objective: In this paper, we describe and analyze the history of nursing education, reviewing and analyzing the theoretical foundations of clinical teaching and learning in nursing, with the final purpose of determining and describing the development of the nursing profession throughout history. Method: We conducted a descriptive systematic review, systematically searching for manuscripts and articles in the following health science databases: PubMed, Scopus, Web of Science, Temperamentvm, and CINAHL. Articles were selected according to PRISMA criteria, with a critical reading of the full texts using the CASPe method. To complement this, we read a range of historical and contemporary sources to support the review, such as the manuals of Florence Nightingale and Saint John of God, as primary manuscripts to establish the origin of modern nursing and its professionalization. Ethical considerations of data processing were considered and applied. Results: After applying inclusion and exclusion criteria to our searches in PubMed, Scopus, Web of Science, Temperamentvm, and CINAHL, we obtained 51 research articles, which we analyzed and distinguished by year of publication and type of study. With the articles obtained, we can see the importance of our background as a profession in public health before modern times and the value of reviewing our past to face challenges in the near future. Discussion: The important influence of key figures other than Nightingale has been overlooked, and it emerges that nursing management and the development of the professional body have a longer and more complex history than is generally accepted. Conclusions: There is a paucity of studies on the subject of this review from which to extract precise evidence and recommendations about nursing before modern times. Even so, and as a more representative finding, an increase in research on nursing history has been observed. In light of the aspects analyzed, the need for new research into the history of nursing emerges, in order to germinate studies of the historical construction of care before the 19th century and the theories created then. We can affirm that knowledge and ways of caring were taught before the 19th century, but they were not called theories, as these concepts were created in modern times.

Keywords: nursing history, nursing theory, Saint John of God, Florence Nightingale, learning, nursing education

Procedia PDF Downloads 112
131 Learning Recomposition after the Remote Period with Finalist Students of the Technical Course in the Environment of the IFPA, Paragominas Campus, Pará State, Brazilian Amazon

Authors: Liz Carmem Silva-Pereira, Raffael Alencar Mesquita Rodrigues, Francisco Helton Mendes Barbosa, Emerson de Freitas Ferreira

Abstract:

Due to the Covid-19 pandemic declared in March 2020 by the World Health Organization, the way of social coexistence across the planet was affected, especially educational processes, through the implementation of the remote modality as a teaching strategy. This teaching-learning modality changed the routine and learning of basic education students, which had serious consequences for the return to face-to-face teaching in 2021. The finalist students of 2022 at the Federal Institute of Education, Science and Technology of Pará (IFPA), Campus Paragominas, had their training process severely affected, having completed the initial half of their training in the remote modality, which compromised the practical classes, technical visits, and field classes essential to the training of environmental technicians. With the objective of promoting the recomposition of these students' learning after the return to the face-to-face modality, an educational strategy was developed in the last period of the course. Research as an educational principle, the integrative project, and parallel recovery were applied jointly as teaching methodologies, aiming at recomposing basic knowledge of the natural sciences together with the technical knowledge of the environmental area applied to the course. The project assisted 58 finalist students of the environmental technical course. A research instrument with environmental quality evaluation parameters was elaborated for the study of 19 collection points in the Uraim River urban hydrographic basin, in Paragominas City, Pará, Brazilian Amazon. Students were separated into groups under the supervision of professors and laboratory assistants, and in the field, they observed and evaluated the environmental conditions of the sites and collected physical data and water samples, which were taken to the chemistry and biology laboratories at Campus Paragominas for further analysis. With the results obtained, each group prepared a technical report on the environmental conditions of each evaluated point. This work methodology enabled the practical application of theoretical knowledge received in various disciplines during the remote teaching modality, contemplating the integration of knowledge, people, skills, and abilities for the best technical training of the finalist students. At the end of the activity, the satisfaction of the students involved in the project was evaluated through a form, with signed informed consent, using a Likert scale as the evaluation parameter. The satisfaction survey results were: 82% satisfaction with the use of research projects within the disciplines attended; 84% with the revision of contents during the execution of the project; 76.9% with the field experience acquired; 86.2% with the laboratory experience; and 71.8% with the use of this methodology as parallel recovery. In addition to the excellent performance of the students in acquiring knowledge, it was possible to remedy the deficiencies caused by the absence of practical classes, technical visits, and field classes during the remote teaching modality, fulfilling the desired educational recomposition.

Keywords: integrative project, parallel recovery, research as an educational principle, teaching-learning

Procedia PDF Downloads 65
130 Ethanolamine Detection with Composite Films

Authors: S. A. Krutovertsev, A. E. Tarasova, L. S. Krutovertseva, O. M. Ivanova

Abstract:

The aim of this work was to obtain stable films with good sensitivity to ethanolamine (C2H7NO) in air. Ethanolamine is used as an adsorbent in different gas purification and separation processes, and it also has wide industrial applications. Chemical sensors of the sorption type are widely used for gas analysis; their behavior is determined by the characteristics of the sensitive sorption layer. The forming conditions and characteristics of chemical gas sensors based on nanostructured modified silica films activated by different admixtures have been studied. As additives, molybdenum-containing polyoxometalates of the eighteenth series were incorporated into the silica films. The method of hydrolytic polycondensation from tetraethyl orthosilicate solutions was used for forming such films in this work. The method's advantage is the possibility of introducing active additives directly into the initial solution, and it enables sensitive thin films with a high specific surface area to be obtained at room temperature. Their particular properties make polyoxometalates attractive as active additives for the formation of gas-sensitive films. As catalysts of different redox processes, they can either accelerate the reaction of the matrix with the analyzed gas or interact with it themselves, resulting in changes in the matrix's electrical properties. Polyoxometalate-based films were deposited on test structures manufactured by planar microelectronic technology with interdigitated electrodes. Modified silica films were deposited by a casting method from solutions based on tetraethyl orthosilicate and polyoxometalates, with the polyoxometalates incorporated directly into the initial solutions. The composite nanostructured films were deposited by the drop-casting method on test structures with a pair of interdigitated metal electrodes formed on their surface. The sensor's active area was 4.0 x 4.0 mm, and the electrode gap was 0.08 mm. The surface morphology of the layers was studied with a Solver P47 scanning probe microscope (NT-MDT, Russia), and the infrared spectra were investigated with a Bruker EQUINOX 55 (Germany). The conditions of film formation were varied during the tests. Electrical parameters of the sensors were measured electronically in real-time mode. The films had a highly developed surface of 450 m2/g and nanoscale pores, and their thickness was 0.2-0.3 µm. The study shows that environmental conditions markedly affect the sensor characteristics, which can be improved by choosing the right forming and processing procedure. Adding polyoxometalate to the silica film stabilized the film mass and markedly changed its electrophysical characteristics. Incorporating Mn3P2Mo18O62 into the silica film resulted in good sensitivity and selectivity to ethanolamine. Maximum sensitivity was observed at a doping additive weight content of 30-50% in the matrix. As the ethanolamine concentration changed from 0 to 100 ppm, the films' conductivity increased 10-12 times. The increase in the sensor's sensitivity is attributed to the complexation of the analyte with the cationic part of the polyoxometalate, which triggers an intramolecular redox reaction that sharply changes the electrophysical properties of the polyoxometalate. This process is reversible and takes place at room temperature.
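
As a small illustration of how the reported response could be expressed, the sketch below converts two resistance readings into a conductivity ratio; the resistance values are invented so as to land inside the 10-12x range the authors report, and do not come from the study.

```python
# Hypothetical raw measurements; for a fixed electrode geometry, conductivity
# is inversely proportional to resistance, so the conductivity ratio reduces
# to R_air / R_gas.
r_air = 12.0e6  # film resistance in clean air, ohms (illustrative)
r_gas = 1.1e6   # film resistance at 100 ppm ethanolamine, ohms (illustrative)

response = r_air / r_gas
print(f"conductivity increase: {response:.1f}x")  # ~10.9x, within the reported 10-12x
```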

Keywords: ethanolamine, gas analysis, polyoxometalate, silica film

Procedia PDF Downloads 210
129 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classrooms. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze, and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process science representations first needed to be identified, which necessitated the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM education scholars from three US universities (NSF award 1540888), which utilized mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRSoft were exported as an Excel file, with 80 each of the 2D Wedge and Dash models (dash) and the 3D Stick and Ball models (BL). Complexity data were stored in an Excel workbook separated by participant (ID), containing information for both task types. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed; 99.7% of its predictions were accurate. The model determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements analyzed with an ANN, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, thereby improving science literacy.
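
The study built its model in RapidMiner Studio; as a hedged approximation, the scikit-learn sketch below reproduces the stated ensemble configuration (140 trees, maximum depth 7) on simulated stand-in features, since the LENS dataset itself is not available here.

```python
# Gradient Boosted Trees with the configuration stated above, on fake data;
# features loosely mirror the named predictors (problem number, response
# time, optode signals). Labels are synthetic for demonstration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.integers(1, 161, n),       # problem number (80 dash + 80 BL items)
    rng.gamma(2.0, 1.5, n),        # response time in seconds
    rng.standard_normal((n, 16)),  # hemodynamic features, e.g. optodes 1-16
])
y = (X[:, 1] < 3.0).astype(int)    # toy label: "successful mental rotation"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=140, max_depth=7).fit(X_tr, y_tr)
print(model.score(X_te, y_te))
print(model.feature_importances_[:3])  # which inputs drive the predictions
```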

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 119
128 Pedagogical Opportunities of Physics Education Technology Interactive Simulations for Secondary Science Education in Bangladesh

Authors: Mohosina Jabin Toma, Gerald Tembrevilla, Marina Milner-Bolotin

Abstract:

Science education in Bangladesh is losing its appeal at an alarming rate due to the lack of science laboratory equipment, excessive teacher-student ratios, and outdated teaching strategies. Research-based educational technologies aim to address some of the problems faced by teachers who have limited access to laboratory resources, like many Bangladeshi teachers. The Physics Education Technology (PhET) research team has been developing science and mathematics interactive simulations to help students develop deeper conceptual understanding, yet PhET simulations are rarely used in Bangladesh. The purpose of this study is to explore Bangladeshi teachers' challenges in learning to implement PhET-enhanced pedagogies and to examine teachers' views on PhET's pedagogical opportunities in secondary science education. Since it is a new technology for Bangladesh, seven workshops on PhET were conducted in Dhaka city for 129 in-service and pre-service teachers in the winter of 2023, prior to data collection. This study followed an explanatory mixed-method approach that included a pre- and post-workshop survey and five semi-structured interviews. Teachers participated in the workshops voluntarily and shared their experiences at the end. Teachers' challenges were also identified from workshop discussions and observations. The interviews took place three to four weeks after the workshops and shed light on teachers' experiences of using PhET in actual classroom settings. The results suggest that teachers had difficulty handling the new technology; hence, they recommended preparing a booklet and Bengali YouTube videos on PhET to assist them in overcoming their struggles. Teachers also faced challenges in using any inquiry-based learning approach due to the content-loaded curriculum and exam-oriented education system, as well as limited experience with inquiry-based education. The short duration of classes makes it difficult for them to design PhET activities. Furthermore, considering the limited access to computers and the internet in schools, teachers think PhET simulations can bring positive changes if used in homework activities. Teachers also think they lack the pedagogical skills and sound content knowledge needed to take full advantage of PhET. They highly appreciated the workshops and proposed that the government design teacher training modules on how to incorporate PhET simulations. Despite all the challenges, teachers believe PhET can enhance student learning, ensure student engagement, and increase student interest in STEM education. Considering the lack of science laboratory equipment, teachers recognized the potential of PhET as a supplement to hands-on activities for secondary science education in Bangladesh. They believed that if PhET develops more curriculum-relevant simulations, it will bring revolutionary changes to how Bangladeshi students learn science. All the participating teachers in this study came from two organizations, and all the workshops took place in urban areas; therefore, the findings cannot be generalized to all secondary science teachers. A nationwide study is required to include teachers from diverse backgrounds. A further study could shed light on how building a professional learning community can lessen teachers' challenges in incorporating PhET-enhanced pedagogy in their teaching.

Keywords: educational technology, inquiry-based learning, PhET interactive simulations, PhET-enhanced pedagogies, science education, science laboratory equipment, teacher professional development

Procedia PDF Downloads 94
127 Selected Macrophyte Populations Promote Coupled Nitrification and Denitrification Function in Eutrophic Urban Wetland Ecosystem

Authors: Rupak Kumar Sarma, Ratul Saikia

Abstract:

Macrophytes constitute a major functional group in eutrophic wetland ecosystems. As a key functional element of freshwater lakes, they play a crucial role in regulating various wetland biogeochemical cycles, as well as in maintaining biodiversity at the ecosystem level. The carbon-rich underground biomass of macrophyte populations may harbour diverse microbial communities with significant potential for maintaining different biogeochemical cycles. The present investigation was designed to study the macrophyte-microbe interaction in coupled nitrification and denitrification, considering Deepor Beel Lake (a Ramsar conservation site) of North East India as a model eutrophic system. Highly eutrophic sites of Deepor Beel were selected based on sediment oxygen demand and inorganic phosphorus and nitrogen (P&N) concentrations. Sediment redox potential and lake depth were chosen as the benchmarks for collecting the plant and sediment samples. The average highest depth in winter (January 2016) and summer (July 2016) was recorded as 20 ft (6.096 m) and 35 ft (10.668 m), respectively. Both sampling depth and sampling season had a distinct effect on variation in macrophyte community composition. Overall, the dominant macrophyte populations in the lake were Nymphaea alba, Hydrilla verticillata, Utricularia flexuosa, Vallisneria spiralis, Najas indica, Monochoria hastaefolia, Trapa bispinosa, Ipomea fistulosa, Hygrorhiza aristata, Polygonum hydropiper, Eichhornia crassipes and Euryale ferox. The variation in major sediment physicochemical parameters correlated distinctly with changes in macrophyte community composition. Quantitative estimation revealed an almost even accumulation of nitrate and nitrite in the sediment samples dominated by the plant species Eichhornia crassipes, Nymphaea alba, Hydrilla verticillata, Vallisneria spiralis, Euryale ferox and Monochoria hastaefolia, which might signify a stable nitrification and denitrification process at the sites dominated by these aquatic plants. This was further examined by a systematic analysis of microbial populations through culture-dependent and culture-independent approaches. The culture-dependent bacterial community study revealed higher populations of nitrifiers and denitrifiers in the sediment samples dominated by the six macrophyte species. However, the culture-independent study with bacterial 16S rDNA V3-V4 metagenome sequencing revealed overall similar bacterial phyla in all the sediment samples collected during the study. Thus, there might be an uneven distribution of nitrifying and denitrifying molecular markers among the sediment samples collected during the investigation. The diversity and abundance of the nitrifying and denitrifying molecular markers in the sediment samples are under investigation. Thus, this initial investigation provides a basis for further screening of the role of different aquatic plant functional types in microorganism-mediated nitrogen cycle coupling.
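
A typical downstream step for the 16S rDNA profiles mentioned above is a per-sample diversity comparison. The sketch below computes the Shannon diversity index on a hypothetical phylum count table; it illustrates the kind of analysis involved, not the study's actual data or pipeline.

```python
# Shannon diversity H' from raw taxon counts; the table is hypothetical.
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' of one sample, from raw counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Rows: sediment samples under different macrophytes; columns: bacterial phyla.
otu_table = {
    "Eichhornia_site": [420, 310, 150, 80, 40],
    "Nymphaea_site":   [400, 330, 140, 90, 40],
    "Trapa_site":      [700, 120, 100, 60, 20],
}
for sample, counts in otu_table.items():
    print(sample, round(shannon_index(counts), 3))
```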

Keywords: denitrification, macrophyte, metagenome, microorganism, nitrification

Procedia PDF Downloads 173
126 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube

Authors: Dan Kanmegne

Abstract:

Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have great potential for carbon sequestration; therefore, it can be integrated into carbon emission reduction mechanisms. Particularly in sub-Saharan Africa, the constraint lies in the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies “what is where?” as a precursor to the quantification of carbon stock in different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic practice as agroforestry, since it provides relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve decision-supporting information from this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels used for efficient access and analysis. A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing different agroforestry systems, together with qualitative interviews. A multi-temporal supervised image classification will be done with a random forest algorithm, with the field data used both for training the algorithm and for accuracy assessment. The expected outputs are (i) maps of agroforestry dynamics; (ii) characteristics of the different systems (main species, management, area, etc.); and (iii) an assessment report of the Burkina Faso data cube.
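
The two-stage workflow described above (unsupervised NDVI stratification, then random forest classification trained and validated on field data) can be sketched compactly. In the Python sketch below, the array shapes, simulated NDVI values, and class labels are assumptions; only the 15-strata setting and the train/validate split of field data follow the text.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_pixels, n_dates = 10_000, 108              # e.g. monthly NDVI, 2010-2018
ndvi = rng.uniform(0.1, 0.9, (n_pixels, n_dates))  # simulated NDVI time series

# Stage 1: stratify the study area into 15 vegetation strata.
strata = KMeans(n_clusters=15, n_init=10, random_state=0).fit_predict(ndvi)
print(np.bincount(strata))                   # pixels per stratum

# Stage 2: after fieldwork, train a random forest on labelled pixels and
# hold out part of the field data for accuracy assessment.
labelled = rng.choice(n_pixels, 500, replace=False)
field_class = rng.integers(0, 4, 500)        # e.g. agroforestry system types
split = 350
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(ndvi[labelled[:split]], field_class[:split])
print(f"hold-out accuracy: {rf.score(ndvi[labelled[split:]], field_class[split:]):.2f}")
```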

Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification

Procedia PDF Downloads 145
125 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection, how timing plans are created, and the best workflow for doing so. This paper will also showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain and consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in cases such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data. We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in this research include a camera-based identification method built on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
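
As an illustration of the CNN-based detection the paper describes, the PyTorch sketch below defines a toy classifier over camera frames. The architecture, input size, and class list are illustrative assumptions, not the commercial system's actual network.

```python
# Toy CNN for classifying road users in camera crops; architecture and
# classes are assumptions made for demonstration only.
import torch
import torch.nn as nn

class RoadUserCNN(nn.Module):
    def __init__(self, n_classes=4):   # e.g. car, truck, bicycle, pedestrian
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> fixed-size feature
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = RoadUserCNN()
frame_batch = torch.randn(8, 3, 128, 128)  # 8 RGB camera crops, 128x128
logits = model(frame_batch)                # per-class scores per crop
print(logits.shape)                        # torch.Size([8, 4])
```

In a deployment like the one described, per-class counts aggregated over time from such a detector would feed the timing-plan generation step.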

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 169
124 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems and provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), with suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-1980s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP), and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on the design, development, and implementation processes for all types of DSS, and Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM), and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common usage of Web-based DSS has been to help customers configure products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices, and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring, and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To address this gap, this paper surveys recent articles about DSS. The literature has been reviewed in depth, and by classifying previous studies according to their emphases, a taxonomy for DSS has been prepared. With the aid of this taxonomic review and the recent developments in the field, this study aims to analyze future trends in decision support systems.

Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review

Procedia PDF Downloads 279
123 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools

Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang

Abstract:

For many years, the Philippines has depended on oil, coal, and other fossil fuels for most of its energy. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The expanding energy needs of the country have led to increasing efforts to promote and develop renewable energy. This research is part of the government initiative in preparation for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to present it in usable maps. This study focuses on the site suitability analysis of four renewable energy sources: biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. The site assessment is a key component in determining the most suitable locations for the construction of renewable energy power plants. The approach maximizes the technical methods of resource assessment while also taking into account the environmental, social, and accessibility aspects of identifying potential sites, by utilizing and integrating two different methods: Multi-Criteria Decision Analysis (MCDA) and Geographic Information System (GIS) tools. For the MCDA, the Analytic Hierarchy Process (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, various experts from different fields were consulted: scientists, policy makers, environmentalists, and industrialists. Consulting a well-represented group of people is important to avoid bias in the resulting hierarchy levels and weight matrices. AHP pairwise matrix computation is utilized to derive the weights per level from the experts' gathered feedback, while the threshold values, derived from related literature, international studies, and government laws, were then validated with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision support outputs into visual maps. In particular, this study uses Euclidean distance to compute the distance values of each parameter, a fuzzy membership algorithm to normalize the Euclidean distance output, and the Weighted Overlay tool to aggregate the layers. Using the Natural Breaks algorithm, the suitability ratings of each map are classified into 5 discrete suitability categories: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, classes are formed by grouping similar values, with each subdivision set apart from the rest by large differences in boundary values. Results show that across the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice the most suitable at 75.76%, whereas wind has the lowest suitability percentage at 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%, respectively.
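
The AHP weight derivation mentioned above is standard: weights come from the principal eigenvector of a pairwise comparison matrix, checked with a consistency ratio. The sketch below shows this on an illustrative 3x3 matrix; the criteria and judgment values are invented, not the study's expert matrices.

```python
# AHP weights via the principal-eigenvector method, with Saaty's
# consistency check; the pairwise judgments are illustrative only.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # e.g. resource vs. environmental vs. access
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()               # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index for n criteria
print(weights, ci / ri)                # CR < 0.10 is conventionally acceptable
```

In a workflow like the one described, each criterion layer would then be normalized (the fuzzy membership step) and combined with these weights in the weighted overlay.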

Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS

Procedia PDF Downloads 149
122 Study of the Association between Salivary Microbiological Data, Oral Health Indicators, Behavioral Factors, and Social Determinants among Post-COVID Patients Aged 7 to 12 Years in Tbilisi City

Authors: Lia Mania, Ketevan Nanobashvili

Abstract:

Background: The coronavirus disease COVID-19 has become the cause of a global health crisis during the current pandemic. This study aims to fill the paucity of epidemiological studies on the impact of COVID-19 on the oral health of pediatric populations. Methods: An observational, cross-sectional study was conducted in Tbilisi (the capital of Georgia) among 7 to 12-year-old PCR- or rapid-test-confirmed post-COVID populations in all districts of the city (10 districts in total). A total of 332 beneficiaries who had been infected with COVID-19 within the preceding year were included in the study. The population was selected in schools of Tbilisi according to the principle of cluster selection, with simple random selection within the selected clusters. According to this principle, an equal number of beneficiaries were selected in all districts of Tbilisi. By July 1, 2022, according to National Center for Disease Control and Public Health data (NCDC.Ge), the number of test-confirmed cases in the population aged 0-18 in Tbilisi was 115,137 children (17.7% of all confirmed cases). The number of patients to be examined was determined by the sample size. Oral screening, microbiological examination of saliva, and administration of oral health questionnaires to guardians were performed. Statistical processing of the data was done with SPSS-23. Risk factors were estimated by odds ratios and logistic regression with 95% confidence intervals. Results: Statistically significant differences between the means of the oral health indicators in the asymptomatic and symptomatic COVID-infected groups were found: for caries intensity (DMF+def) t=4.468 and p=0.000, for the modified gingival index (MGI) t=3.048, p=0.002, and for the simplified oral hygiene index (S-OHI) t=4.853, p=0.000. Symptomatic COVID infection has a significant effect on the oral microbiome (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis); (n=332; 77.3% vs n=332; 58.0%; OR=2.46, 95%CI: 1.318-4.617). According to the logistic regression, the severity of the COVID infection has a significant effect on the frequency of pathogenic and conditionally pathogenic bacteria in the oral cavity (B=0.903, AOR=2.467, 95% CI: 1.318-4.617). Symptomatic COVID infection affects oral health indicators regardless of the presence of other risk factors, such as parental employment status, tooth brushing behaviors, carbohydrate meals, and fruit consumption (p<0.05). Conclusion: Risk factors (parental employment status, tooth brushing behaviors, carbohydrate consumption) were associated with poorer oral health status in a post-COVID population of 7- to 12-year-old children. However, symptomatic COVID infection as a risk factor affected the oral microbiome in terms of the abundant growth of pathogenic and conditionally pathogenic bacteria (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis) and further worsened oral health indicators. Thus, a close association was established between symptomatic COVID infection and microbiome changes in the post-COVID period, as well as between the oral health indicator variables and the symptomatic course of COVID infection.
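
As an illustration of the statistical step (the paper used SPSS-23), the sketch below shows how adjusted odds ratios with 95% confidence intervals fall out of a logistic regression in Python; the data frame and variable names are hypothetical stand-ins, not the study's data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical analysis frame: one row per child; the binary outcome is
    # abundant growth of pathogenic oral bacteria. Predictors are illustrative.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "pathogenic_growth": rng.integers(0, 2, 332),
        "symptomatic_covid": rng.integers(0, 2, 332),
        "brushes_twice_daily": rng.integers(0, 2, 332),
    })

    X = sm.add_constant(df[["symptomatic_covid", "brushes_twice_daily"]])
    fit = sm.Logit(df["pathogenic_growth"], X).fit(disp=0)

    # Exponentiated coefficients are adjusted odds ratios (AOR);
    # exponentiating the coefficient CI gives the 95% CI of the AOR.
    print(np.exp(fit.params))
    print(np.exp(fit.conf_int()))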

Keywords: oral microbiome, COVID-19, population based research, oral health indicators

Procedia PDF Downloads 69
121 From Modelled Design to Reality through Material and Machinery Lab and Field Tests: Porous Concrete Carparks at the Wanda Metropolitano Stadium in Madrid

Authors: Manuel de Pazos-Liano, Manuel Cifuentes-Antonio, Juan Fisac-Gozalo, Sara Perales-Momparler, Carlos Martinez-Montero

Abstract:

The first-ever game in the Wanda Metropolitano Stadium, the new home of the Club Atletico de Madrid, was played on September 16, 2017, thanks to the work of a multidisciplinary team that made it possible to combine urban development with sustainability goals. The new football ground sits on a 1.2 km² site owned by the city of Madrid. Its construction has dramatically increased the sealed area of the site (raising the runoff coefficient from 0.35 to 0.9), and the surrounding sewer network has no capacity for that extra flow. As an alternative to enlarging the existing 2.5 m diameter pipes, it was decided to detain runoff on site by means of an integrated and durable infrastructure that would neither inflate the construction cost nor represent a maintenance burden for the municipality. Instead of the more conventional option of building a large concrete detention tank, the decision was taken to use pervious pavement on the 3013 car parking spaces for sub-surface water storage, a solution aligned with the city water ordinance and the Madrid + Natural project. Making the idea a reality, in only five months and during the summer season (which forced the porous concrete to be poured only overnight), was a challenge never faced before in Spain that required innovation on both the material and the machinery side. The process consisted of: a) defining the characteristics required for the porous concrete (compressive strength of 15 N/mm2 and 20% voids); b) testing different porous concrete dosages at the construction company laboratory; c) establishing the cross section to provide structural strength and sufficient water detention capacity (20 cm of porous concrete over 5 cm of 5/10 gravel, sitting on a 50 cm coarse 40/50 aggregate sub-base separated by a virgin-fiber polypropylene geotextile fabric); d) hydraulic computer modelling (using the Full Hydrograph Method based on the Wallingford Procedure) to estimate the decrease in design peak flows (an average of 69% at the three car parking lots); e) use of a variety of machinery for the application of the porous concrete to achieve both structural strength and a permeable surface (including an inverse rotating roller imported from the USA, and the so-called CMI, a sliding concrete paver used in the construction of motorways with rigid pavements); f) full-scale pilots and final construction testing by an accredited laboratory (pavement compressive strength average value of 15 N/mm2 and 0.0032 m/s permeability). The continuous testing and innovating construction process explained in detail within this article allowed performance to grow with time, finally proving the CMI valid also for large porous car park applications. All this resulted in a success story that converts the Wanda Metropolitano Stadium into a great demonstration site that will help the application of the Spanish Royal Decree 638/2016 (the site also includes rainwater harvesting for grass irrigation).
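
For orientation, the detention capacity implied by that cross section can be approximated from layer thicknesses and void fractions; in this rough sketch the void ratios of the two gravel layers are assumptions for illustration, with only the 20% voids of the porous concrete taken from the specification above.

    # Rough water-detention estimate per square metre of pavement.
    layers = [
        ("porous concrete", 0.20, 0.20),  # thickness [m], void fraction (20% per spec)
        ("5/10 gravel bed", 0.05, 0.35),  # assumed void fraction
        ("40/50 sub-base",  0.50, 0.40),  # assumed void fraction
    ]

    storage_per_m2 = sum(t * v for _, t, v in layers)  # m3 of water per m2
    print(f"storage: {storage_per_m2:.3f} m3/m2 = {1000 * storage_per_m2:.0f} L/m2")

    # A nominal 2.5 m x 5.0 m parking space would then detain roughly:
    print(f"per space: {storage_per_m2 * 2.5 * 5.0:.2f} m3")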

Keywords: construction machinery, permeable carpark, porous concrete, SUDS, sustainable development

Procedia PDF Downloads 144
120 Explosive Clad Metals for Geothermal Energy Recovery

Authors: Heather Mroz

Abstract:

Geothermal fluids can provide a nearly unlimited source of renewable energy but are often highly corrosive due to dissolved carbon dioxide (CO2), hydrogen sulphide (H2S), ammonia (NH3) and chloride ions. The corrosive environment drives material selection for many components, including piping, heat exchangers and pressure vessels, to higher alloys of stainless steel, nickel-based alloys and titanium. The use of these alloys is cost-prohibitive and does not offer the pressure rating of carbon steel. One solution, explosion cladding, has been proven to reduce the capital cost of geothermal equipment while retaining the mechanical and corrosion properties of both the base metal and the cladded surface metal. Explosion cladding is a solid-state welding process that uses precision explosions to bond two dissimilar metals while retaining their mechanical, electrical and corrosion properties. The process is commonly used to clad steel with a thin layer of corrosion-resistant alloy metal, such as stainless steel, brass, nickel, silver, titanium, or zirconium. Additionally, explosion welding can join a wide array of compatible and non-compatible metals, with more than 260 metal combinations possible. The explosion weld is achieved in milliseconds; therefore, no bulk heating occurs, and the metals experience no dilution. By adhering to a strict set of manufacturing requirements, both the shear strength and tensile strength of the bond will exceed the strength of the weaker metal, ensuring the reliability of the bond. For over 50 years, explosion cladding has been used in the oil and gas and chemical processing industries and has provided significant economic benefit in reduced maintenance and lower capital costs over solid construction. The focus of this paper will be on the many benefits of using explosion clad in process equipment instead of more expensive solid alloy construction. It will describe the method of clad-plate production by explosion welding and the methods employed to ensure sound bonding of the metals, as well as the origins of explosion cladding and recent technological developments. Traditionally, explosion-clad plate was formed into vessels, tube sheets and heads, but recent advances include explosion-welded piping. The final portion of the paper will give examples of the use of explosion-clad metals in geothermal energy recovery. The classes of materials used for geothermal brine will be discussed, including stainless steels, nickel alloys and titanium. These examples will include heat exchangers (tube sheets), high-pressure and horizontal separators, standard-pressure crystallizers, piping and well casings. It is important to educate engineers and designers on material options as they develop equipment for geothermal resources. Explosion cladding is a niche technology that can be successful in many situations, like geothermal energy recovery, where high-temperature, high-pressure and corrosive environments are typical. Applications for explosion-clad metals include vessel and heat exchanger components as well as piping.

Keywords: clad metal, explosion welding, separator material, well casing material, piping material

Procedia PDF Downloads 154
119 Low-carbon Footprint Diluents in Solvent Extraction for Lithium-ion Battery Recycling

Authors: Abdoulaye Maihatchi Ahamed, Zubin Arora, Benjamin Swobada, Jean-yves Lansot, Alexandre Chagnes

Abstract:

The lithium-ion battery (LiB) is the technology of choice in the development of electric vehicles. But there are still many challenges, including the development of positive electrode materials exhibiting high cycle ability, high energy density, and low environmental impact. For the latter, LiBs must be manufactured in a circular approach by developing appropriate strategies to reuse and recycle them. Presently, the recycling of LiBs is carried out by the pyrometallurgical route, but more and more processes implement, or will implement, the hydrometallurgical route or a combination of pyrometallurgical and hydrometallurgical operations. After producing the black mass by mineral processing, the hydrometallurgical process consists in leaching the black mass in order to recover the metals contained in the cathodic material. These metals are then extracted selectively by liquid-liquid extraction, solid-liquid extraction, and/or precipitation stages. Liquid-liquid extraction combined with precipitation/crystallization steps is the most widely implemented operation in the LiB recycling process to selectively extract copper, aluminum, cobalt, nickel, manganese, and lithium from the leaching solution and precipitate these metals as high-grade sulfate or carbonate salts. Liquid-liquid extraction consists in contacting an organic solvent and an aqueous feed solution containing several metals, including the targeted metal(s) to extract. The organic phase is immiscible with the aqueous phase. It is composed of an extractant to extract the target metals and a diluent, which is usually aliphatic kerosene produced by the petroleum industry. Sometimes, a phase modifier is added to the formulation of the extraction solvent to avoid third-phase formation. The extraction properties of the solvent do not depend only on the chemical structure of the extractant; they may also depend on the nature of the diluent. Indeed, diluent-diluent interactions can influence, to a greater or lesser extent, the interactions between extractant molecules, besides the extractant-diluent interactions. Only a few studies in the literature have addressed the influence of the diluent on the extraction properties, while many studies have focused on the effect of the extractants. Recently, new low-carbon-footprint aliphatic diluents were produced by catalytic dearomatisation and distillation of bio-based oil. This study aims to investigate the influence of the nature of the diluent on the extraction properties of three extractants towards cobalt, nickel, manganese, copper, aluminum, and lithium: Cyanex®272 for nickel-cobalt separation, DEHPA for manganese extraction, and Acorga M5640 for copper extraction. The diluents used in the formulation of the extraction solvents are (i) low-odor aliphatic kerosenes produced by the petroleum industry (ELIXORE 180, ELIXORE 230, ELIXORE 205, and ISANE IP 175) and (ii) bio-sourced aliphatic diluents (DEV 2138, DEV 2139, DEV 1763, DEV 2160, DEV 2161 and DEV 2063). After discussing the effect of the diluents on the extraction properties, this contribution will address the development of a low-carbon-footprint process based on the use of the best bio-sourced diluent for the production of high-grade cobalt sulfate, nickel sulfate, manganese sulfate, and lithium carbonate, as well as copper metal.

Keywords: diluent, hydrometallurgy, lithium-ion battery, recycling

Procedia PDF Downloads 88
118 Examining the Influence of Ultrasonic Power and Frequency on Microbubble Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control

Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul

Abstract:

Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium by either oscillating or discharging energy into the system through microbubble explosion. The turbulent flow regime and shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line phase contrast imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS Biomedical Imaging and Therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast at the gas/liquid interface for accurate qualitative and quantitative analysis of bubble cavitation within the system. With the high flux of photons and the high-speed camera, a typically high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for the detailed quantitative analyses of microbubbles. The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W, and at US frequency levels of 20 kHz, 28 kHz, and 40 kHz. Over an imaging duration of 2 seconds, the effects of US power and frequency on the average number, size, and area fraction occupied by bubbles were analyzed. Microbubble dynamics in terms of velocity in water was also investigated. As the US power increased from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than 808 bubbles at 40 kHz), while the average bubble size was almost seven times larger than at 40 kHz. The majority of bubbles were captured close to the membrane surface in the filtration unit. According to these observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency due to the higher energy release into the system, through an increased number of bubbles or their growth during oscillation (the optimum condition is expected to be 20 kHz and 100 W).
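
The quantitative image analysis was done in ImageJ; purely as an illustration, an equivalent bubble count and mean-diameter measurement on a single radiograph could look like the following Python sketch, where the threshold choice and pixel size are assumptions rather than the study's calibration.

    import numpy as np
    from skimage import filters, measure, morphology

    def bubble_stats(frame, pixel_size_um=6.5):
        # Count bubbles and estimate their mean diameter in one frame.
        # Bubbles appear as dark phase-contrast blobs: threshold, then clean up.
        mask = frame < filters.threshold_otsu(frame)
        mask = morphology.remove_small_objects(mask, min_size=20)
        regions = measure.regionprops(measure.label(mask))
        diameters = [r.equivalent_diameter * pixel_size_um for r in regions]
        mean_d = float(np.mean(diameters)) if diameters else 0.0
        return len(regions), mean_d, mask.mean()  # count, mean diameter [um], area fraction

    # Example on a synthetic frame:
    frame = np.random.default_rng(1).normal(0.8, 0.05, (512, 512))
    print(bubble_stats(frame))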

Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning

Procedia PDF Downloads 149
117 Comprehensive Analysis of RNA m5C Regulator ALYREF as a Suppressive Factor of Anti-tumor Immunity and a Potential Tumor Prognostic Marker in Pan-Cancer

Authors: Yujie Yuan, Yiyang Fan, Hong Fan

Abstract:

Objective: The RNA methylation recognition protein Aly/REF export factor (ALYREF), a “reader” protein that recognizes m5C, has been reported to be involved in several biological processes, including cancer initiation and progression. 5-methylcytosine (m5C) is a conserved and prevalent RNA modification in all species, and accumulating evidence suggests its role in promoting tumorigenesis. It has been claimed that ALYREF mediates nuclear export of mRNA with m5C modification and regulates biological effects in cancer cells. However, the regulatory pathways of ALYREF in cancer tissues have not yet been systematically clarified. Methods: The expression level of ALYREF in pan-cancer and corresponding normal tissues was compared using data acquired from The Cancer Genome Atlas (TCGA). The University of Alabama at Birmingham Cancer Data Analysis Portal (UALCAN) was used to analyze the relationship between ALYREF and clinicopathological features. The relationship between the expression level of ALYREF and pan-cancer prognosis, as well as the genes correlated with ALYREF, were determined using the Gene Expression Profiling Interactive Analysis (GEPIA) database. Immune-related genes were obtained from TISIDB (an integrated repository portal for tumor-immune system interactions). Immune-related analyses were conducted using Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data (ESTIMATE) and TIMER. Results: Based on the data acquired from TCGA, ALYREF shows markedly higher expression in various types of cancers compared with the relevant normal tissues, excluding thyroid carcinoma and kidney chromophobe. The immunohistochemical images on The Human Protein Atlas showed that ALYREF can be detected in the cytoplasm and membrane but is mainly located in the nucleus. In addition, a higher expression level of ALYREF in tumor tissue is associated with a poor prognosis in the majority of cancers. Based on these results, cancers with a higher expression level of ALYREF compared with normal tissues and a significant correlation between ALYREF and prognosis were selected for further analysis. Using TISIDB, we found that a portion of ALYREF co-expressed genes (such as BIRC5, H2AFZ, CCDC137, TK1, and PPM1G) with high Pearson correlation coefficients (PCC) are involved in anti-tumor immunity or affect resistance or sensitivity to T cell-mediated killing. Furthermore, based on the results acquired from GEPIA, there was a significant correlation between ALYREF and PD-L1. A negative correlation between the expression level of ALYREF and the ESTIMATE score was also revealed. Conclusion: The present study indicated that ALYREF plays a vital and universal role in cancer initiation and progression in pan-cancer through regulating mitotic progression, DNA synthesis, metabolic processes, and RNA processing. The correlation between ALYREF and PD-L1 implies that ALYREF may affect the therapeutic effect of tumor immunotherapy. Further evidence revealed that ALYREF may play an important role in tumor immunomodulation. The correlation between ALYREF and immune cell infiltration levels indicates that ALYREF could be a potential therapeutic target. Exploring the regulatory mechanism of ALYREF in tumor tissues may reveal the reasons for the poor efficacy of immunotherapy and offer further directions for tumor treatment.
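
For readers unfamiliar with the screening step, the following minimal sketch shows the kind of Pearson-correlation filter that underlies the co-expression analysis (performed in the study through GEPIA/TISIDB); the expression values below are simulated, not TCGA data.

    import numpy as np
    from scipy import stats

    # Simulated expression vectors across 200 tumor samples.
    rng = np.random.default_rng(42)
    alyref = rng.normal(size=200)
    candidates = {g: 0.6 * alyref + rng.normal(size=200)
                  for g in ["BIRC5", "H2AFZ", "CCDC137", "TK1", "PPM1G"]}

    for gene, values in candidates.items():
        r, p = stats.pearsonr(alyref, values)
        if r > 0.4 and p < 0.05:  # example cutoff for calling a gene co-expressed
            print(f"{gene}: PCC={r:.2f}, p={p:.1e}")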

Keywords: ALYREF, pan-cancer, immunotherapy, PD-L1

Procedia PDF Downloads 71
116 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, adding further to a perennial global concern. Forest fires often lead to devastating consequences, ranging from the loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption for decision-making in diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fires. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 72
115 Strategies for Drought Adaptation and Mitigation via Wastewater Management

Authors: Simrat Kaur, Fatema Diwan, Brad Reddersen

Abstract:

The unsustainable and injudicious use of natural renewable resources beyond the self-replenishment limits of our planet has proved catastrophic. Most of the Earth's resources, including land, water, minerals, and biodiversity, have been overexploited. Owing to this, there is a steep rise in global natural calamities of contrasting nature, such as torrential rains, storms, heat waves, rising sea levels, and megadroughts. These are all interconnected through common elements, namely oceanic currents and the land's green cover. Deforestation fueled by 'economic elites' and global players has already cleared massive forests and ecological biomes in every region of the globe, including the Amazon. These were natural carbon sinks that had prevailed and performed CO2 sequestration for millions of years. The forest biomes have been turned into mono-cultivation farms to produce feedstock crops such as soybean, maize, and sugarcane, which are among the biggest greenhouse gas emitters. Such unsustainable agricultural practices only provide feedstock for livestock and food processing industries with huge carbon and water footprints, two factors that have a 'cause and effect' relationship in the context of climate change. In contrast to organic and sustainable farming, mono-cultivation practices to produce food, fuel, and feedstock using chemicals deprive the soil of its fertility, abstract surface and ground waters beyond the limits of replenishment, emit greenhouse gases, and destroy biodiversity. There are numerous cases across the planet where, due to overuse, the levels of surface water reservoirs, such as Lake Mead in the southwestern USA, and of groundwater, such as in Punjab, India, have shrunk dramatically. Unlike the rain-fed food production systems on which the poor communities of the world rely, blue-water (surface and ground water) dependent mono-cropping for industrial and processed food creates a water deficit that puts the burden on domestic users. Excessive abstraction of both surface and ground waters for water-intensive feedstock (soybean, maize, sugarcane), cereal crops (wheat, rice), and cash crops (cotton) has a dual and synergistic impact on global greenhouse gas emissions and the prevalence of megadroughts. Both these factors have elevated global temperatures, which have caused cascading events such as soil water deficits, flash fires, and unprecedented burning of woodland, creating megafires in multiple regions, namely the USA, South America, Europe, and Australia. Therefore, it is imperative to reduce the green and blue water footprints of the agricultural and industrial sectors through the recycling of black and gray waters. This paper explores various opportunities for the successful implementation of wastewater management for drought preparedness in high-risk communities.

Keywords: wastewater, drought, biodiversity, water footprint, nutrient recovery, algae

Procedia PDF Downloads 100
114 Utilization of Informatics to Transform Clinical Data into a Simplified Reporting System to Examine the Analgesic Prescribing Practices of a Single Urban Hospital’s Emergency Department

Authors: Rubaiat S. Ahmed, Jemer Garrido, Sergey M. Motov

Abstract:

Clinical informatics (CI) enables the transformation of data into a systematic organization that improves the quality of care and the generation of positive health outcomes. Innovative technology through informatics that compiles accurate data on analgesic utilization in the emergency department (ED) can enhance pain management in this important clinical setting. We aim to establish a simplified reporting system through CI to examine and assess analgesic prescribing practices in the ED through the execution of a U.S. federal grant project on opioid reduction initiatives. Queried data points of interest from a level-one trauma ED's electronic medical records were used to create data sets and develop informational/visual reporting dashboards (on Microsoft Excel and Google Sheets) concerning analgesic usage across several pre-defined parameters and performance metrics using CI. The data were then qualitatively analyzed by departmental clinicians and leadership to evaluate ED analgesic prescribing trends. During a 12-month reporting period (Dec. 1, 2020 – Nov. 30, 2021) for the ongoing project, about 41% of all ED patient visits (N = 91,747) were for pain conditions, of which 81.6% received analgesics in the ED and at discharge (D/C). Of those treated with analgesics, 24.3% received opioids compared to 75.7% receiving opioid alternatives in the ED and at D/C, including non-pharmacological modalities. Demographics showed that among patients receiving analgesics, 56.7% were aged between 18-64, 51.8% were male, 51.7% were white, and 66.2% had government-funded health insurance. Ninety-one percent of all opioids were prescribed in the ED, with intravenous (IV) morphine, IV fentanyl, and morphine sulfate immediate release (MSIR) tablets accounting for 88.0% of ED-dispensed opioids. Of the 9.3% of all opioids prescribed at D/C, MSIR was dispensed 72.1% of the time. Hydrocodone, oxycodone, and tramadol were used only 10-15% of the time, and hydromorphone not at all. Of the opioid alternatives, non-steroidal anti-inflammatory drugs were utilized 60.3% of the time, local anesthetics and ultrasound-guided nerve blocks 23.5%, and acetaminophen 7.9%, as the primary non-opioid drug categories prescribed by ED providers. Non-pharmacological analgesia included virtual reality and other modalities. An average of 18.5 ED opioid orders and 1.9 opioid D/C prescriptions per 102.4 daily ED patient visits was observed for the period. Compared to other specialties within our institution, 2.0% of opioid D/C prescriptions are given by ED providers, compared to the national average of 4.8%. Opioid alternatives accounted for 69.7% and 30.3% of usage in the ED and at D/C, respectively, versus 90.7% and 9.3% for opioids. There is a pressing need for concise, relevant, and reliable clinical data on analgesic utilization for ED providers and leadership to evaluate prescribing practices and make data-driven decisions. Basic computer software can be used to create effective visual reporting dashboards with indicators that convey relevant and timely information in an easy-to-digest manner. We accurately examined our ED's analgesic prescribing practices using CI through dashboard reporting. Such reporting tools can quickly identify key performance indicators and prioritize data to enhance pain management and promote safe prescribing practices in the emergency setting.
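
As a sketch of the kind of aggregation behind such dashboards, the snippet below computes an opioid share of analgesic orders by care setting with pandas; the column names and values are hypothetical, not the hospital's actual EMR schema.

    import pandas as pd

    # Hypothetical extract of ED analgesic orders.
    orders = pd.DataFrame({
        "visit_id":   [1, 2, 3, 4, 5, 6],
        "setting":    ["ED", "ED", "ED", "D/C", "D/C", "ED"],
        "drug_class": ["opioid", "NSAID", "nerve_block", "opioid", "NSAID", "opioid"],
    })

    # Share of opioid analgesia by setting -- the kind of indicator the
    # Excel/Google Sheets dashboards surface.
    orders["is_opioid"] = orders["drug_class"].eq("opioid")
    summary = (orders.groupby("setting")["is_opioid"]
                     .agg(n_orders="size", opioid_share="mean"))
    summary["opioid_share"] = (100 * summary["opioid_share"]).round(1)
    print(summary)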

Keywords: clinical informatics, dashboards, emergency department, health informatics, healthcare informatics, medical informatics, opioids, pain management, technology

Procedia PDF Downloads 144
113 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
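
A minimal sketch of one of the compared regressors (support vector machine regression) with the ten repeated random splits described above is shown below; the synthetic spectra, shapes, and hyperparameters are illustrative assumptions, not the paper's data or tuned settings.

    import numpy as np
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic stand-in for the data set: 200 spectra x 100 wavelengths,
    # glucose references in 20 mg/dl increments.
    rng = np.random.default_rng(7)
    glucose = rng.choice(np.arange(0, 400, 20), size=200).astype(float)
    spectra = (np.outer(glucose, rng.normal(size=100)) * 1e-3
               + rng.normal(scale=0.05, size=(200, 100)))

    # Repeat the random train/test split ten times to gauge generalization.
    scores = []
    for seed in range(10):
        X_tr, X_te, y_tr, y_te = train_test_split(
            spectra, glucose, test_size=0.2, random_state=seed)
        model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=1.0))
        model.fit(X_tr, y_tr)
        scores.append(r2_score(y_te, model.predict(X_te)))

    print(f"mean R^2 over 10 splits: {np.mean(scores):.3f}")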

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 94
112 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis

Authors: Kimberly Samaha

Abstract:

In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into circular, zero-waste Bio-Hub Ecosystems. The Bio-Hub model first targets a 'whole-tree' approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital/operational expenditures and investment risk reductions, using a proprietary techno-economic model that incorporates investment risk scenarios via the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host at an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and was built up in stages to include the co-hosts of a greenhouse and a land-based shrimp farm. Phase I incorporates CO2 and waste heat streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture, and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location de-risks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture, or aquaculture. The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub, to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a Federal Program, which allows for capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical tool to expedite new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current 'take-make-dispose' paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
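
To illustrate the Monte Carlo step, the sketch below compares the simulated NPV spread of a stand-alone plant with that of a co-hosted Bio-Hub whose shared waste streams stabilize cash flows; every figure and distribution here is an illustrative assumption, not an input of the study's proprietary model.

    import numpy as np

    rng = np.random.default_rng(123)
    n, years, rate = 100_000, 20, 0.08
    discount = 1 / (1 + rate) ** np.arange(1, years + 1)

    def npv(capex, cash_mean, cash_sd):
        # n simulated 20-year annual cash-flow paths, discounted to present.
        cash = rng.normal(cash_mean, cash_sd, size=(n, years))
        return cash @ discount - capex

    standalone = npv(capex=50.0, cash_mean=6.0, cash_sd=3.0)  # $M, volatile cash
    biohub = npv(capex=62.0, cash_mean=8.5, cash_sd=2.0)      # shared streams cut volatility

    for name, dist in [("stand-alone", standalone), ("bio-hub", biohub)]:
        print(f"{name}: mean NPV {dist.mean():.1f} $M, P(loss) {(dist < 0).mean():.1%}")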

Keywords: bio-economy, investment risk, circular design, economic modelling

Procedia PDF Downloads 101
111 Spatio-Temporal Dynamics of Woody Vegetation Assessment Using Oblique Landscape Photographs

Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Ground-level landscape photos can be used as a source of objective data on woody vegetation and vegetation dynamics. We propose a method for processing, analyzing, and presenting ground photographs with the following features: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the ground-level landscape photographs are created, or existing ones supplemented; 4) single or multiple ground-level landscape photographs can be used to develop specialized geoinformation layers, schematic maps, or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. It is suggested to match each photo with a polygonal geoinformation layer, which is a sector consisting of areas corresponding to parts of the landscape visible in the photos. Calculation of visibility areas is performed in a geoinformation system within a sector, using a digital model of the study area relief and visibility analysis functions. Superposition of the visibility sectors corresponding to various camera viewpoints allows landscape photos to be matched with each other to create a complete and holistic representation of the space in question. It is suggested to mark user-defined objects or phenomena on the images, with subsequent superposition over the visibility sector in the form of map symbols. The technology of spatial superposition of geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language. The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal layers of visibility sectors for a large number of different photography points are topologically superimposed, a visibility layer covering the sections of the entire study area displayed in the photographs is formed. As a result of this overlapping of sectors, areas that do not appear in any photo are assessed as gaps. From this procedure, it becomes possible to determine which photos display a specific area and from which photography points it is visible; this information may be obtained either as a query on the map or as a query on the attribute table of the layer. The method was tested using repeated photos taken from forty camera viewpoints located on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 until 2023. It has been successfully used in combination with other ground-based and remote sensing methods for studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education project No. FEUG-2023-0002 (image representation) and Russian Science Foundation project No. 24-24-00235 (automated textual description).
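
A minimal sketch of the underlying visibility test is given below: a single line-of-sight check over a digital elevation model grid, the primitive that GIS viewshed functions repeat over every cell of a sector; cell indexing and the camera height are illustrative assumptions.

    import numpy as np

    def visible(dem, cam, target, cam_height=1.7):
        # Return True if `target` is visible from `cam` on the DEM grid.
        (r0, c0), (r1, c1) = cam, target
        z0 = dem[r0, c0] + cam_height
        steps = max(abs(r1 - r0), abs(c1 - c0))
        for s in range(1, steps):
            t = s / steps
            r = round(r0 + t * (r1 - r0))
            c = round(c0 + t * (c1 - c0))
            line_z = z0 + t * (dem[r1, c1] - z0)  # sight-line elevation at this cell
            if dem[r, c] > line_z:
                return False  # terrain blocks the view
        return True

    dem = np.zeros((100, 100))
    dem[50, 40:60] = 30.0  # a ridge across the line of sight
    print(visible(dem, cam=(50, 10), target=(50, 90)))  # False: ridge blocks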

Keywords: woody vegetation, repeated photographs

Procedia PDF Downloads 89
110 Household Water Practices in a Rapidly Urbanizing City and Its Implications for the Future of Potable Water: A Case Study of Abuja Nigeria

Authors: Emmanuel Maiyanga

Abstract:

Access to freshwater of sufficiently good quality has been a global challenge, but most notably in low-income countries, particularly the Sub-Saharan countries, of which Nigeria is one. Urban populations are soaring, especially in many low-income countries; existing centralised water supply infrastructures are ageing and inadequate; moreover, household lifestyles have become more water-demanding. People therefore mostly devise coping strategies where municipal supply is perceived to have failed. This development threatens the future of groundwater and calls for a review of management strategy and research approach. The various issues associated with water demand management in low-income countries, and Nigeria in particular, are well documented in the literature. However, the way people use water daily in households and the reasons they do so, and how this situation is constructing demand among the middle-class population of Abuja, Nigeria, are poorly understood. This is what this research aims to unpack. It does so using the social practices research approach (based on the Theory of Practices) to understand how the situation impacts the shared groundwater resource. A qualitative method was used for data gathering, involving audio-recorded interviews with householders and water professionals in the private and public sectors, as well as observation, note-taking, and document study. The data were analysed thematically using NVIVO software. The research reveals the major household practices that draw on water at the domestic scale: water sourcing, body hygiene and sanitation, laundry, kitchen, and outdoor practices (car washing, domestic livestock farming, and gardening). Among these, water sourcing, body hygiene, kitchen, and laundry practices are identified as having the greatest impact on groundwater, with the scale of impact varying with household peculiarities. Water sourcing practices involve people sourcing mostly from personal boreholes, because the municipal water supply is perceived as inadequate and unreliable in terms of service delivery and water quality, and people prefer the easier, unlimited access and control that boreholes provide. Body hygiene practices reveal that every respondent prefers bucket bathing at least once daily, and the majority bathe twice or more every day; frequency is determined by the feeling of heat and dirt on the skin. Thus, people bathe to cool down, stay clean, and satisfy perceived social, religious, and hygiene demands. Kitchen practice consumes water significantly, as people run the tap for vegetable washing in daily food preparation and for dishwashing after each meal. Laundry practice reveals that most people wash clothes most frequently (twice a week) during hot and dusty weather; hand washing in basins and buckets is the most prevalent method and wastes water due to soap overdose. The research also reveals poor water governance as a major cause of the current inadequate municipal water delivery. The implication of poor governance and the widespread use of boreholes is uncontrolled abstraction of groundwater to satisfy desired household practices, putting the future of the shared aquifer at great risk of total depletion, with attendant multiplying effects on people and the environment as the population continues to soar.

Keywords: boreholes, groundwater, household water practices, self-supply

Procedia PDF Downloads 123
109 To Smile or Not to Smile: How Engendered Facial Cues Affect Hiring Decisions

Authors: Sabrina S. W. Chan, Emily Schwartzman, Nicholas O. Rule

Abstract:

Past literature showed mixed findings on how smiling affects a person's chance of getting hired. On one hand, smiling suggests enthusiasm and cooperativeness, which can elicit positive impressions. On the other hand, smiling can suggest weaker professionalism or serve as a filler to hide nervousness, which can lower a candidate's perceived competence. Emotion expressions can also be perceived differently depending on the person's gender and can activate certain gender stereotypes. Women especially face a double bind with respect to hiring decisions and smiling. Because women are socially expected to smile more, those who do not smile will be considered stereotype-incongruent. This becomes a noisy signal to employers and may lower their chance of being hired. However, women's smiling as a formality may also be an obstacle: they are more likely to put on fake smiles, but if they do, they are also likely to be perceived as inauthentic and over-expressive. This paper sought to investigate how smiling affects hiring decisions, and whether this relationship is moderated by gender. In Study 1, participants were shown a series of smiling and emotionally neutral face images, incorporated into fabricated LinkedIn profiles, and were asked to rate how hireable they thought each candidate was. Results showed that participants rated smiling candidates as more hireable than nonsmiling candidates, with no difference by gender. Moreover, individuals who did not study business were more biased in their perceptions than those who did. Since the results showed trending favoritism toward female targets, suggesting possible desirability bias, a second study was conducted to collect implicit measures of the decision-making process. In Study 2, a mouse-tracking design was adopted to explore whether participants' implicit attitudes differed from their explicit responses on hiring. Participants were asked to respond whether they would offer an interview to a candidate. The findings from Study 1 were replicated in that smiling candidates received more offers than neutral-faced candidates. Results also showed that female candidates received significantly more offers than male candidates, but this was associated with higher attractiveness ratings. There were no significant findings in reaction time or change of decisions. However, stronger hesitation was detected for responses made towards neutral targets when participants perceived the given position as masculine, implying a conscious attempt to make situational judgments (e.g., considering a candidate's personality and job fit) to override automatic processing (evaluations based on attractiveness). Future studies would look at how these findings differ for positions that are stereotypically masculine (e.g., surgeons) and stereotypically feminine (e.g., kindergarten teachers). The current findings have strong implications for developing bias-free hiring policies in the workplace, especially for organizations that maintain online/hybrid working arrangements in the post-pandemic era. This work also bridges the literature gap between face perception and gender discrimination, highlighting how engendered facial cues can affect an individual's career development and an organization's success in diversity and inclusion.

Keywords: engendered facial cues, face perception, gender stereotypes, hiring decisions, smiling, workplace discrimination

Procedia PDF Downloads 133