Search results for: central processing unit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8445

105 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density

Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita

Abstract:

Herein, the synthesis and characterization of a highly porous chitosan-gelatin scaffold reinforced with graphene oxide (GO) and hydroxyapatite (HAp) and crosslinked with genipin were targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to their intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, owing to its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability to sustain cellular proliferation by promoting a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class cross-linker, while graphene oxide is viewed as one of the most performant and versatile fillers. The composites, with the HAp/biopolymer ratio of natural bone, were obtained by cascading sonochemical treatments, followed by uncomplicated casting methods and freeze-drying. Their structure was characterized by Fourier Transform Infrared Spectroscopy (FTIR) and X-ray Diffraction (XRD), while overall morphology was investigated by Scanning Electron Microscopy (SEM) and micro-Computed Tomography (µ-CT). Following that, in vitro enzyme degradation was performed to detect the most promising compositions for the development of in vivo assays. Suitable GO dispersion within the biopolymer mix was ascertained, as nanolayer-specific signals were absent from both the FTIR and XRD spectra and the specific spectral features of the polymers persisted as the GO load increased. Overall, the GO-induced material structuration, crystallinity variations, and chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution within the scaffolds follows an auspicious density gradient tuned for hybrid osseous/cartilage matter architectures, which was mirrored in the mouse model tests. Hence, the synthesis route of this natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705 and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from European Regional Development Fund 'Investments for your future' under the project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to European Regional Development Fund through Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED.

Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite

Procedia PDF Downloads 104
104 Digital Health During a Pandemic: Critical Analysis of the COVID-19 Contact Tracing Apps

Authors: Mohanad Elemary, Imose Itua, Rajeswari B. Matam

Abstract:

Virologists and public health experts have been predicting potential pandemics from coronaviruses for decades. The viruses which caused the SARS and MERS outbreaks and the Nipah virus led to many lost lives, but still, the COVID-19 pandemic caused by the SARS-CoV-2 virus surprised many scientific communities, experts, and governments with its ease of transmission and its pathogenicity. Governments of various countries reacted by locking down entire populations in their homes to combat the devastation caused by the virus, which led to a loss of livelihood and economic hardship for many individuals and organizations. To revive national economies and support their citizens in resuming their lives, governments focused on the development and use of contact tracing apps as a digital way to track and trace exposure. Google and Apple introduced the Exposure Notifications System (ENS) framework. Independent organizations and countries also developed different frameworks for contact tracing apps. The efficiency, popularity, and adoption rates of these various apps have differed across countries. In this paper, we present a critical analysis of the different contact tracing apps with respect to their efficiency, adoption rate, and general perception, as well as the governmental strategies and policies which led to their development. Among European countries, each followed an individualistic approach to the same problem, resulting in different realizations of a similarly functioning application with differing levels of use and acceptance. The study conducted an extensive review of existing literature, policies, and reports across multiple disciplines, from which a framework was developed and then validated through interviews with six key stakeholders in the field, including founders and executives of digital health startups and corporates, as well as experts from international organizations such as the World Health Organization. A framework of best practices and tactics is the result of this research. The framework looks at three main questions regarding contact tracing apps: how to develop them, how to deploy them, and how to regulate them. The findings are based on the best practices applied by governments across multiple countries, the mistakes they made, and the best practices applied in similar situations in the business world. The findings include multiple strategies for the development milestone, covering how to establish frameworks for cooperation with the private sector and how to design the features and user experience of a transparent, effective, and rapidly adaptable app. For the deployment section, several tactics are discussed regarding communication messages, marketing campaigns, persuasive psychology, and initial deployment scale strategies. The paper also discusses the data privacy dilemma and how to build a more sustainable system of health-related data processing and utilization. This is done through principles-based regulation specific to health data that allows it to be used for the public good. This framework offers insights into strategies and tactics that could be implemented as protocols for future public health crises and emergencies, whether global or regional.

Keywords: contact tracing apps, COVID-19, digital health applications, exposure notification system

Procedia PDF Downloads 139
103 Additional Opportunities of Forensic Medical Identification of Dead Bodies of Unknown Persons

Authors: Saule Mussabekova

Abstract:

A number of chemical elements that are widespread in nature are seldom found in the human body, and vice versa. This reflects the selective accumulation of elements in the body, regardless of widely varying environmental parameters. Microelemental identification of human hair, particularly from dead bodies, is a new step in the development of modern forensic medicine, which needs reliable criteria for identifying a person. Given the long-standing technogenic pressure on large industrial cities and the multi-factor toxic effects, specific to each region, of many industrial enterprises, it is important to assess the relevance and role of studies of human hair in evaluating the degree of deposition of specific pollutants. Hair is a highly sensitive biological indicator that makes it possible to assess the ecological situation and to regionalize large territories by geochemical methods. Furthermore, monitoring the concentrations of chemical elements in the regions of Kazakhstan makes it possible to use these data in the forensic medical identification of the dead bodies of unknown persons. Methods based on identifying the chemical composition of hair, with further computer processing, allowed the data obtained to be compared with average values for sex and age, and causally significant deviations to be revealed. This makes it possible to preliminarily infer a person's region of residence, focusing police efforts in searching for people who are unaccounted for, and to conduct purposeful legal actions for further identification under a more optimal and strictly individual scheme of personal identification. Hair is the most suitable material for forensic research, as it offers advantages such as long-term storage with no time limitations and no need for special equipment. Moreover, the quantitative analysis of microelements correlates well with the level of environmental pollution, reflects occupational diseases, and helps with pinpoint accuracy not only to diagnose the region of a person's temporary residence but also to establish the regions of their migration. Peculiarities of the elemental composition of human hair have been established, regardless of age and sex, for persons residing in particular territories of Kazakhstan. Data on the average content of 29 chemical elements in the hair of the population in different regions of Kazakhstan have been systematized. Coefficients of concentration of the studied elements in hair, relative to average values for the region, have been calculated for each region, and groups of regions with a specific spectrum of elements have been identified; these elements accumulate in hair in quantities exceeding average indexes. Our results have shown significant differences in the concentrations of chemical elements between the studied groups and indicate that the population of Kazakhstan is exposed to different toxic substances, depending on the atmospheric emissions of the industrial enterprises dominating each separate region. The research performed has shown that the elemental composition of the hair of people residing in different regions of Kazakhstan reflects the technogenic spectrum of elements.
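
To make the coefficient-of-concentration step concrete, here is a minimal Python sketch; the element names and values are hypothetical, since the abstract reports no raw numbers, but the calculation (regional mean divided by a reference mean) is the one described above.

```python
# Illustrative sketch of the concentration-coefficient calculation described
# above: Kc = element concentration in a region's hair samples divided by the
# reference (e.g., national) average. All values below are hypothetical.

regional_mean = {"Zn": 180.0, "Cu": 11.5, "Pb": 4.2}   # µg/g, one region
reference_mean = {"Zn": 170.0, "Cu": 10.0, "Pb": 1.4}  # µg/g, reference values

for element, c_region in regional_mean.items():
    kc = c_region / reference_mean[element]
    flag = " <- accumulated above average" if kc > 1.5 else ""
    print(f"{element}: Kc = {kc:.2f}{flag}")
```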

Keywords: analysis of elemental composition of hair, forensic medical research of hair, identification of unknown dead bodies, microelements

Procedia PDF Downloads 142
102 From Faces to Feelings: Exploring Emotional Contagion and Empathic Accuracy through the Enfacement Illusion

Authors: Ilenia Lanni, Claudia Del Gatto, Allegra Indraccolo, Riccardo Brunetti

Abstract:

Empathy represents a multifaceted construct encompassing affective and cognitive components. Among these, empathic accuracy, defined as the ability to accurately infer another person's emotions or mental state, plays a pivotal role in fostering empathetic understanding. Emotional contagion, the automatic process through which individuals mimic and synchronize facial expressions, vocalizations, and postures, is considered a foundational mechanism for empathy. This embodied simulation enables shared emotional experiences and facilitates the recognition of others' emotional states, forming the basis of empathic accuracy. Facial mimicry, an integral part of emotional contagion, creates a physical and emotional resonance with others, underscoring its potential role in enhancing empathic understanding. Building on these findings, the present study explores how manipulating emotional contagion through the enfacement illusion impacts empathic accuracy, particularly in the recognition of complex emotional expressions. The enfacement illusion was implemented as a visuo-tactile multisensory manipulation, during which participants experienced synchronous and spatially congruent tactile stimulation on their own face while observing the same stimulation being applied to another person's face. This manipulation enhances facial mimicry, which is hypothesized to play a key role in improving empathic accuracy. Following the enfacement illusion, participants completed a modified version of the Diagnostic Analysis of Nonverbal Accuracy 2 - Adult Faces (DANVA2-AF). The task included 48 images of adult faces expressing happiness, sadness, or morphed emotions blending neutral with happiness or sadness to increase recognition difficulty. These images featured both familiar and unfamiliar faces, with familiar faces belonging to the actors involved in the prior visuo-tactile stimulation. Participants were required to identify the target's emotional state as either "happy" or "sad," with response accuracy and reaction times recorded. Results from this study indicate that emotional contagion, as manipulated through the enfacement illusion, significantly enhances empathic accuracy, particularly for the recognition of happiness. Participants demonstrated greater accuracy and faster response times in identifying happiness when viewing familiar faces compared to unfamiliar ones. These findings suggest that the enfacement illusion strengthens emotional resonance and facilitates the processing of positive emotions, which are inherently more likely to be shared and mimicked. Conversely, for the recognition of sadness, an opposite but non-significant trend was observed: participants were slightly faster at recognizing sadness in unfamiliar faces compared to familiar ones. This pattern suggests potential differences in how positive and negative emotions are processed within the context of facial mimicry and emotional contagion, warranting further investigation. These results provide insights into the role of facial mimicry in emotional contagion and its selective impact on empathic accuracy. This study highlights how the enfacement illusion can precisely modulate the recognition of specific emotions, offering a deeper understanding of the mechanisms underlying empathy.

Keywords: empathy, emotional contagion, enfacement illusion, emotion recognition

Procedia PDF Downloads 12
101 Relationship Between Brain Entropy Patterns Estimated by Resting State fMRI and Child Behaviour

Authors: Sonia Boscenco, Zihan Wang, Euclides José de Mendoça Filho, João Paulo Hoppe, Irina Pokhvisneva, Geoffrey B.C. Hall, Michael J. Meaney, Patricia Pelufo Silveira

Abstract:

Entropy can be described as a measure of the number of states of a system; when applied to physiological time-based signals, it serves as a measure of complexity. In functional connectivity data, entropy can account for the moment-to-moment variability that is neglected in traditional functional magnetic resonance imaging (fMRI) analyses. While brain fMRI resting-state entropy has been associated with some pathological conditions such as schizophrenia, no investigations have explored the association between brain entropy measures and individual differences in child behavior in healthy children. We describe a novel exploratory approach to evaluating brain fMRI resting-state data in two child cohorts, MAVAN (N = 54, 4.5 years, 48% males) and GUSTO (N = 206, 4.5 years, 48% males), and its associations with child behavior, which can be used in future research in the context of child exposures and long-term health. Following rs-fMRI data pre-processing and Shannon entropy calculation across 32 network regions of interest, yielding 496 unique functional connections, partial correlation coefficient analysis adjusted for sex was performed to identify associations between entropy data and the Strengths and Difficulties Questionnaire in MAVAN and Child Behavior Checklist domains in GUSTO. Significance was set at p < 0.01, and we found eight significant associations in GUSTO. Negative associations were found between two frontoparietal-cerebellar posterior connections and oppositional defiant problems (r = -0.212, p = 0.006 and r = -0.200, p = 0.009). Positive associations were identified between somatic complaints and four default mode connections: salience insula (r = 0.202, p < 0.01), dorsal attention intraparietal sulcus (r = 0.231, p = 0.003), language inferior frontal gyrus (r = 0.207, p = 0.008) and language posterior superior temporal gyrus (r = 0.210, p = 0.008). Positive associations were also found between an insula-frontoparietal connection and attention deficit / hyperactivity problems (r = 0.200, p < 0.01), and an insula-default mode connection and pervasive developmental problems (r = 0.210, p = 0.007). In MAVAN, ten significant associations were identified. Two positive associations were found with prosocial scores: the salience prefrontal cortex-dorsal attention connection (r = 0.474, p = 0.005) and the salience supramarginal gyrus-dorsal attention intraparietal sulcus connection (r = 0.447, p = 0.008). The insula-prefrontal connection was negatively associated with peer problems (r = -0.437, p < 0.01). Conduct problems were negatively associated with six separate connections, including the left salience insula-right salience insula (r = -0.449, p = 0.008), left salience insula-right salience supramarginal gyrus (r = -0.512, p = 0.002), default mode-visual network (r = -0.444, p = 0.009), dorsal attention-language network (r = -0.490, p = 0.003), and default mode-posterior parietal cortex (r = -0.546, p = 0.001) connections. Entropy measures of resting-state functional connectivity can be used to identify individual differences in brain function that are correlated with variation in behavioral problems in healthy children. Further studies applying this marker in the context of environmental exposures are warranted.
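
For readers who want to reproduce the entropy step, the following is a minimal Python sketch. The abstract does not specify the binning scheme or other pipeline details, so those are assumptions; the connection count, however, follows directly from the 32 regions of interest.

```python
import numpy as np
from math import comb

# 32 regions of interest give C(32, 2) = 496 unique functional connections.
assert comb(32, 2) == 496

def shannon_entropy(signal, bins=16):
    """Shannon entropy (in bits) of a 1-D signal via histogram binning.
    The binning scheme is an assumption; the abstract does not specify it."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # ignore empty bins
    return -np.sum(p * np.log2(p))

# Hypothetical example: entropy of one simulated BOLD time series.
rng = np.random.default_rng(0)
ts = rng.standard_normal(200)         # 200 time points
print(f"entropy = {shannon_entropy(ts):.3f} bits")
```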

Keywords: child behaviour, functional connectivity, imaging, Shannon entropy

Procedia PDF Downloads 204
100 Membrane Technologies for Obtaining Bioactive Fractions from Blood Main Protein: An Exploratory Study for Industrial Application

Authors: Fatima Arrutia, Francisco Amador Riera

Abstract:

The meat industry generates large volumes of blood as a result of meat processing. Several industrial procedures have been implemented to treat this by-product, but they focus on the production of low-value products, and in many cases blood is simply discarded as waste. Besides economic interests, there is an environmental concern due to bloodborne pathogens and other chemical contaminants found in blood. Consequently, there is a dire need to find extensive uses for blood that are both applicable at industrial scale and able to yield high value-added products. Blood has been recognized as an important source of protein, and the main blood serum protein in mammals is serum albumin. One of the top trends in the food market is functional foods; among them, bioactive peptides can be obtained from protein sources by microbiological fermentation or by enzymatic and chemical hydrolysis. Bioactive peptides are short amino acid sequences that can have a positive impact on health when administered. The main drawback of bioactive peptide production is the high cost of the isolation, purification and characterization techniques (such as chromatography and mass spectrometry), which makes scale-up unaffordable. Membrane technologies, on the other hand, are well suited to industrial application because they scale up easily and are low-cost compared to other traditional separation methods. In this work, the possibility of obtaining bioactive peptide fractions from serum albumin by a simple two-step procedure (hydrolysis and membrane filtration) was evaluated, as an exploratory study for possible industrial application. Firstly, a tryptic hydrolysis of serum albumin was performed to release the peptides from the protein; the protein was previously subjected to a thermal treatment to enhance enzyme cleavage and thus the peptide yield. The resulting hydrolysate was then filtered through a nanofiltration/ultrafiltration flat rig at three different pH values with two different membrane materials, so as to compare membrane performance. The corresponding permeates were analyzed by liquid chromatography-tandem mass spectrometry to obtain the peptide sequences present in each permeate. Finally, different concentrations of every permeate were evaluated for their in vitro antihypertensive and antioxidant activities through ACE-inhibition and DPPH radical scavenging tests. The hydrolysis process with the prior thermal treatment achieved a degree of hydrolysis of 49.66% of the maximum possible. It was found that peptides were best transmitted to the permeate stream at pH values corresponding to their isoelectric points, and the best selectivity between peptide groups was achieved at basic pH values. Differences in peptide content were found between membranes and also between pH values for the same membrane. The antioxidant activity of all permeates was high compared with the control only at the highest dose, whereas antihypertensive activity was best at intermediate concentrations rather than at higher or lower doses. Therefore, despite differences between them, all permeates were promising with regard to antihypertensive and antioxidant properties.

Keywords: bioactive peptides, bovine serum albumin, hydrolysis, membrane filtration

Procedia PDF Downloads 200
99 Method of Nursing Education: History Review

Authors: Cristina Maria Mendoza Sanchez, Maria Angeles Navarro Perán

Abstract:

Introduction: Nursing as a profession, from its initial formation and its subsequent development in practice, has been built and identified mainly through its technical competence and professionalization within the positivist approach of the XIX century, which provides a conception of disease built on the biomedical paradigm, where the care provided focuses more on physiological processes and the disease than on the suffering person understood as a whole. The main issue in need of study here is a review of the nursing profession's history in order to learn what the profession was like before the XIX century. It is unclear whether there were organizations or people with knowledge about looking after others, or whether many people simply survived by chance. Holistic care, in which the appearance of disease directly affects all of a person's dimensions (physical, emotional, cognitive, social and spiritual), is not a concept from the 21st century; it has most probably been common practice since life was established in this world, with the final purpose of covering all these perspectives through quality care. Objective: In this paper, we describe and analyze the history of nursing education, reviewing and analyzing the theoretical foundations of clinical teaching and learning in nursing, with the final purpose of determining and describing the development of the nursing profession throughout history. Method: We conducted a descriptive systematic review, systematically searching for manuscripts and articles in the following health science databases: PubMed, Scopus, Web of Science, Temperamentvm and CINAHL. Articles were selected according to PRISMA criteria, with a critical reading of the full texts using the CASPe method. As a complement, we read a range of historical and contemporary sources to support the review, such as the manuals of Florence Nightingale and Saint John of God, as primary manuscripts establishing the origin of modern nursing and its professionalization. Ethical considerations of data processing were considered and applied. Results: After applying inclusion and exclusion criteria to our searches in PubMed, Scopus, Web of Science, Temperamentvm and CINAHL, we obtained 51 research articles, which we classified by year of publication and type of study. From the articles obtained, we can see the importance of the profession's background in public health before modern times, and the value of reviewing our past to face challenges in the near future. Discussion: The important influence of key figures other than Nightingale has been overlooked, and it emerges that nursing management and the development of the professional body have a longer and more complex history than is generally accepted. Conclusions: There is a paucity of studies on the subject of this review, making it difficult to extract precise evidence and recommendations about nursing before modern times. Even so, and more representatively, an increase in research on nursing history has been observed. In light of the aspects analyzed, the need for new research on the history of nursing emerges from this perspective, in order to seed studies of the historical construction of care before the XIX century and of the theories created since. We can state that knowledge and ways of caring were taught before the XIX century, but they were not called theories, as that concept was created in modern times.

Keywords: nursing history, nursing theory, Saint John of God, Florence Nightingale, learning, nursing education

Procedia PDF Downloads 116
98 Ethanolamine Detection with Composite Films

Authors: S. A. Krutovertsev, A. E. Tarasova, L. S. Krutovertseva, O. M. Ivanova

Abstract:

The aim of this work was to obtain stable films with good sensitivity to ethanolamine (C2H7NO) in air. Ethanolamine is used as an adsorbent in different gas purification and separation processes, and it also has wide industrial application. Chemical sensors of the sorption type are widely used for gas analysis; their behavior is determined by the characteristics of the sensitive sorption layer. The forming conditions and characteristics of chemical gas sensors based on nanostructured modified silica films activated by different admixtures have been studied. Molybdenum-containing polyoxometalates of the 18 series were incorporated into the silica films as additives. The films were formed by hydrolytic polycondensation from tetraethyl orthosilicate solutions. The method's advantage is the possibility of introducing active additives directly into the initial solution, and it enables sensitive thin films with a high specific surface area to be obtained at room temperature. Their particular properties make polyoxometalates attractive as active additives for forming gas-sensitive films: as catalysts of different redox processes, they can either accelerate the reaction of the matrix with the analyzed gas or interact with it directly, which results in changes in the matrix's electrical properties. Polyoxometalate-based films were deposited on test structures manufactured by planar microelectronic technology with interdigitated electrodes. Modified silica films were deposited by a casting method from solutions based on tetraethyl orthosilicate and polyoxometalates, with the polyoxometalates incorporated directly into the initial solutions. The composite nanostructured films were drop-cast onto test structures with a pair of interdigital metal electrodes formed on their surface. The sensor's active area was 4.0 x 4.0 mm, and the electrode gap was 0.08 mm. The morphology of the layer surfaces was studied with a Solver P47 scanning probe microscope (NT-MDT, Russia), and the infrared spectra were investigated with a Bruker EQUINOX 55 spectrometer (Germany). The film-formation conditions were varied during the tests, and the electrical parameters of the sensors were measured electronically in real time. The films had a highly developed surface, with a specific surface area of 450 m²/g and nanoscale pores; their thickness was 0.2-0.3 µm. The study shows that environmental conditions markedly affect the sensor characteristics, which can be improved by choosing the right forming and processing procedure. The addition of polyoxometalate to the silica film stabilized the film mass and markedly changed its electrophysical characteristics. The presence of Mn3P2Mo18O62 in the silica film resulted in good sensitivity and selectivity to ethanolamine, with the maximum sensitivity observed at a doping-additive weight content of 30-50% in the matrix. As the ethanolamine concentration changed from 0 to 100 ppm, the films' conductivity increased by a factor of 10-12. The increased sensitivity is attributed to a complexation reaction of the tested substance with the cationic part of the polyoxometalate, which triggers an intramolecular redox reaction that sharply changes the electrophysical properties of the polyoxometalate. This process is reversible and takes place at room temperature.

Keywords: ethanolamine, gas analysis, polyoxometalate, silica film

Procedia PDF Downloads 212
97 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classrooms. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is needed to design resources which enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM Education scholars from three US universities (NSF award 1540888), using mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft were exported as an Excel file, with 80 each of 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were stored in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, the spreadsheets with measurement data and complexity data were uploaded to RapidMiner's Turbo Prep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of its predictions were accurate. The model determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, a region important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements analyzed with this model, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
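
As an illustration, the reported model can be re-created in scikit-learn. This is a sketch only: the authors used RapidMiner Studio, Gradient Boosted Trees is strictly a tree-ensemble learner, and the feature values below are synthetic stand-ins for the fNIR dataset.

```python
# Illustrative re-creation of the reported model in scikit-learn (the authors
# used RapidMiner Studio; the features and data here are hypothetical).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Assumed features per the abstract: problem number, response time, optode #16.
X = np.column_stack([
    rng.integers(1, 161, n),      # problem number (80 dash + 80 BL items)
    rng.normal(3.0, 1.0, n),      # response time in seconds
    rng.normal(0.0, 1.0, n),      # optode #16 hemodynamic response
])
y = rng.integers(0, 2, n)         # correct / incorrect mental rotation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# 140 trees with a maximum depth of 7, as reported in the abstract.
model = GradientBoostingClassifier(n_estimators=140, max_depth=7)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```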

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 121
96 Acute Severe Hyponatremia in Patient with Psychogenic Polydipsia, Learning Disability and Epilepsy

Authors: Anisa Suraya Ab Razak, Izza Hayat

Abstract:

Introduction: The diagnosis and management of severe hyponatremia in neuropsychiatric patients present a significant challenge to physicians. Several factors contribute, including diagnostic overshadowing and attributing abnormal behavior to intellectual disability or psychiatric conditions. Hyponatremia is the commonest electrolyte abnormality in the inpatient population, ranging from mild/asymptomatic and moderate forms to severe levels with life-threatening symptoms such as seizures, coma and death. There are several documented fatal case reports in the literature of severe hyponatremia secondary to psychogenic polydipsia, often diagnosed only at autopsy. This paper presents a case study of acute severe hyponatremia in a neuropsychiatric patient with early diagnosis and admission to intensive care. Case study: A 21-year-old Caucasian male with known epilepsy and learning disability was admitted from residential living with generalized tonic-clonic self-terminating seizures after refusing medications for several weeks. Evidence of superficial head injury was detected on physical examination. His laboratory data demonstrated mild hyponatremia (125 mmol/L). Computed tomography imaging of his brain demonstrated no acute bleed or space-occupying lesion. He exhibited abnormal behavior: restlessness, drinking water from bathroom taps, inability to engage, paranoia, and hypersexuality. No collateral history was available to establish his baseline behavior. He was loaded with intravenous sodium valproate and levetiracetam. Three hours later, he developed vomiting and a generalized tonic-clonic seizure lasting forty seconds. He remained drowsy for several hours with only minimal recovery of consciousness. A repeat set of blood tests demonstrated profound hyponatremia (117 mmol/L). Outcomes: He was referred to intensive care for peripheral intravenous infusion of 2.7% sodium chloride solution with two-hourly laboratory monitoring of sodium concentration. Laboratory monitoring identified a dangerously rapid correction of serum sodium concentration, and the hypertonic saline was switched to a 5% dextrose solution to reduce the risk of acute large-volume fluid shifts from the cerebral intracellular compartment to the extracellular compartment. He underwent urethral catheterization and produced 8 liters of urine over 24 hours. Serum sodium concentration remained stable after 24 hours of correction fluids. His Glasgow Coma Scale score recovered to baseline after 48 hours, with improvement in behavior: he engaged with healthcare professionals, understood the importance of taking medications, and admitted to illicit drug use and drinking massive amounts of water. He was transferred from high-dependency care to ward level and was initiated on multiple trials of anti-epileptics before achieving seizure-free days two weeks after resolution of the acute hyponatremia. Conclusion: Psychogenic polydipsia is often found in young patients with intellectual disability or psychiatric disorders. Patients drink large volumes of water daily, ranging from ten to forty liters, resulting in acute severe hyponatremia with mortality rates as high as 20%. Poor outcomes are due to the challenges physicians face in making an early diagnosis and treating acute hyponatremia safely. A high index of suspicion of water intoxication is required in this population, including in patients with known epilepsy. Monitoring urine output proved to be clinically effective in aiding diagnosis. Early referral and admission to intensive care should be considered for safe correction of sodium concentration while minimizing the risk of fatal complications, e.g., central pontine myelinolysis.

Keywords: epilepsy, psychogenic polydipsia, seizure, severe hyponatremia

Procedia PDF Downloads 123
95 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized, the reason being insufficient resources to create and implement timing plans. In this work, we discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and to collect accurate traffic data 24/7/365 using a vehicle detection system. We discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for doing so. This paper also showcases how Artificial Intelligence makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes of the visual cortex. A neural net is modeled after the human brain: it consists of millions of densely connected processing nodes, and it is a form of machine learning in which the network learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns and generate an unlimited number of timing plans, thus improving vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies or data to support their decisions; currently, one-third of transportation agencies do not collect pedestrian and bike data. We discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited hand-picked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in this research include a camera-model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on data sets acquired under a variety of daily real-world road conditions and compared with the performance of commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit communities and how to translate the complex and often overwhelming benefits into language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
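
As a minimal illustration of the kind of network described, the following PyTorch sketch defines a small CNN that classifies road users. The architecture, input size, and class set are assumptions; the production system described above is not public.

```python
# Minimal sketch of a convolutional neural network for classifying road users
# (vehicle / pedestrian / bike). Architecture and input size are assumptions.
import torch
import torch.nn as nn

class RoadUserCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = RoadUserCNN()
frame_batch = torch.randn(8, 3, 64, 64)          # 8 dummy RGB camera frames
logits = model(frame_batch)                      # one score per class
print(logits.shape)                              # -> torch.Size([8, 3])
```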

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 172
94 Modern Day Second Generation Military Filipino Amerasians and Ghosts of the U.S. Military Prostitution System in West Central Luzon's 'AMO Amerasian Triangle'

Authors: P. C. Kutschera, Elena C. Tesoro, Mary Grace Talamera-Sandico, Jose Maria G. Pelayo III

Abstract:

Second generation military Filipino Amerasians comprise a formidable contemporary segment of the estimated 250,000-plus biracial Amerasians in the Philippines today. Overall, they are a stigmatized and socioeconomically marginalized diaspora; historically, they were abandoned or estranged by U.S. military personnel fathers assigned during the century-long Colonial, Post-World War II and Cold War era of permanent military basing (1898-1992). Indeed, U.S. military personnel remain stationed in smaller numbers in the Philippines today. This inquiry is an outgrowth of two recent small-sample studies. The first surfaced the impact of the U.S. military prostitution system on the formation of the 'Derivative Amerasian Family Construct' among first generation Amerasians; a second, qualitative case study suggested the continued effect of the prostitution system's destructive impetus on second generation Amerasians. The intent of the current qualitative, multiple-case study was to actively seek out second generation sex industry toilers, in order to focus further on this human phenomenon in the post-basing and post-military prostitution system eras. As background, the former military prostitution apparatus has transformed into a modern dynamic of rampant sex tourism and prostitution nationwide, characterized by hotels and resorts offering unrestricted carnal access, urban and provincial brothels (casas), discos, bars and pickup clubs, massage parlors, local barrio karaoke bars and street prostitution. A small case study sample (N = 4) of female and male second generation Amerasians was selected. Sample formation employed a non-probability 'snowball' technique, drawing respondents from the notorious Angeles, Metro Manila, Olongapo City 'AMO Amerasian Triangle', where most former U.S. military installations were sited and where modern sex tourism thrives. A six-month study and analysis of in-depth interviews of female and male sex laborers, their families and peers revealed a litany of disturbing and troublesome experiences. Results showed profiles of debilitating human poverty, histories of family disorganization, stigmatization, social marginalization, and the ghost of the military prostitution system and its harmful legacy on Amerasian family units. Emerging were testimonials of wayward young people ensnared in a maelstrom of deep economic deprivation, familial dysfunction, psychological desperation and societal indifference. The paper recommends further study, noting that the unstudied psychosocial and socioeconomic experiences of distressed younger generations of military Amerasians require specific research. Heretofore apathetic or disengaged U.S. institutions need to confront the issue and formulate activist and solution-oriented social welfare, human services and immigration easement policies and alternatives. These institutions specifically include academic and social science research agencies, corporate foundations, the U.S. Congress, and the Departments of State, Defense, Health and Human Services, and Homeland Security (i.e., Citizenship and Immigration Services). It is they who continue to endorse a laissez-faire policy of non-involvement over the entire Filipino Amerasian question. Such apathy, the paper concludes, relegates this consequential but neglected blood progeny to a status of humiliating destitution and exploitation. Amerasians thus remain entrapped in their former colonial and neo-colonial habitat and are, ironically, unwitting victims of a U.S. American homeland that fancies itself geo-politically a strong and strategic military treaty ally of the Philippines in the Western Pacific.

Keywords: Asian Americans, diaspora, Filipino Amerasians, military prostitution, stigmatization

Procedia PDF Downloads 489
93 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems and provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), with suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-1980s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on the design, development and implementation processes for all types of DSS, and Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to web-based environments. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment, ensuring that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers; a common usage has been to assist customers in configuring products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems, including intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To fill this gap, this paper surveys recent articles on DSS. The literature has been reviewed in depth and, by classifying previous studies according to their emphases, a taxonomy for DSS has been prepared. With the aid of this taxonomic review and recent developments in the field, the study aims to analyze future trends in decision support systems.

Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review

Procedia PDF Downloads 280
92 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools

Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang

Abstract:

For the past few years, the Philippines has depended on oil, coal, and other fossil fuels for most of its energy. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The country's expanding energy needs have led to increasing efforts to promote and develop renewable energy. This research is part of the government's initiative in preparation for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to put it into usable maps. This study focuses on site suitability analysis for four renewable energy sources: biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. Site assessment is a key component in determining the most suitable locations for the construction of renewable energy power plants. The method makes full use of technical resource-assessment methods while also taking into account environmental, social, and accessibility aspects in identifying potential sites, by utilizing and integrating two different approaches: the Multi-Criteria Decision Analysis (MCDA) method and Geographic Information System (GIS) tools. For the MCDA, the Analytic Hierarchy Process (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site-suitability parameters, various experts from different fields were consulted: scientists, policy makers, environmentalists, and industrialists. A well-represented group of consultees is needed to avoid bias in the resulting hierarchy levels and weight matrices. AHP pairwise matrix computation is utilized to derive the weights per level from the experts' gathered feedback (as sketched below), while threshold values derived from related literature, international studies, and government laws were reviewed with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision-support outputs into visual maps. In particular, this study uses Euclidean distance to compute distance values for each parameter, a fuzzy membership algorithm to normalize the Euclidean distance output, and the Weighted Overlay tool to aggregate the layers. Using the natural breaks (Jenks) algorithm, the suitability ratings of each map are classified into 5 discrete categories of suitability index: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, classes are grouped so that similar values fall together, with each subdivision set apart from the rest where the differences in boundary values are largest. Results show that in the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice the most suitable at a 75.76% suitability percentage, whereas wind has the least, with a score of 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%, respectively.
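
The AHP weight-derivation step referenced above can be sketched as follows: weights are taken from the principal eigenvector of the pairwise comparison matrix, with Saaty's consistency ratio as a sanity check. The 3x3 matrix below is hypothetical; the study's actual matrices came from expert consultation.

```python
# Sketch of AHP weight derivation via the principal eigenvector, with the
# standard consistency check. The 3x3 pairwise matrix below is hypothetical
# (e.g., slope vs. road distance vs. land cover).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])       # reciprocal pairwise comparisons

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized criterion weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
print("weights:", w.round(3), "CR =", round(ci / ri, 3))  # CR < 0.1 is acceptable
```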

Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS

Procedia PDF Downloads 151
91 Study of the Association between Salivary Microbiological Data, Oral Health Indicators, Behavioral Factors, and Social Determinants among Post-COVID Patients Aged 7 to 12 Years in Tbilisi City

Authors: Lia Mania, Ketevan Nanobashvili

Abstract:

Background: The coronavirus disease COVID-19 became the cause of a global health crisis during the recent pandemic. This study aims to fill the paucity of epidemiological studies on the impact of COVID-19 on the oral health of pediatric populations. Methods: An observational, cross-sectional study was conducted in Tbilisi (the capital of Georgia) among 7- to 12-year-old, PCR- or rapid-test-confirmed post-COVID populations in all districts of the city (10 districts in total). 332 beneficiaries who had been infected with COVID-19 within the previous year were included in the study. The population was selected in schools of Tbilisi according to the principle of cluster selection, with simple random selection within the selected clusters; on this principle, an equal number of beneficiaries was selected in every district of Tbilisi. By July 1, 2022, according to National Center for Disease Control and Public Health data (NCDC.Ge), the number of test-confirmed cases in the population aged 0-18 in Tbilisi was 115,137 children (17.7% of all confirmed cases). The number of patients to be examined was determined by the sample size calculation. Oral screening, microbiological examination of saliva, and administration of oral health questionnaires to guardians were performed. Statistical processing of the data was done with SPSS-23; risk factors were estimated by odds ratio and logistic regression with 95% confidence intervals. Results: Statistically reliable differences between the mean oral health indicators of the asymptomatic and symptomatic COVID-infected groups were found: for caries intensity (DMF+def), t=4.468, p=0.000; for the modified gingival index (MGI), t=3.048, p=0.002; for the simplified oral hygiene index (S-OHI), t=4.853, p=0.000. Symptomatic COVID infection has a reliable effect on the oral microbiome (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis) (n=332; 77.3% vs. 58.0%; OR=2.46, 95% CI: 1.318-4.617). Logistic regression showed that the severity of the COVID infection has a significant effect on the frequency of pathogenic and conditionally pathogenic bacteria in the oral cavity: B=0.903, AOR=2.467 (95% CI 1.318-4.617). Symptomatic COVID infection affects oral health indicators regardless of the presence of other risk factors, such as parental employment status, tooth-brushing behaviors, carbohydrate meals, and fruit consumption (p<0.05). Conclusion: Risk factors (parental employment status, tooth-brushing behaviors, carbohydrate consumption) were associated with poorer oral health status in a post-COVID population of 7- to 12-year-old children. However, symptomatic COVID infection affected the oral microbiome in terms of the abundant growth of pathogenic and conditionally pathogenic bacteria (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis) and further worsened oral health indicators. Thus, a close association was established between symptomatic COVID infection and microbiome changes in the post-COVID period, and also between oral health indicator variables and the symptomatic course of COVID infection.
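
As a quick arithmetic check of the reported figures (our own calculation, not from the paper): the adjusted odds ratio is the exponential of the logistic regression coefficient, and the confidence bounds are symmetric around B on the log scale.

```python
# The reported adjusted odds ratio is exp(B) of the logistic regression
# coefficient: exp(0.903) = 2.467, matching the AOR reported above.
import math

B = 0.903
print(round(math.exp(B), 3))                    # -> 2.467

lo, hi = 1.318, 4.617                           # reported 95% CI for the AOR
se = (math.log(hi) - math.log(lo)) / (2 * 1.96) # implied standard error of B
print(round(se, 3))                             # -> 0.32
```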

Keywords: oral microbiome, COVID-19, population based research, oral health indicators

Procedia PDF Downloads 70
90 Explosive Clad Metals for Geothermal Energy Recovery

Authors: Heather Mroz

Abstract:

Geothermal fluids can provide a nearly unlimited source of renewable energy but are often highly corrosive due to dissolved carbon dioxide (CO2), hydrogen sulphide (H2S), ammonia (NH3) and chloride ions. The corrosive environment drives material selection for many components, including piping, heat exchangers and pressure vessels, toward higher alloys of stainless steel, nickel-based alloys and titanium. Solid construction in these alloys is cost-prohibitive and does not offer the pressure rating of carbon steel. One solution, explosion cladding, has been proven to reduce the capital cost of geothermal equipment while retaining the mechanical and corrosion properties of both the base metal and the cladding metal. Explosion cladding is a solid-state welding process that uses precision explosions to bond two dissimilar metals while retaining the mechanical, electrical and corrosion properties of each. The process is commonly used to clad steel with a thin layer of corrosion-resistant alloy, such as stainless steel, brass, nickel, silver, titanium, or zirconium. Explosion welding can also join a wide array of compatible and non-compatible metals, with more than 260 metal combinations possible. The explosion weld is achieved in milliseconds; therefore, no bulk heating occurs, and the metals experience no dilution. By adhering to a strict set of manufacturing requirements, both the shear strength and the tensile strength of the bond will exceed the strength of the weaker metal, ensuring the reliability of the bond. For over 50 years, explosion cladding has been used in the oil and gas and chemical processing industries and has provided significant economic benefit in reduced maintenance and lower capital costs compared with solid construction. The focus of this paper is the many benefits of using explosion clad in process equipment instead of more expensive solid alloy construction. It covers the method of clad-plate production by explosion welding and the methods employed to ensure sound bonding of the metals, and it also includes the origins of explosion cladding as well as recent technological developments. Traditionally, explosion-clad plate was formed into vessels, tube sheets and heads, but recent advances include explosion-welded piping. The final portion of the paper gives examples of the use of explosion-clad metals in geothermal energy recovery. The classes of materials used for geothermal brine are discussed, including stainless steels, nickel alloys and titanium, with examples covering heat exchangers (tube sheets), high-pressure and horizontal separators, standard-pressure crystallizers, piping and well casings. It is important to educate engineers and designers on material options as they develop equipment for geothermal resources. Explosion cladding is a niche technology that can be successful in many situations, like geothermal energy recovery, where high temperatures, high pressures and corrosive environments are typical. Applications for explosion-clad metals include vessel and heat exchanger components as well as piping.

Keywords: clad metal, explosion welding, separator material, well casing material, piping material

Procedia PDF Downloads 154
89 Low-carbon Footprint Diluents in Solvent Extraction for Lithium-ion Battery Recycling

Authors: Abdoulaye Maihatchi Ahamed, Zubin Arora, Benjamin Swobada, Jean-Yves Lansot, Alexandre Chagnes

Abstract:

Lithium-ion battery (LiB) is the technology of choice in the development of electric vehicles, but there are still many challenges, including the development of positive electrode materials exhibiting high cyclability, high energy density, and low environmental impact. To address the latter, LiBs must be manufactured in a circular approach by developing appropriate strategies to reuse and recycle them. Presently, the recycling of LiBs is carried out by the pyrometallurgical route, but more and more processes implement, or will implement, the hydrometallurgical route or a combination of pyrometallurgical and hydrometallurgical operations. After producing the black mass by mineral processing, the hydrometallurgical process consists of leaching the black mass in order to recover the metals contained in the cathodic material. These metals are then extracted selectively by liquid-liquid extraction, solid-liquid extraction, and/or precipitation stages. Liquid-liquid extraction combined with precipitation/crystallization steps is the most widely implemented operation in LiB recycling processes, used to selectively extract copper, aluminum, cobalt, nickel, manganese, and lithium from the leaching solution and to precipitate these metals as high-grade sulfate or carbonate salts. Liquid-liquid extraction consists of contacting an organic solvent with an aqueous feed solution containing several metals, including the targeted metal(s) to extract. The organic phase is immiscible with the aqueous phase; it is composed of an extractant to extract the target metals and a diluent, which is usually aliphatic kerosene produced by the petroleum industry. Sometimes a phase modifier is added to the formulation of the extraction solvent to avoid third-phase formation. The extraction properties of the solvent depend not only on the chemical structure of the extractant but also on the nature of the diluent: diluent-diluent interactions can influence the interactions between extractant molecules, in addition to the extractant-diluent interactions themselves. Only a few studies in the literature have addressed the influence of the diluent on the extraction properties, while many have focused on the effect of the extractants. Recently, new low-carbon-footprint aliphatic diluents were produced by catalytic dearomatisation and distillation of bio-based oil. This study aims to investigate the influence of the nature of the diluent on the extraction properties of three extractants towards cobalt, nickel, manganese, copper, aluminum, and lithium: Cyanex®272 for nickel-cobalt separation, DEHPA for manganese extraction, and Acorga M5640 for copper extraction. The diluents used in the formulation of the extraction solvents are (i) low-odor aliphatic kerosenes produced by the petroleum industry (ELIXORE 180, ELIXORE 230, ELIXORE 205, and ISANE IP 175) and (ii) bio-sourced aliphatic diluents (DEV 2138, DEV 2139, DEV 1763, DEV 2160, DEV 2161 and DEV 2063). After discussing the effect of the diluents on the extraction properties, this paper will address the development of a low-carbon-footprint process based on the use of the best bio-sourced diluent for the production of high-grade cobalt sulfate, nickel sulfate, manganese sulfate, and lithium carbonate, as well as copper metal.
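Diluent screening of this kind is typically compared through two quantities: the distribution ratio D of each metal between the organic and aqueous phases, and the separation factor between two metals. The following minimal sketch computes both from hypothetical aqueous assays before and after contact; the numbers are illustrative assumptions, not the study's data.

```python
def distribution_ratio(c_aq_initial: float, c_aq_final: float,
                       phase_ratio: float = 1.0) -> float:
    """D = [M]org / [M]aq, assuming all extracted metal reports to the organic phase."""
    c_org = (c_aq_initial - c_aq_final) * phase_ratio
    return c_org / c_aq_final

# hypothetical aqueous assays (g/L) for a Cyanex 272-type Co/Ni separation
d_co = distribution_ratio(5.0, 0.25)   # cobalt strongly extracted
d_ni = distribution_ratio(5.0, 4.60)   # nickel largely rejected
print(f"D(Co) = {d_co:.1f}, D(Ni) = {d_ni:.3f}")
print(f"separation factor beta(Co/Ni) = {d_co / d_ni:.0f}")
```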

Keywords: diluent, hydrometallurgy, lithium-ion battery, recycling

Procedia PDF Downloads 88
88 Comprehensive Analysis of RNA m5C Regulator ALYREF as a Suppressive Factor of Anti-Tumor Immunity and a Potential Tumor Prognostic Marker in Pan-Cancer

Authors: Yujie Yuan, Yiyang Fan, Hong Fan

Abstract:

Objective: The RNA methylation recognition protein Aly/REF export factor (ALYREF) is considered a "reader" protein that recognizes m5C and has been reported to be involved in several biological processes, including cancer initiation and progression. 5-methylcytosine (m5C) is a conserved and prevalent RNA modification in all species, and accumulating evidence suggests its role in promoting tumorigenesis. It has been claimed that ALYREF mediates the nuclear export of mRNA with m5C modification and regulates biological effects in cancer cells. However, the systematic regulatory pathways of ALYREF in cancer tissues have not yet been clarified. Methods: The expression level of ALYREF in pan-cancer and corresponding normal tissues was compared using data acquired from The Cancer Genome Atlas (TCGA). The University of Alabama at Birmingham Cancer data analysis portal (UALCAN) was used to analyze the relationship between ALYREF and clinicopathological features. The relationship between ALYREF expression and the prognosis of pan-cancer, as well as the genes correlated with ALYREF, were determined using the gene expression analysis database GEPIA. Immune-related genes were obtained from TISIDB (an integrated repository portal for tumor-immune system interactions). Immune-related analyses were conducted using Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data (ESTIMATE) and TIMER. Results: Based on the data acquired from TCGA, ALYREF shows markedly higher expression in various types of cancers compared with the relevant normal tissues, excluding thyroid carcinoma and kidney chromophobe. Immunohistochemical images from The Human Protein Atlas show that ALYREF can be detected in the cytoplasm and membrane but is mainly located in the nucleus. In addition, a higher expression level of ALYREF in tumor tissue is associated with poor prognosis in the majority of cancers. Based on these results, cancers with higher ALYREF expression than normal tissue and a significant correlation between ALYREF and prognosis were selected for further analysis. Using TISIDB, we found that a portion of ALYREF co-expressed genes with high Pearson correlation coefficients (PCC), such as BIRC5, H2AFZ, CCDC137, TK1, and PPM1G, are involved in anti-tumor immunity or affect resistance or sensitivity to T cell-mediated killing. Furthermore, results from GEPIA showed a significant correlation between ALYREF and PD-L1, and a negative correlation was found between ALYREF expression and the ESTIMATE score. Conclusion: The present study indicates that ALYREF plays a vital and universal role in cancer initiation and progression in pan-cancer by regulating mitotic progression, DNA synthesis, metabolic processes, and RNA processing. The correlation between ALYREF and PD-L1 implies that ALYREF may affect the therapeutic effect of tumor immunotherapy, and further evidence suggests that ALYREF may play an important role in tumor immunomodulation. The correlation between ALYREF and immune cell infiltration levels indicates that ALYREF may be a potential therapeutic target. Exploring the regulatory mechanism of ALYREF in tumor tissues may reveal reasons for the poor efficacy of immunotherapy and offer new directions for tumor treatment.
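The co-expression screen described above reduces to computing Pearson correlation coefficients between ALYREF and candidate genes across tumor samples. The hedged Python sketch below illustrates that step on random data; a real analysis would pull expression matrices from TCGA/GEPIA rather than simulate them, and the correlation strengths here are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_samples = 500                       # stand-in for tumor samples
alyref = rng.normal(size=n_samples)   # simulated ALYREF expression
candidates = {
    "BIRC5": 0.8 * alyref + rng.normal(scale=0.6, size=n_samples),
    "CD274 (PD-L1)": 0.4 * alyref + rng.normal(scale=0.9, size=n_samples),
}
for gene, expr in candidates.items():
    r, p = pearsonr(alyref, expr)     # PCC and its p-value
    print(f"{gene}: PCC = {r:.2f}, p = {p:.1e}")
```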

Keywords: ALYREF, pan-cancer, immunotherapy, PD-L1

Procedia PDF Downloads 71
87 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further compounding a perennial global concern. Forest fires often lead to devastating consequences, ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making across diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 72
86 Strategies for Drought Adaptation and Mitigation via Wastewater Management

Authors: Simrat Kaur, Fatema Diwan, Brad Reddersen

Abstract:

The unsustainable and injudicious use of natural renewable resources beyond the self-replenishment limits of our planet has proved catastrophic. Most of the Earth's resources, including land, water, minerals, and biodiversity, have been overexploited. Owing to this, there is a steep rise in global natural calamities of contrasting nature, such as torrential rains, storms, heat waves, rising sea levels, and megadroughts. These are all interconnected through common elements, namely oceanic currents and the land's green cover. Deforestation fueled by 'economic elites' and global players has already cleared massive forests and ecological biomes in every region of the globe, including the Amazon. These were natural carbon sinks that had been performing CO2 sequestration for millions of years. The forest biomes have been turned into mono-cultivation farms to produce feedstock crops such as soybean, maize, and sugarcane, which are among the biggest greenhouse gas emitters. Such unsustainable agricultural practices merely provide feedstock for livestock and food processing industries with huge carbon and water footprints, two factors that have a 'cause and effect' relationship in the context of climate change. In contrast to organic and sustainable farming, chemical-based mono-cultivation to produce food, fuel, and feedstock deprives the soil of its fertility, abstracts surface and ground waters beyond the limits of replenishment, emits greenhouse gases, and destroys biodiversity. There are numerous cases across the planet where, due to overuse, surface water reservoirs such as Lake Mead in the Southwestern USA and groundwater reserves such as those in Punjab, India, have shrunk dramatically. Unlike the rain-fed food production systems on which the poor communities of the world rely, blue-water-dependent (surface and ground water) mono-cropping for industrial and processed food creates a water deficit that puts the burden on domestic users. Excessive abstraction of both surface and ground waters for high-water-demand feedstock crops (soybean, maize, sugarcane), cereal crops (wheat, rice), and cash crops (cotton) has a dual and synergistic impact on global greenhouse gas emissions and the prevalence of megadroughts. Both these factors have elevated global temperatures, which has caused cascading events such as soil water deficits, flash fires, and the unprecedented burning of woods, creating megafires on multiple continents, namely North America, South America, Europe, and Australia. Therefore, it is imperative to reduce the green and blue water footprints of the agricultural and industrial sectors through the recycling of black and gray waters. This paper explores various opportunities for the successful implementation of wastewater management for drought preparedness in high-risk communities.

Keywords: wastewater, drought, biodiversity, water footprint, nutrient recovery, algae

Procedia PDF Downloads 102
85 A Path to Reducing Industrial Energy Intensity in a Chinese Province for Energy Conservation

Authors: John Doe

Abstract:

This paper presents the research project "Escape Through Culture", which is co-funded by the European Union and national resources through the Operational Programme "Competitiveness, Entrepreneurship and Innovation" 2014-2020 and the Single RTDI State Aid Action "RESEARCH - CREATE - INNOVATE". The project is implemented by three partners: (1) the Computer Technology Institute and Press "Diophantus" (CTI), experienced in the design and implementation of serious games, natural language processing, and ICT in education; (2) the Laboratory of Environmental Communication and Audiovisual Documentation (LECAD), part of the University of Thessaly, Department of Architecture, experienced in the study of the creative transformation and reframing of urban and environmental multimodal experiences through AR and VR technologies; and (3) "Apoplou", an IT company with experience in the implementation of interactive digital applications. The project proposes the design of an innovative infrastructure of digital educational escape games for mobile devices and computers, using Virtual Reality and Augmented Reality for the promotion of Greek cultural heritage in Greece and abroad. In particular, the project advocates combining Greek cultural heritage and literature, advances in digital technologies, and innovative gamification practices. The cultural experience of the players takes place in three layers: (1) In space: the digital games utilize the dual character of space as a cultural landscape (the real space-landscape, but also the space-landscape as presented through augmented and virtual reality technologies). (2) In literary texts: selected texts by Greek writers support the sense of place and the multi-sensory involvement of the user through the context of space-time, language, and cultural characteristics. (3) In the philosophy of the "escape game" tool: whether played in a computer environment, indoors or outdoors, the spatial experience is one of the key components of escape games. The innovation of the project lies both in the junction of Augmented/Virtual Reality with the promotion of cultural points of interest and in the interactive, gamified treatment of literary texts. The digital escape game infrastructure will be highly interactive, integrating the projection of Greek landscape cultural elements and digital literary text analysis, supporting the creation of escape games, and establishing and highlighting new playful ways of experiencing iconic cultural places, such as Elefsina and Skiathos. The literary texts' content will relate to specific elements of Greek cultural heritage depicted by prominent Greek writers and poets. The majority of the texts will originate from Greek educational content available in digital libraries and repositories developed and maintained by CTI. The escape games produced will be available for use during educational field trips, thematic tourism holidays, and similar activities. In this paper, the methodology adopted for the infrastructure development is presented. The research is based on theories of place, gamification, and game development, making use of corpus linguistics concepts and digital humanities practices for the compilation and analysis of the literary texts.

Keywords: escape games, cultural landscapes, gamification, digital humanities, literature

Procedia PDF Downloads 248
84 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of otherwise indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood, for glucose levels.
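As a sketch of the evaluation protocol described above, repeated random train/test splits scored by R², the following Python example compares a subset of the listed regressors on synthetic spectra. The data, hyperparameters, and 70/30 split are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 100))                  # 200 synthetic spectra, 100 wavelengths
beta = rng.normal(size=100)
y = X @ beta + rng.normal(scale=0.5, size=200)   # surrogate glucose values

models = {
    "SVMR": SVR(kernel="rbf", C=10.0),
    "PLSR": PLSRegression(n_components=10),
    "ETR": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "RFR": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = []
    for seed in range(10):                       # ten random splits, as in the paper
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=seed)
        model.fit(X_tr, y_tr)
        scores.append(r2_score(y_te, model.predict(X_te).ravel()))
    print(f"{name}: mean R² = {np.mean(scores):.3f}")
```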

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 95
83 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis

Authors: Kimberly Samaha

Abstract:

In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into circular, zero-waste Bio-Hub Ecosystems: a Bio-Hub model first targets a 'whole-tree' approach and then considers the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. The study modeled the economics and risk strategies of cradle-to-cradle linkages, incorporating value-chain effects on capital and operational expenditures and investment risk reductions, using a proprietary techno-economic model that evaluates investment risk scenarios with the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and is built up in stages to include a greenhouse and a land-based shrimp farm as co-hosts; it incorporates the CO2 and waste heat streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. Phase II incorporates a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil-amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location de-risks investment in an integrated Bio-Hub versus individual investments in stand-alone projects in energy, agriculture, or aquaculture. The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub, to be built in West Enfield, Maine. In 2018, the site was designated an economic opportunity zone as part of a federal program, which allows capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable, and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical tool for expediting new standards for investments in circular, zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current 'take-make-dispose' paradigm inherent in the energy, forestry, and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
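A minimal Monte Carlo sketch of the co-location argument is given below. It is our construction, not the proprietary model: revenues and operating costs are sampled from assumed normal distributions, and the 20% reduction in co-host opex (and its variance) attributed to shared heat/CO2 streams is a hypothetical figure chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000  # number of Monte Carlo draws

def profit(rev_mu, rev_sd, opex_mu, opex_sd):
    """Simulated annual profit for one project, in hypothetical $M/yr."""
    return rng.normal(rev_mu, rev_sd, n) - rng.normal(opex_mu, opex_sd, n)

# stand-alone projects: energy plant, greenhouse, shrimp farm
standalone = (profit(20, 4, 15, 3)
              + profit(6, 2, 5, 1.5)
              + profit(8, 3, 7, 2))
# co-located Bio-Hub: waste heat/CO2 assumed to cut co-host opex and variance ~20%
bio_hub = (profit(20, 4, 15, 3)
           + profit(6, 2, 4.0, 1.2)
           + profit(8, 3, 5.6, 1.6))

for label, p in [("stand-alone", standalone), ("bio-hub", bio_hub)]:
    print(f"{label}: mean profit {p.mean():.1f} $M/yr, "
          f"P(loss) = {(p < 0).mean():.1%}")
```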

Keywords: bio-economy, investment risk, circular design, economic modelling

Procedia PDF Downloads 101
82 Spatio-Temporal Dynamics of Woody Vegetation Assessment Using Oblique Landscape Photographs

Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Ground-level landscape photos can be used as a source of objective data on woody vegetation and vegetation dynamics. We propose a method for processing, analyzing, and presenting ground photographs built on the following steps: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the photographs are created, or existing ones are supplemented; 4) single or multiple photographs are used to develop specialized geoinformation layers, schematic maps, or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. Each photo is matched with a polygonal geoinformation layer: a sector consisting of areas corresponding to the parts of the landscape visible in the photo. Calculation of visibility areas is performed in a geoinformation system within a sector using a digital relief model of the study area and visibility analysis functions. Superposition of the visibility sectors corresponding to various camera viewpoints allows landscape photos to be matched with each other to create a complete and coherent representation of the space in question. User-defined objects or phenomena on the images can then be superimposed over the visibility sector in the form of map symbols. The spatial superposition of geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language. The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal visibility-sector layers for a large number of camera viewpoints are topologically superimposed, a visibility layer for the entire study area is formed, showing which sections appear in the photographs; areas that appear in no photo are assessed as gaps. As a result of this procedure, it becomes possible to determine which photos display a specific area and from which camera viewpoints it is visible. This information may be obtained either as a query on the map or as a query against the attribute table of the layer. The method was tested using repeated photos taken from forty camera viewpoints located on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 until 2023. It has been successfully used in combination with other ground-based and remote sensing methods for studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education project No. FEUG-2023-0002 (image representation) and Russian Science Foundation project No. 24-24-00235 (automated textual description).
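A grid-based reading of the sector-superposition step might look like the sketch below: each viewpoint contributes a boolean visibility raster (here a random circular mask stands in for a true viewshed computed from the relief model), stacking the rasters counts photo coverage per cell, and zero-count cells are flagged as gaps. The grid size and masks are assumptions for illustration only.

```python
import numpy as np

grid = (200, 200)        # raster of the study area
viewpoints = 40          # forty camera viewpoints, as in the study
rng = np.random.default_rng(3)

visibility = np.zeros(grid, dtype=int)
sectors = []
for _ in range(viewpoints):
    # placeholder for a real viewshed: a random circular sector mask
    cy, cx = rng.integers(0, grid[0]), rng.integers(0, grid[1])
    yy, xx = np.ogrid[:grid[0], :grid[1]]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 < rng.integers(400, 2500)
    sectors.append(mask)
    visibility += mask   # count how many photos show each cell

gaps = visibility == 0   # cells seen in no photograph
print("fraction of area covered by >= 1 photo:", float((~gaps).mean()))
print("photos showing cell (100, 100):",
      [i for i, m in enumerate(sectors) if m[100, 100]])
```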

Keywords: woody vegetation, repeated photographs

Procedia PDF Downloads 93
81 To Smile or Not to Smile: How Engendered Facial Cues Affect Hiring Decisions

Authors: Sabrina S. W. Chan, Emily Schwartzman, Nicholas O. Rule

Abstract:

Past literature has shown mixed findings on how smiling affects a person's chance of getting hired. On one hand, smiling suggests enthusiasm and cooperativeness, which can elicit positive impressions. On the other hand, smiling can suggest weaker professionalism or serve as a filler to hide nervousness, which can lower a candidate's perceived competence. Emotion expressions can also be perceived differently depending on the person's gender and can activate certain gender stereotypes. Women especially face a double bind with respect to hiring decisions and smiling. Because women are socially expected to smile more, those who do not smile will be considered stereotype-incongruent. This becomes a noisy signal to employers and may lower their chance of being hired. However, women's smiling as a formality may also be an obstacle: they are more likely to put on fake smiles, and if they do, they are also likely to be perceived as inauthentic and over-expressive. This paper investigates how smiling affects hiring decisions and whether this relationship is moderated by gender. In Study 1, participants were shown a series of smiling and emotionally neutral face images incorporated into fabricated LinkedIn profiles and were asked to rate how hireable they thought each candidate was. Results showed that participants rated smiling candidates as more hireable than nonsmiling candidates, with no difference by candidate gender. Moreover, individuals who did not study business were more biased in their perceptions than those who did. Since the results showed a trend toward favoring female targets, raising the suspicion of a desirability bias, a second study was conducted to collect implicit measures behind the decision-making process. In Study 2, a mouse-tracking design was adopted to explore whether participants' implicit attitudes differed from their explicit responses on hiring. Participants were asked to respond whether they would offer an interview to a candidate. Findings from Study 1 were replicated, in that smiling candidates received more offers than neutral-faced candidates. Results also showed that female candidates received significantly more offers than male candidates, though this was associated with higher attractiveness ratings. There were no significant findings in reaction time or changes of decision. However, stronger hesitation was detected for responses made towards neutral targets when participants perceived the given position as masculine, implying a conscious attempt to make situational judgments (e.g., considering the candidate's personality and job fit) to override automatic processing (evaluations based on attractiveness). Future studies will look at how these findings differ for positions that are stereotypically masculine (e.g., surgeons) or stereotypically feminine (e.g., kindergarten teachers). The current findings have strong implications for developing bias-free hiring policies in the workplace, especially for organizations that maintain online/hybrid working arrangements in the post-pandemic era. This work also bridges the literature gap between face perception and gender discrimination, highlighting how engendered facial cues can affect individuals' career development and organizations' success in diversity and inclusion.

Keywords: engendered facial cues, face perception, gender stereotypes, hiring decisions, smiling, workplace discrimination

Procedia PDF Downloads 135
80 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System

Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski

Abstract:

Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which forms the second pole and closes the circuit. The applied voltage ranges from 10 to 100 kV. The electrodes are supplied from the normal single-phase power grid (230 V, 50 Hz); this is converted to 24 V direct current in modules serving the individual electrodes, which feed them directly. The installation is completely safe because the generated current does not exceed 250 mA and the conductors are grounded, so there is no risk of electric shock to personnel, even in the case of failure or incorrect connection. The low current also means that the energy consumption per electrode is extremely low, only 35 W, compared to other methods of disintegration. The electrode pipes, of diameter DN150, are made of acid-proof steel and connected at both ends with 90° elbows terminated with flanges. The available S- and U-type pipes enable very convenient fitting into existing installations and rooms and facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, at an angle, on special stands, or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, type of biomass, dry matter content, method of disintegration (single-pass or circulatory), mounting site, etc. The most effective arrangement involves pre-treatment of substrate, which may be pumped through the disintegration system on the way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). Destruction of the biomass structure by electrokinetic disintegration shortens substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary effect, highly significant for the energy balance, is a tangible decrease in the energy input of tank agitators. It is due to the reduced viscosity of the biomass after disintegration and may result in energy savings reaching 20-30% of the previously noted consumption. Other observed phenomena include a reduction in the layer of surface scum, a reduced tendency of the sewage to foam, and a successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solution within the range of specialist equipment for the processing of plant biomass, including Virginia fanpetals, before methane fermentation.
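For orientation, the power figures quoted above can be combined into a simple energy balance. The sketch below uses only numbers from the abstract (35 W per electrode; 20-30% agitator savings); the electrode count and agitator rating are hypothetical inputs chosen for illustration.

```python
n_electrodes = 12         # assumed installation size (hypothetical)
p_electrode_w = 35.0      # per-electrode draw quoted in the abstract, W
p_agitator_kw = 15.0      # hypothetical agitator rating, kW

disintegration_kw = n_electrodes * p_electrode_w / 1000.0
savings_kw = (0.20 * p_agitator_kw, 0.30 * p_agitator_kw)
print(f"disintegration load: {disintegration_kw:.2f} kW")
print(f"agitator savings: {savings_kw[0]:.1f}-{savings_kw[1]:.1f} kW")
# under these assumptions, even the low-end mixing savings exceed the electrode load
```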

Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals

Procedia PDF Downloads 377
79 On the Bias and Predictability of Asylum Cases

Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats

Abstract:

An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual's legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or a lack of trust in the authorities among applicants. The shaky ground on which authorities must base their subjective perceptions of asylum applicants' credibility raises the question of whether adjudicators always make the correct decision. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping among adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as the weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases that were initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in recognition rates with regard to a number of applicant features: country of origin/nationality, identified gender, identified religion, ethnicity, whether torture was mentioned in the case and, if so, whether the claim was supported or not, and the year the applicant entered Denmark. To extract these features, as well as the final decision of the RAB, from the text summaries, we applied natural language processing and regular expressions adjusted for the Danish language. We observed interesting variations in recognition rates related to the applicants' country of origin, ethnicity, year of entry, and the support or not of torture claims, wherever such claims were made. The presence or absence of significant variations in recognition rates does not by itself imply the presence or absence of bias in the decision-making process; none of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker's fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that when using the applicant's country of origin, religion, ethnicity, and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively, suggesting that these features have potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is currently underway.
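The extraction-and-classification pipeline described above can be sketched as follows. The regex, field names, and toy records are our assumptions (the Danish phrase "indrejste i" is used only to illustrate year-of-entry extraction); the real study works on roughly 8,000 RAB summaries with a richer feature set.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# toy stand-ins for the RAB case summaries and extracted fields
cases = pd.DataFrame({
    "summary": ["ansøgeren indrejste i 2015 ...", "indrejste i 2019 ..."] * 50,
    "origin": ["A", "B"] * 50,
    "ethnicity": ["x", "y"] * 50,
    "religion": ["r1", "r2"] * 50,
    "granted": [0, 1] * 50,
})
# example regex feature: year of entry parsed from the Danish summary text
cases["entry_year"] = (cases["summary"]
                       .str.extract(r"indrejste i (\d{4})", expand=False)
                       .astype(float))

X = pd.get_dummies(cases[["origin", "ethnicity", "religion"]]).assign(
    entry_year=cases["entry_year"])
y = cases["granted"]

for clf in (RandomForestClassifier(n_estimators=300, random_state=0),
            DecisionTreeClassifier(max_depth=5, random_state=0)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"mean CV accuracy ~ {acc:.2f}")
```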

Keywords: asylum adjudications, automated decision-making, machine learning, text mining

Procedia PDF Downloads 96
78 Wheat Cluster Farming Approach: Challenges and Prospects for Smallholder Farmers in Ethiopia

Authors: Hanna Mamo Ergando

Abstract:

Climate change is already having a severe influence on agriculture, affecting crop yields, the nutritional content of main grains, and livestock productivity. Significant adaptation investments will be necessary to sustain existing yields and to enhance production and food quality to fulfill demand. Climate-smart agriculture (CSA) provides numerous potentials in this regard, combining a focus on enhancing agricultural output and incomes with strengthening resilience and responding to climate change. To improve agricultural production and productivity, the Ethiopian government has adopted and implemented a series of strategies, including the recent agricultural cluster farming practiced as an effort to change, improve, and transform subsistence farming into a modern, productive, market-oriented, and climate-smart approach through farmers' production clusters. In addition, greater attention and focus have been given to wheat production and productivity by the government, and wheat is the major crop grown in cluster farming. The objective of this assessment was therefore to examine the various opportunities and challenges farmers face in a cluster farming system. A qualitative research approach was used to generate primary and secondary data, with respondents chosen using purposive sampling. Accordingly, experts from the Federal Ministry of Agriculture, the Ethiopian Agricultural Transformation Institute, the Ethiopian Agricultural Research Institute, and the Ethiopian Environment Protection Authority were interviewed. The assessment revealed that farming in clusters is an economically viable technique for sustaining the agricultural businesses of small, resource-limited, and socially disadvantaged farmers. The method assists farmers in consolidating their products and delivering them in bulk, saving on transportation costs while increasing income. Smallholders' negotiating power has improved as a result of cluster membership, as have knowledge and information spillovers. The key challenges, on the other hand, were identified as a lack of timely provision of modern inputs, insufficient access to credit services, conflicts of interest in crop selection, and a lack of an output market for agro-processing firms. Furthermore, farmers in the cluster farming approach grow wheat year after year without crop rotation or diversification techniques. Mono-cropping has disadvantages because it raises the likelihood of disease and insect outbreaks, and the practice may result in long-term consequences, including soil degradation, reduced biodiversity, and economic risk for farmers. The government must therefore devote more resources to addressing the issue of environmental sustainability, and farmers' access to complementary services that promote production and marketing efficiencies through infrastructure and institutional services has to be improved. In general, the assessment provides initial indications that call for deeper study of the efficiency of the strategy's implementation, upholding existing policy, and scaling up good practices in a sustainable and environmentally viable manner.

Keywords: cluster farming, smallholder farmers, wheat, challenges, opportunities

Procedia PDF Downloads 227
77 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, passes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration, to find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2); modeling the error behavior with explicative rules helps improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for both the trained models and the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but it can be done offline just once. The calibration step is much faster, obtaining in under one minute a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. Fast processing times and high precision have been achieved, and these can be further improved by using heuristics to guide the Gaussian calibration. The error behavior has been modeled to help improve the overall process understanding. This is relevant for the quick, optimal setup of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
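The calibration procedure described above, generating Gaussian random candidates for each input and keeping the candidate whose predicted output is closest to the target, can be sketched as follows. The kernel ridge model, synthetic data, and three-input process are illustrative stand-ins, not the authors' actual process model.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
# synthetic stand-in for the process: three inputs, one output (e.g., tension)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + X[:, 2] + rng.normal(scale=0.1, size=500)
model = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, y)

def calibrate(target: float, n_candidates: int = 20_000) -> np.ndarray:
    """Return the input vector whose predicted output best matches the target."""
    mu, sd = X.mean(axis=0), X.std(axis=0)       # per-input mean and std
    candidates = rng.normal(mu, sd, size=(n_candidates, 3))
    preds = model.predict(candidates)
    return candidates[np.argmin(np.abs(preds - target))]

best = calibrate(target=1.0)
print("inputs:", best.round(3), "-> predicted output:", model.predict([best])[0])
```

A simple heuristic refinement, as the abstract suggests, would be to resample around the best candidate with a shrinking standard deviation.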

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 242
76 Investigation of Pu-238 Heat Source Modifications to Increase Power Output through (α,n) Reaction-Induced Fission

Authors: Alex B. Cusick

Abstract:

The objective of this study is to improve upon current ²³⁸PuO₂ fuel technology for space and defense applications. Modern RTGs (radioisotope thermoelectric generators) utilize the heat generated from the radioactive decay of ²³⁸Pu to create heat and electricity for long-term and remote missions. Application of RTG technology is limited by the scarcity and expense of producing the isotope, as well as by the power output, which is limited to only a few hundred watts. The scarcity and expense make the efficient use of ²³⁸Pu absolutely necessary. By utilizing the decay of ²³⁸Pu not only to produce heat directly but also to indirectly induce fission in ²³⁹Pu (which is already present within currently used fuel), it is possible to see large increases in temperature, which allows for a more efficient conversion to electricity and a higher power-to-weight ratio. This concept can reduce the quantity of ²³⁸Pu necessary for these missions, potentially saving millions in investment, while yielding higher power output. Current work investigating radioisotope power systems has focused on improving the efficiency of the thermoelectric components and replacing systems which produce heat by virtue of natural decay with fission reactors. The technical feasibility of utilizing (α,n) reactions to induce fission within current radioisotopic fuels has not been investigated in any appreciable detail, and our study aims to thoroughly investigate the performance of many such designs, develop those with the highest capabilities, and facilitate experimental testing of these designs. In order to determine the specific design parameters that maximize power output and the efficient use of ²³⁸Pu for future RTG units, MCNP6 simulations have been used to characterize the effects of modifying fuel composition, geometry, and porosity, as well as of introducing neutron moderating, reflecting, and shielding materials to the system. Although this project is currently in the preliminary stages, the final deliverables will include sophisticated designs and simulation models that define all characteristics of multiple novel RTG fuels, detailed enough to allow immediate fabrication and testing. Preliminary work has consisted of developing a benchmark model to accurately represent the ²³⁸PuO₂ pellets currently in use by NASA; this model utilizes the alpha transport capabilities of MCNP6 and agrees well with experimental data. In addition, several models have been developed by varying specific parameters to investigate their effect on (α,n) and (n,fission) reaction rates. Current practice in fuel processing is to exchange out the small portion of naturally occurring ¹⁸O and ¹⁷O to limit (α,n) reactions and avoid unnecessary neutron production. However, we have shown that enriching the oxide in ¹⁸O introduces a sufficient (α,n) reaction rate to support significant fission rates. For example, subcritical fission rates above 10⁸ f/cm³-s are easily achievable in cylindrical ²³⁸PuO₂ fuel pellets with a ¹⁸O enrichment of 100%, given an increase in size and a ⁹Be clad. Many viable designs exist, and our intent is to discuss current results and future endeavors on this project.
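As a quick order-of-magnitude check (our arithmetic, not a figure from the paper), the quoted fission rate can be converted into a thermal power density assuming roughly 200 MeV of recoverable energy per fission:

```python
FISSION_RATE = 1.0e8        # fissions / (cm^3 s), quoted in the abstract
MEV_PER_FISSION = 200.0     # typical recoverable energy per fission
J_PER_MEV = 1.602e-13       # joules per MeV

power_density = FISSION_RATE * MEV_PER_FISSION * J_PER_MEV   # W/cm^3
print(f"~{power_density * 1e3:.1f} mW/cm^3 of added fission heat")
# ~3.2 mW/cm^3 at this rate; the contribution scales linearly with the
# achievable fission rate, which the design modifications aim to raise
```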

Keywords: radioisotope thermoelectric generators (RTG), Pu-238, subcritical reactors, (alpha, n) reactions

Procedia PDF Downloads 173