Search results for: analytical validation
415 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close to the Upper Limit of Its Growth
Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova
Abstract:
Woody vegetation at the upper limit of its habitat is a sensitive indicator of the reaction of biota to regional climate changes. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for the development of new modeling approaches based upon data selected from ground-level measurements and ultra-high-resolution aerial photography. Statistical models were developed for the study area located in the Polar Urals. These models provide probabilistic estimates for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40, and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed a distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs taken by a UAV from an altitude of fifty meters. The total number of larches was 88,608, distributed across the abovementioned intervals as 16,980, 51,740, and 19,889 trees. The results demonstrate that two processes can be observed over recent decades: first, the intensive forestation of previously barren or lightly wooded fragments of the study area located within the patches of wood, woodlands, and sparse stands, and second, expansion into mountain tundra. The current expansion of the Siberian larch in the region has replaced the depopulation process that occurred in the course of the Little Ice Age from the late 13th to the end of the 20th century. Using data from field measurements of Siberian larch specimen biometric parameters (including height, diameter at root collar and at 1.3 meters, and maximum projection of the crown in two orthogonal directions) and data on tree ages obtained at nine circular test sites, the authors developed an artificial neural network model with two layers of three and two neurons, respectively. The model allows quantitative assessment of a specimen's age based on its height and maximum crown projection. Tree height and crown diameters can be quantitatively assessed using data from aerial photographs and lidar scans, so the resulting model can be used to assess the age of all Siberian larch trees in the study area. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper treeline in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks.
Keywords: treeline, dynamics, climate, modeling
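Where the abstract describes this age model only in words, the following minimal sketch shows a network of the stated shape, two hidden layers of three and two neurons, mapping height and maximum crown projection to an estimated age. The synthetic training data, default activations, and scikit-learn implementation are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of a (3, 2) hidden-layer regressor: height + max crown
# projection -> age. All data below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
height = rng.uniform(0.2, 8.0, 300)                       # tree height, m
crown = 0.4 * height + rng.normal(0, 0.2, 300)            # max crown projection, m
age = 6.0 * height + 4.0 * crown + rng.normal(0, 3, 300)  # age, years (synthetic)

X = np.column_stack([height, crown])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(3, 2), max_iter=5000, random_state=0),
)
model.fit(X, age)
print(model.predict([[2.5, 1.1]]))  # estimated age for one 2.5 m tree
```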
Procedia PDF Downloads 83
414 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits than other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore the lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) whether the problem is possible to solve using AQO, 2) whether it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation, a D-Wave 2X device was used, as well as QxBranch's QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
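As a concrete illustration of the 1-hot encoding mentioned above, the sketch below turns a single integer variable into binary variables with a quadratic "exactly one on" penalty, which is the standard way such constraints enter a QUBO. The penalty weight, the toy objective, and the dimod sampler mentioned in the comment are illustrative assumptions, not the authors' formulation.

```python
# 1-hot encoding of an integer x in {0, 1, 2, 3}: four binary variables b_i
# plus the penalty P * (sum_i b_i - 1)^2, flattened into QUBO coefficients.
from itertools import combinations

def one_hot_qubo(values, penalty=10.0):
    """Return a QUBO dict {(i, j): coeff} whose minimum picks exactly one value."""
    qubo = {}
    for i, v in enumerate(values):
        # Since b_i^2 = b_i, the squared penalty contributes -P on the diagonal;
        # the value itself is added as an example linear cost term.
        qubo[(i, i)] = -penalty + float(v)
    for i, j in combinations(range(len(values)), 2):
        qubo[(i, j)] = 2.0 * penalty  # +2P per pair from the squared penalty
    return qubo

qubo = one_hot_qubo([0, 1, 2, 3])
print(qubo)  # could be passed to a sampler, e.g. dimod.ExactSolver().sample_qubo(qubo)
```

With the penalty larger than the spread of the objective values, the minimum-energy assignment switches exactly one bit on, recovering the encoded integer.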
Procedia PDF Downloads 526
413 BSL-2/BSL-3 Laboratory for Diagnosis of Pathogens on the Colombia-Ecuador Border Region: A Post-COVID Commitment to Public Health
Authors: Anderson Rocha-Buelvas, Jaqueline Mena Huertas, Edith Burbano Rosero, Arsenio Hidalgo Troya, Mauricio Casas Cruz
Abstract:
COVID-19 has been a disruptive pandemic for the public health and economic systems of entire countries, including Colombia. Nariño Department, in the southwest of the country, draws attention for being on the border with Ecuador, constantly facing a demographic transition that affects infections between the two countries. In Nariño, the early routine diagnosis of SARS-CoV-2, which can be handled at BSL-2, has affected the transmission dynamics of COVID-19. However, new emerging and re-emerging viruses with biological flexibility, classified as Risk Group 3 agents, can take advantage of epidemiological opportunities, generating the need to increase clinical diagnosis, mainly in border regions between countries. The overall objective of this project was to assure the quality of the analytical process in the diagnosis of high biological risk pathogens in Nariño by building a laboratory that includes biosafety level (BSL)-2 and (BSL)-3 containment zones. The delimitation of zones was carried out according to the Verification Tool of the National Health Institute of Colombia and following the standard requirements for the competence of testing and calibration laboratories of the International Organization for Standardization. This is achieved through the harmonization of methods and equipment for effective and durable diagnostics of the large-scale spread of highly pathogenic microorganisms, employing negative-pressure containment systems and UV systems, in accordance with a finely controlled electrical system, and PCR systems as new diagnostic tools, all of which increases laboratory capacity. Protection in BSL-3 zones will separate the handling of potentially infectious aerosols within the laboratory from the community and the environment. It will also allow the handling and inactivation of samples with suspected pathogens and the extraction of molecular material from them, enabling research on high-risk pathogens such as SARS-CoV-2, influenza, syncytial virus, and malaria, among others. The diagnosis of these pathogens will be articulated across the spectrum of basic, applied, and translational research and could receive about 60 daily samples. It is expected that this project will be articulated with the health policies of neighboring countries to increase research capacity.
Keywords: medical laboratory science, SARS-CoV-2, public health surveillance, Colombia
Procedia PDF Downloads 91
412 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach
Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh
Abstract:
Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the context of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60 years. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with a semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Discharge data from multiple catchments within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods; this ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI). This index allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on the low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling
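For reference, the calibration metric named above has a compact closed form. The sketch below is a standalone Python version of the standard (2009) KGE with made-up discharge values; the study itself computes it within its TUW modelling workflow.

```python
# Kling-Gupta Efficiency: KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2).
import numpy as np

def kge(sim: np.ndarray, obs: np.ndarray) -> float:
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([5.1, 4.8, 3.9, 6.2, 7.0, 5.5])  # observed discharge, m3/s (made up)
sim = np.array([4.9, 4.5, 4.2, 6.0, 6.6, 5.9])  # simulated discharge, m3/s (made up)
print(round(kge(sim, obs), 3))                  # 1.0 would be a perfect fit
```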
Procedia PDF Downloads 41
411 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic
Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova
Abstract:
Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 pathogens during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All the samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. Training used nested cross-validation with a five-fold outer loop and a repeated (five times) ten-fold inner loop, with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as a case group and the non-Covid ARVI dataset as a control group to train and test various machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the support vector machine with a radial basis function kernel. Conclusion: ML models trained on MALDI-MS spectra of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.
Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification
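The nested cross-validation scheme described in the Methods can be sketched compactly in code. The study itself used the caret R package; the Python analogue below, with placeholder spectra features and an SVM-R-style classifier, is illustrative only.

```python
# Five-fold outer / repeated (5x) ten-fold inner nested CV, stratified splits.
import numpy as np
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     StratifiedKFold, cross_val_score)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(179, 50))                   # placeholder spectral features
y = np.r_[np.ones(108, int), np.zeros(71, int)]  # Covid 2020 vs non-Covid ARVI

inner = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=1)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)

# The inner loop tunes the RBF-kernel SVM; the outer loop estimates accuracy.
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
    cv=inner,
)
scores = cross_val_score(model, X, y, cv=outer)
print(f"nested-CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```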
Procedia PDF Downloads 108
410 Development of an Instrument Assessing Participants’ Motivation on Assigning Monetary Value to Quality of Life
Authors: Afentoula Mavrodi, Andreas Georgiou, Georgios Tsiotras, Vassilis Aletras
Abstract:
Placing a monetary value on a quality-adjusted life-year (QALY) is of utmost importance in economic evaluation. Identifying the population’s preferences is critical in order to understand some of the reasons driving variations in the assigned monetary value. Yet, evidence of the motives behind the value the general public assigns to a QALY is limited. Developing an instrument that captures the population’s motives could prove valuable to policy-makers, guiding them in allocating different values to a QALY based on users’ motivations. The aim of this study was to identify the most relevant motives and develop an appropriate instrument to assess them. To design the instrument, we employed: a) the EQ-5D-3L tool to assess participants’ current health status, and b) the Willingness-to-Pay (WTP) approach, within the Contingent Valuation (CV) method framework, to elicit the monetary value. Advancing the open-ended approach previously adopted to assess solely protest bidders’ motives, a variety of follow-up item-specific statements were designed (deductive approach), aiming to evaluate the motives both of protest bidders and of participants willing to pay for the hypothetical treatment under consideration. The initial design of the survey instrument was the outcome of an extensive literature review. This instrument was revised based on 15 semi-structured interviews that took place in September 2018 and a pilot study held over two months (October-November 2018). Individuals with different educational, occupational, and economic backgrounds and adequate verbal skills were recruited to complete the semi-structured interviews. The follow-up motivation statements of both protest bidders and those willing to pay were revised and rephrased after the semi-structured interviews. In total, 4 statements for protest bidders and 3 statements for those willing to pay for the treatment were chosen to be included in the survey tool. Using the CATI (Computer Assisted Telephone Interview) method, a randomly selected sample of 97 persons living in Thessaloniki, Greece, completed the questionnaire on two occasions over a period of 4 weeks. Based on the pilot study results, a test-retest reliability assessment was performed using the intra-class correlation coefficient (ICC). All statements formulated for protest bidders showed acceptable reliability (ICC values of 0.84 (95% CI: 0.67, 0.92) and above). Similarly, all statements for those willing to pay for the treatment showed high reliability (ICC values of 0.86 (95% CI: 0.78, 0.91) and above). Overall, the instrument designed in this study was reliable with regard to the item-specific statements assessing participants’ motivation. Validation of the instrument will take place in a future study. For a holistic WTP-per-QALY instrument, participants’ motivation must be addressed broadly. The instrument developed in this study captured a variety of motives and provided insight into the method through which the latter are evaluated. Last but not least, it extended motive assessment to all study participants and not only protest bidders.
Keywords: contingent valuation method, instrument, motives, quality-adjusted life-year, willingness-to-pay
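The test-retest figures above come from an intra-class correlation. A minimal sketch of one common form, ICC(3,1) on a participants-by-occasions matrix, is shown below; the abstract does not state which ICC form was used, and the ratings here are invented, so treat this purely as an illustration.

```python
# ICC(3,1): two-way mixed effects, consistency, single measurement.
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    n, k = scores.shape                     # participants x occasions
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical Likert ratings from 97 participants on two occasions.
rng = np.random.default_rng(7)
occasion1 = rng.integers(1, 6, size=97).astype(float)
occasion2 = np.clip(occasion1 + rng.normal(0, 0.5, size=97), 1, 5)
print(round(icc_3_1(np.column_stack([occasion1, occasion2])), 2))
```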
Procedia PDF Downloads 136
409 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning for every person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, helping them determine, for example, the number of beds or doctors they require given the number of patients they expect. This project used models such as logistic regression, random forests, and neural networks for predicting diseases and analyzing medical images; a toy sketch follows below. Patients were grouped by algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. In the end, data mining in healthcare helps medical professionals and hospitals make better decisions and treat patients more effectively. It ultimately comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
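As a toy illustration of the predictive-modelling step named above, the sketch below trains a logistic regression on synthetic patient records; every feature, label, and number is a placeholder rather than project data.

```python
# Logistic regression on synthetic "patient records" for a binary disease flag.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))  # e.g. age, BMI, blood pressure, prior visits
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 2))
```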
Procedia PDF Downloads 76
408 Urban Compactness and Sustainability: Beijing Experience
Authors: Xilu Liu, Ameen Farooq
Abstract:
Beijing has several compact residential housing settings in many of its urban districts. The study in this paper reveals that urban compactness, as a predictor of density, may carry an altogether different meaning in the developing world than in the U.S. when it comes to achieving the objectives of urban sustainability. Recent urban design studies in the U.S. argue for compact, mixed-use, higher-density housing to achieve sustainable and energy-efficient living environments. While the concept of urban compactness is widely accepted as an approach in modern architectural and urban design fields, this belief may not carry over directly to all areas within cities of developing countries. Beijing's technology-driven economy, with its historic and rich cultural heritage and a highly speculative real-estate market, extends its urban boundaries into multiple compact urban settings of varying scales and densities. The accelerated pace of migration from the countryside for better opportunities has led to unsustainable and uncontrolled build-ups to meet the growing population demand within and outside of the urban center. This unwarranted compactness in certain urban zones has produced an unhealthy physical density with serious environmental and ecological challenges to basic living conditions. In addition, the crowding, traffic congestion, pollution, and limited housing surrounding this compactness are a threat to public health. Several residential blocks in close proximity to each other in Beijing were found to be quite compacted, or ill-planned, due to a lack of proper planning of residential sites. Most of them at first sight appear to be compact and dense, but further analytical study revealed that what appears to be dense is actually not dense enough to make a good case that could serve as the cornerstone of sustainability and energy efficiency. This study considered several factors, including floor area ratio (FAR), ground coverage (GSI), and open space ratio (OSR), as indicators in analyzing urban compactness as a predictor of density; the sketch below spells out these indicators. The findings suggest that these measures show the residential sites under study to be much lower in density than expected given their compact adjacencies. Further analysis revealed that several residential developments appear to support the notion of density in their compact layout but are in fact merely compacted, due to unregulated planning marred by a lack of proper urban design standards, policies, and guidelines specific to their urban context and condition.
Keywords: Beijing, density, sustainability, urban compactness
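For readers unfamiliar with the three indicators, here is a minimal sketch using common Spacematrix-style definitions; the abstract does not give its exact formulas, so these and the sample numbers are illustrative assumptions.

```python
# Density indicators for one residential site (areas in square metres).
def far(gross_floor_area: float, site_area: float) -> float:
    """Floor area ratio: total built floor area per unit of site area."""
    return gross_floor_area / site_area

def gsi(footprint_area: float, site_area: float) -> float:
    """Ground coverage (ground space index): footprint per unit of site area."""
    return footprint_area / site_area

def osr(gsi_value: float, far_value: float) -> float:
    """Open space ratio: unbuilt site area per unit of floor area."""
    return (1.0 - gsi_value) / far_value

# Hypothetical block: 12,000 m2 of floors on a 10,000 m2 site, 3,000 m2 footprint.
f, g = far(12_000, 10_000), gsi(3_000, 10_000)
print(f"FAR={f:.2f}, GSI={g:.2f}, OSR={osr(g, f):.2f}")
```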
Procedia PDF Downloads 424
407 Relationship of Macro-Concepts in Educational Technologies
Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez
Abstract:
This research reflects on and identifies explanatory variables, and the relationships among them, that are involved with educational technology, all encompassed in four macro-concepts: cognitive inequality, economy, food, and language. These give the guideline to gain more detailed knowledge of educational systems, communication and equipment, physical space, and teachers; all of these elements, interacting with each other, give rise to what is called educational technology management. They contribute to very specific knowledge of communications equipment, networks and computer equipment, systems, and content repositories. The intention is to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and to preserve their cultures, their languages or dialects, their hierarchies, and their real needs; in short, to respect the customs of the different towns, villages, or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which makes it possible to explain each of the variables that, in our opinion, must be taken into account in order to achieve an optimal incorporation of educational technology in a model that delivers results in the medium term. The idea is that, step by step, the concepts will be integrated into others with greater coverage until macro-concepts are reached that have national coverage and serve as elements of conciliation in the different federal and international reforms. At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces, and teachers, all globally immersed in the macro-concepts of cognitive inequality, economics, food, and language. One of the major contributions of this article is to express this idea as an algorithm that allows the indicator to be evaluated with as little bias as possible, since the other indicators are to be taken from internationally recognized entities, such as the OECD in the area of education systems, so that they are not influenced by particular political or interest-group pressures. This work opens the way for a relationship between the entities involved, conceptual, procedural, and human, to clearly identify the convergence of their impact on the problem of education and how that relationship can contribute to an improvement, and it also shows the possibility of reaching a comprehensive education reform for all.
Keywords: relationships of macro-concepts, cognitive inequality, economics, alimentation and language
Procedia PDF Downloads 199
406 Investigation on Behaviour of Reinforced Concrete Beam-Column Joints Retrofitted with CFRP
Authors: Ehsan Mohseni
Abstract:
The aim of this thesis is to provide numerical analyses of reinforced concrete beam-column joints with and without CFRP (Carbon Fiber Reinforced Polymer) in order to achieve a better understanding of the behaviour of strengthened beam-column joints. A comprehensive literature survey prior to this study revealed that published studies are limited to a handful only; the results are inconclusive, and some are even contradictory. Therefore, in order to improve on this situation, a numerical study was designed and performed as presented in this thesis. For the numerical study, the dimensions, end supports, and characteristics of the beam and column models were the same as those chosen in a previous experimental investigation in which ten beam-column joints were tested to failure. Finite element analysis is a useful tool in cases where analytical methods are not capable of solving the problem due to the complexities associated with it. The cyclic behaviour of FRP-strengthened reinforced concrete beam-column joints is such a case. The interaction of steel (longitudinal bars and stirrups), concrete, and FRP; the yielding of steel bars and stirrups; the cracking of concrete; the redistribution of stresses as some elements unload due to crushing or yielding; and the confinement of concrete due to the presence of FRP are some of the issues that introduce complexities into the problem. Numerical solutions, however, can provide further information about the behaviour in lieu of costly experiments or complex closed-form solutions. This thesis presents the results of a numerical study on beam-column joints subjected to cyclic loads and strengthened with CFRP wraps or strips in a variety of configurations. The analyses were performed with the Abaqus finite element program and calibrated against the experiments. A range of issues in beam-column joints, including the cracking load, the ultimate load, and the lateral load-displacement curves of the joints, is investigated. The numerical results for the different strengthening configurations are compared. Finally, the computed numerical results are compared with those obtained from experiments: the cracking load, ultimate load, and lateral load-displacement curves obtained from the numerical analysis were in very good agreement with the corresponding experimental ones for all joints. The results obtained from the numerical analysis in most cases imply that the method is conservative and can therefore be used in design applications with confidence.
Keywords: numerical analysis, strengthening, CFRP, reinforced concrete joints
Procedia PDF Downloads 349
405 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria
Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene
Abstract:
As the drive towards clean, renewable, and sustainable energy generation is gradually reshaped by growing renewable penetration, energy storage has become an optimal solution for utilities looking to reduce transmission and capacity costs. Capacity resources therefore need to be adjusted accordingly, so that renewable energy storage has the opportunity to substitute for retiring conventional energy systems with higher capacity factors. Considering the Nigerian scenario, where over 80% of current primary energy consumption is met by petroleum, electricity demand is set to more than double by mid-century relative to 2025 levels. With renewable energy penetration rapidly increasing, in particular biomass, hydropower, solar, and wind energy, renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing, and, more importantly, its sustainability are the subject matter of the current research. Here, we forecast the future energy storage capacity rating and thus evaluate the load and resource scenario in Nigeria. In doing so, we used the scenario-based International Energy Agency models, and the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over an 86% capacity rating, with storage comprising about 24% of peak load capacity; a sketch of one way to estimate such a rating follows this abstract. The general takeaway from the current study is therefore that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources prior to a substantial reduction in capacity ratings. The data presented in this paper are a crucial eye-opener for relevant government agencies towards developing these energy resources and tackling the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through the expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analytical work of the current survey will help determine whether and when long-duration storage becomes an integral component of the capacity mix expected in Nigeria by 2030.
Keywords: capacity, energy, power system, storage
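The paper does not publish its rating computation; one common approach, sketched below under stated assumptions, is to ask how much of a day's peak a battery of given power and duration can shave. The load curve and battery sizes here are invented for illustration.

```python
# Capacity rating via peak shaving: bisect the lowest peak that a battery of
# power P (MW) and duration h (so energy P*h MWh/day) can hold the load to.
import numpy as np

def capacity_rating(load: np.ndarray, power_mw: float, hours: float) -> float:
    """Fraction of nameplate power that effectively reduces the daily peak."""
    energy_mwh = power_mw * hours
    lo, hi = load.max() - power_mw, load.max()
    for _ in range(60):
        mid = (lo + hi) / 2
        excess = np.clip(load - mid, 0, None)  # MW above the threshold, hourly
        if excess.max() <= power_mw and excess.sum() <= energy_mwh:
            hi = mid                           # this peak level is achievable
        else:
            lo = mid
    return (load.max() - hi) / power_mw

t = np.arange(24)                                 # hourly load curve, MW
load = 800 + 250 * np.exp(-((t - 19) ** 2) / 32)  # broad evening peak
print(f"4 h battery: {capacity_rating(load, 100, 4):.0%}")
print(f"6 h battery: {capacity_rating(load, 100, 6):.0%}")
```

With this invented load shape, the 6 h battery earns a full rating while the 4 h battery is energy-limited to a partial one, echoing the duration effect discussed above.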
Procedia PDF Downloads 34
404 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions
Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri
Abstract:
The increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus are taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus is illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse, and it will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention is dedicated to the structural mark-up, part-of-speech annotation, and tagging of discursive traits, which are the innovative parts of the project, being the result of a thorough study to find the solution best suited to the analytical needs of the data. Several aspects are addressed, with special attention to the tagging of the speakers’ identity, the communicative events, and anthropophagic. Prominence is given to the annotation of question/answer exchanges, in order to investigate the interlocutors’ choices and how such choices impact communication; a sketch of such mark-up is given below. Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfil their respective communicative aims. A detailed description of the aforementioned elements is given using the InterDiplo-Covid19 pilot corpus. The data yielded by our preliminary analysis will highlight the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation, to be included via Oxygen.
Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics
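A minimal sketch of the kind of structural mark-up described above, built with Python's standard library, follows. The element and attribute names (exchange, question, answer, speaker roles) are invented placeholders, not the actual InterDiplo tag set.

```python
# Build and print a small annotated question/answer exchange as XML.
import xml.etree.ElementTree as ET

exchange = ET.Element("exchange", {"event": "press-briefing", "topic": "covid19"})
q = ET.SubElement(exchange, "question",
                  {"speaker": "journalist", "l1": "it", "type": "wh"})
q.text = "What measures is the organisation planning next?"
a = ET.SubElement(exchange, "answer",
                  {"speaker": "diplomat", "role": "ambassador"})
a.text = "We are coordinating with member states on a joint response."

ET.indent(exchange)  # pretty-print (Python 3.9+)
print(ET.tostring(exchange, encoding="unicode"))
```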
Procedia PDF Downloads 185
403 The Survey of Relationship between Health Literacy and Knowledge of Heart Failure with Rehospitalization in Patients with Heart Failure Admitted to Heart Failure Clinic
Authors: Jaleh Mohammad Aliha, Rezvan Razazi, Nasim Naderi
Abstract:
Introduction: Despite progress in new effective drugs for the treatment of heart failure, the disease is still accompanied by frequent hospitalization, impaired quality of life, early mortality, and a significant economic burden. Patients with chronic disease, and consequently patients with heart failure, need knowledge and optimal health literacy to improve quality of life and minimize the rate of rehospitalization. Considering the importance of knowledge and health literacy in these patients, as well as the contradictory literature, this study was conducted to investigate the relationship between health literacy and knowledge of heart failure with rehospitalization in patients with heart failure admitted to the heart failure clinic of the Rajai Heart Center in 1394 (Iranian calendar). Methods: A cross-sectional design with convenience sampling was used in this study. After obtaining the necessary permissions from the ethics committee and the Shahid Rajai Heart Center, 238 patients were recruited who were older than 18 years, had an ejection fraction of 35% or less, were able to read and write, had no psychiatric, neurological, or cognitive disorders, and signed the informed consent. Data collection was performed through a demographic data questionnaire, the short standard health literacy questionnaire 'Short-TOFHLA-16', and the Vanderwall (2005) knowledge of heart failure questionnaire. Reliability was assessed by the internal consistency method, and Cronbach's alpha for both questionnaires was more than 0.7. Data were then analysed in SPSS-20 with descriptive statistics and analytical statistics such as the t-test, chi-square, and ANOVA. Results: The majority of patients were male (66%), married (80%), and between 50 and 70 years old (42%). The majority of the studied men and women had good health literacy, and about half of them had adequate knowledge about heart failure. Fisher's exact test showed a statistically significant association between health literacy and knowledge about heart failure (a toy example of such a test is sketched below); in other words, higher health literacy was associated with more knowledge about the condition. The findings also showed no statistically significant association between health literacy or knowledge about heart failure and the frequency of CCU and emergency admissions. Conclusion: The study results showed that higher health literacy was associated with greater knowledge about heart failure and with patients' understanding of care recommendations and disease outcomes. Therefore, knowledge about heart failure and the factors related to the severity of the disease is important for problem identification, treatment, and the reduction of rehospitalization.
Keywords: health literacy, heart failure, knowledge, rehospitalization
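As a toy example of the Fisher's exact test referenced above, the sketch below runs it on a hypothetical 2x2 table of health literacy versus heart-failure knowledge; the counts are invented, not the study's data.

```python
# Fisher's exact test on an invented health-literacy x knowledge table.
from scipy.stats import fisher_exact

#        adequate knowledge | inadequate knowledge
table = [[52, 18],   # good health literacy
         [14, 26]]   # poor health literacy

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```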
Procedia PDF Downloads 401
402 Fermented Fruit and Vegetable Discard as a Source of Feeding Ingredients and Functional Additives
Authors: Jone Ibarruri, Mikel Manso, Marta Cebrián
Abstract:
A large amount of food is lost or discarded around the world every year. In addition, in recent decades an increasing demand for new, alternative, and sustainable sources of protein and other valuable compounds has been observed in the food and feed sectors, and therefore the use of food by-products as nutrients for these purposes is very attractive from an environmental and economic point of view. However, the direct use of discarded fruit and vegetables, which in general present a low protein content, is of little interest as a feed ingredient except when used as a source of fiber for ruminants. Especially in the case of aquaculture, several alternatives to fish meal and other vegetable protein sources have been extensively explored due to the scarcity of fish stocks and the unsustainability of fishing for these purposes. Fish mortality is also of great concern in this sector, as it strongly reduces economic feasibility. So, the development of new functional and natural ingredients that could reduce the need for vaccination is also of great interest. In this work, several fermentation tests were carried out at lab scale using a selected mixture of fruit and vegetable discards from a wholesale market located in the Basque Country, to increase their protein content and also to produce bioactive extracts that could be used as additives in aquaculture. Fruit and vegetable mixtures (60/40, w/w) were centrifuged for humidity reduction and crushed to a 2-5 mm particle size. Samples were inoculated with a selected Rhizopus oryzae strain and fermented for 7 days under controlled conditions (humidity between 65 and 75%, 28 ºC) in Petri plates (120 mm), in triplicate. The results obtained indicated that the final fermented product presented a twofold protein content (from 13 to 28% d.w.). The fermented product was further processed to determine its possible functionality as a feed additive. Extraction tests were carried out to obtain an ethanolic extract (60:40 ethanol:water, v/v) and a remaining biomass that could also have applications in the food or feed sectors. The extract presented a polyphenol content of about 27 mg GAE/g d.w., with an antioxidant activity of 8.4 mg TEAC/g d.w. The remaining biomass is mainly composed of fiber (51%), protein (24%), and fat (10%). The extracts also presented antibacterial activity, according to the results obtained in agar diffusion and Minimum Inhibitory Concentration (MIC) tests against several food and fish pathogen strains. In vitro digestibility was also assessed, to obtain preliminary information about the expected effect of the extraction procedure on fermented product digestibility. First results indicated that the biomass remaining after extraction does not seem to improve digestibility in comparison to the initial fermented product. These preliminary results show that fermented fruit and vegetables can be a useful source of functional ingredients for aquaculture applications and a substitute for other protein sources in the feed sector. Further validation will also be carried out through in vivo tests with trout and bass.
Keywords: fungal solid state fermentation, protein increase, functional extracts, feed ingredients
Procedia PDF Downloads 64
401 Status Quo Bias: A Paradigm Shift in Policy Making
Authors: Divyansh Goel, Varun Jain
Abstract:
Classical economics works on the principle that people are rational and analytical in their decision making, and that their choices fall in line with the most suitable option according to the dominant strategy in a standard game-theory model. This model has failed on many occasions to predict the behavior and dealings of rational people, providing evidence of other underlying heuristics and cognitive biases at work. This paper probes these factors, which fall under the umbrella of behavioral economics, and through them explores the solution to a problem that many nations presently face. There has long been a wide disparity between the number of people holding favorable views on organ donation and the number of people actually signing up for it. This paper, in its entirety, is an attempt to shape public policy so as to increase the number of organ donations that take place and to close the gap between the people who believe in signing up for organ donation and the ones who actually do. The key assumption here is that in cases of cognitive dissonance, where people hold conflicting views, they have a tendency to go with the default choice. This tendency is a well-documented cognitive bias known as the status quo bias. The research in this project involves an analysis of mandated-choice models of organ donation with two case studies: the first of the opt-in system of Germany (where people have to explicitly sign up for organ donation) and the second of the opt-out system of Austria (where every citizen is an organ donor from birth and has to explicitly register a refusal). Additionally, a detailed analysis is presented of the experiment performed by Eric J. Johnson and Daniel G. Goldstein. Their research, as well as many other independent experiments, such as that by Tsvetelina Yordanova of the University of Sofia, yields similar results. The conclusion is that the general population has, by and large, no rigid stand on organ donation and is susceptible to the status quo bias, which in turn can determine whether a large majority of people consent to organ donation or not. Thus, in our paper, we throw light on how governments can use the status quo bias to drive positive social change by making policies in which everyone is by default marked as an organ donor, which will, in turn, save the lives of people who would otherwise die on organ transplantation waitlists and save the economy countless hours of productivity.
Keywords: behavioral economics, game theory, organ donation, status quo bias
Procedia PDF Downloads 300
400 Learning to Teach in Large Classrooms: Training Faculty Members from Milano Bicocca University, from Didactic Transposition to Communication Skills
Authors: E. Nigris, F. Passalacqua
Abstract:
In line with recent research in the field of faculty development, this paper presents a pilot training programme realized at the University of Milano-Bicocca to improve the teaching skills of faculty members. A total of 57 professors (both full and associate professors) were trained during the pilot programme across three editions of a workshop focused on promoting skills for teaching large classes. The study takes into account: 1) the theoretical framework of the programme, which combines the recent tradition of professional development with research on the in-service training of school teachers; 2) the structure and content of the training programme, organized as a 12-hour full-immersion workshop plus individual consultations; 3) the educational specificity of the training programme, which is based on the relation between 'general didactics' (active learning methodologies; didactic communication) and 'disciplinary didactics' (didactic transposition and reconstruction); 4) results on the impact of the training programme, related to both the workshop and the individual consultations. This study aims to provide insights mainly on two levels of the training programme's impact ('behaviour change' and 'transfer'), and for this reason learning outcomes are evaluated with different instruments: a questionnaire filled out by all 57 participants; 12 in-depth interviews; 3 focus groups; and transcriptions of workshop activities. Data analysis is based on a descriptive qualitative approach and is conducted through thematic analysis of the transcripts, using analytical categories derived principally from didactic transposition theory. The results show that the training programme effectively developed three major skills covering different stages of the 'didactic transposition' process: a) content selection, i.e., a more accurate selection and reduction of the 'scholarly knowledge', conforming to the first stage of the didactic transposition process; b) the consideration of students' prior knowledge and misconceptions within lesson design, in order to connect the 'scholarly knowledge' effectively to the 'knowledge to be taught' (second stage of the didactic transposition process); and c) ways of asking questions and managing discussion in large classrooms, in line with the transformation of the 'knowledge to be taught' into 'taught knowledge' (third stage of the didactic transposition process).
Keywords: didactic communication, didactic transposition, instructional development, teaching large classroom
Procedia PDF Downloads 138
399 Quantitative Comparisons of Different Approaches for Rotor Identification
Authors: Elizabeth M. Annoni, Elena G. Tolkacheva
Abstract:
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia and a known prognostic marker for stroke, heart failure, and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping system has been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point and that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches – Shannon entropy (SE), kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE) – to identify the pivot points of rotors. These proposed techniques utilize different characteristics of the cardiac signal (other than local activation) to uncover the intrinsic complexity of the electrical activity in rotors, which is not taken into account in current mapping methods. We validated these techniques using high-resolution optical mapping experiments in which direct visualization and identification of rotors in ex-vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used, showing 3-sec episodes of a single stationary rotor and of figure-8 reentry with one rotor stationary and one meandering. Movies were captured at a rate of 600 frames per second for 3 sec at 64x64-pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of the SE, Kt, MSF, and MSE techniques with respect to the following clinical limitations: different durations of recordings, different spatial resolutions, and the presence of meandering rotors. To compare the results quantitatively, the SE, Kt, MSF, and MSE techniques were compared to the “true” rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced. The time series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec. The spatial resolution of the original VT episodes was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF, and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, but Kt was the best at identifying the pivot point of the meandering rotor. Artifacts mildly affect the performance of the Kt, MSF, and MSE techniques but have a strong negative impact on the performance of SE. The results of our study motivate further validation of the SE, Kt, MSF, and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients, to see whether these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
Keywords: atrial fibrillation, optical mapping, signal processing, rotors
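Two of the per-pixel metrics named above are easy to sketch for an optical-mapping movie stored as a (frames, height, width) array; the bin count and the synthetic data below are assumptions for illustration, not the authors' exact settings.

```python
# Per-pixel Shannon entropy and kurtosis maps for an optical-mapping movie.
import numpy as np
from scipy.stats import kurtosis

def shannon_entropy_map(movie: np.ndarray, bins: int = 32) -> np.ndarray:
    """Shannon entropy (bits) of each pixel's intensity time series."""
    frames, h, w = movie.shape
    ent = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            counts, _ = np.histogram(movie[:, i, j], bins=bins)
            p = counts[counts > 0] / frames
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

rng = np.random.default_rng(0)
movie = rng.normal(size=(1800, 64, 64))  # 3 sec at 600 fps, 64x64 pixels
se_map = shannon_entropy_map(movie)
kt_map = kurtosis(movie, axis=0)         # kurtosis along the time axis
print(se_map.shape, kt_map.shape)        # candidate pivots: extrema of the maps
```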
Procedia PDF Downloads 324
398 Geotechnical Education in the USA: A Comparative Analysis of Academic Schooling vs. Industry Needs in the Area of Earth Retaining Structures
Authors: Anne Lemnitzer, Eric Tavarez
Abstract:
The academic rigor of the geotechnical engineering curriculum shows strong institutional and geographical variation. Geotechnical engineering deals with the most challenging civil engineering material, as opposed to structural engineering, environmental studies, transportation engineering, and water resources. Yet the technical expectations posed by the practicing professional community do not necessarily consider the challenges inherent in this disparity in academic rigor and in disciplinary differences. To recognize the skill shortages among current graduates, as well as to identify opportunities to better equip graduate students in specific fields of geotechnical engineering, a two-part survey was developed in collaboration with the Earth Retaining Structures (ERS) Committee of the American Society of Civil Engineers. Earth retaining structures are critical components of infrastructure systems and integral to many major engineering projects. Within the geotechnical curriculum, earth retaining structures are either taught as a separate course or as a major subject within a foundation design class. Part 1 of the survey investigated the breadth and depth of the curriculum with respect to ERS by requesting that faculty across the United States provide data on their curricular content, integration of practice-oriented course content, student preparation for professional licensing, and the level of technical competency expected of students upon graduation. Part 2 of the survey enables a comparison of the training provided versus the training needed. This second survey addressed practicing geotechnical engineers in all sectors of the profession (e.g., private engineering consulting, governmental agencies, contractors, suppliers/manufacturers) and collected data on the technical and non-technical skills expected of engineering graduates entering the professional workforce. The results identified skill shortages in soft skills, critical thinking, analytical and language skills, familiarity with design codes and standards, and communication with various stakeholders. The data will be used to develop educational tools to advance the proficiency and expertise of geotechnical engineering students to meet and exceed the expectations of the profession and to stimulate a lifelong interest in advancing the field of geotechnical engineering.
Keywords: geotechnical engineering, academic training, industry requirements, earth retaining structures
Procedia PDF Downloads 127
397 Q-Efficient Solutions of Vector Optimization via Algebraic Concepts
Authors: Elham Kiyani
Abstract:
In this paper, we first introduce the concept of Q-efficient solutions in a real linear space not necessarily endowed with a topology, where Q is some nonempty (not necessarily convex) set. We also use a scalarization technique based on the Gerstewitz function generated by a nonconvex set to characterize these Q-efficient solutions. The algebraic concepts of interior and closure are useful for studying optimization problems without topology; studying nonconvex vector optimization is valuable here, since the topological interior is equal to the algebraic interior for a convex cone. We therefore use the algebraic concepts of interior and closure to define Q-weak efficient solutions and Q-Henig proper efficient solutions of set-valued optimization problems, where Q is not a convex cone. Optimization problems with set-valued maps have a wide range of applications, so a useful analytical tool for set-valued maps is to be expected in optimization theory. These kinds of optimization problems are closely related to stochastic programming, control theory, and economic theory. The paper focuses on nonconvex problems; the results are obtained under generalized non-convexity assumptions on the data of the problem. In convex problems, the main mathematical tools are convex separation theorems, alternative theorems, and algebraic counterparts of some usual topological concepts, while in nonconvex problems we need a nonconvex separation function. Thus, we consider the Gerstewitz function generated by a general set in a real linear space and re-examine its properties in this more general setting. A useful approach for solving a vector problem is to reduce it to a scalar problem. In general, scalarization means the replacement of a vector optimization problem by a suitable scalar problem, which tends to be an optimization problem with a real-valued objective function. The Gerstewitz function is well known and widely used in optimization as the basis of scalarization. The essential properties of the Gerstewitz function, which are well known in the topological framework, are studied here using algebraic counterparts rather than the topological concepts of interior and closure; that is, the properties of the Gerstewitz function are studied when it takes values just in a real linear space, and we use it to characterize Q-efficient solutions of vector problems whose image space is not endowed with any particular topology. In summary, we deal with a constrained vector optimization problem in a real linear space without assuming any topology, define Q-weak efficient and Q-proper efficient solutions in the sense of Henig, and, by means of the Gerstewitz function, provide necessary and sufficient optimality conditions for set-valued vector optimization problems.
Keywords: algebraic interior, Gerstewitz function, vector closure, vector optimization
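For orientation, the classical topological form of the scalarizing function can be stated as follows; the paper's algebraic variant over a general set Q follows the same pattern, and this block is only a reference sketch.

```latex
% Classical Gerstewitz (Tammer) scalarizing function: a reference sketch only;
% the paper's algebraic variant over a general set Q may differ in detail.
\[
  \varphi_{Q,k}(y) \;=\; \inf\{\, t \in \mathbb{R} : y \in t\,k - Q \,\},
  \qquad y \in Y,
\]
% where Y is a real linear space, k \in Y a fixed direction, and Q \subseteq Y
% the (not necessarily convex) generating set. Under the free-disposal
% condition Q + [0,\infty)\,k \subseteq Q, the sublevel sets satisfy
\[
  \{\, y \in Y : \varphi_{Q,k}(y) \le t \,\} \;=\; t\,k - Q,
\]
% the separation-type property that makes the function a basis for scalarization.
```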
Procedia PDF Downloads 216396 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India
Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit
Abstract:
Landslides are a geomorphic process that plays an essential role in hill-slope and long-term landscape evolution. But the abrupt nature of the process and its associated catastrophic forces can have undesirable socio-economic impacts, such as substantial economic losses, fatalities, and ecosystem, geomorphologic, and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance assessment of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), produced with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM's Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution Linear Imaging Self-Scanning (LISS-IV) data. Thematic maps of the parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in a GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use/land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values are calculated as the ratio of the landslide density per factor class (Si/Ni) to the overall landslide density per parameter (S/N). Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a 'mean and neighbour' strategy for constructing the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, with the frequency ratio (FR) method used to formulate the if-then rules. Two types of membership structures were utilized: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). The LSIs for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were validated spatially and statistically. The validation results showed that, in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas, in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate. Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique
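As an illustration of the Info Val step described above, a minimal sketch in Python (the arrays and class counts are hypothetical; note that many formulations additionally take the natural logarithm of this density ratio):

```python
import numpy as np

def information_values(landslide_px_per_class, total_px_per_class):
    """Information value per factor class: the landslide density in each
    class (Si/Ni) divided by the overall landslide density (S/N)."""
    si = np.asarray(landslide_px_per_class, dtype=float)
    ni = np.asarray(total_px_per_class, dtype=float)
    s, n = si.sum(), ni.sum()
    return (si / ni) / (s / n)  # ln() of this ratio is also widely used

# Hypothetical slope-angle classes: landslide pixels vs. total pixels per class
iv = information_values([10, 60, 120, 19], [5000, 9000, 7000, 2000])
print(iv)  # values > 1 mark classes that favour landslide occurrence

# Reclassified parameter rasters weighted by these values are then
# summed in GIS to obtain the landslide susceptibility index (LSI).
```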
Procedia PDF Downloads 127395 Evaluating the Factors That Influence Caries Reduction During Pregnancy
Authors: Mimoza Canga, Irene Malagnino, Vergjini Mulo, Alketa Qafmolla, Vito Antonio Malagnino
Abstract:
Background: Dental caries is the most common dental disease, and pregnancy represents a special process of physical, hormonal, and metabolic changes in pregnant women, which is accompanied by an imbalance in the oral cavity. Objective: The objective of this study is to evaluate caries reduction in relation to dental visits, scaling of the teeth, fluoridated water, brushing of the teeth, and use of fluoride toothpaste before and during pregnancy. Materials and methods: This study was conducted from March 2018 to September 2021; the age range of the participants was 18-41 years. The sample under observation was composed of 84 pregnant women. The questionnaire included the demographic characteristics of the sample, such as age and the women's education level (primary, secondary, or higher education). Based on education level, our analysis found that 25.9% of pregnant women had completed primary education, 35.2% had secondary education, and 38.9% had higher education. The descriptive and analytical research is designed as a longitudinal study. Statistical analysis was performed using IBM SPSS Statistics 23.0. The significance level (α) was set at 0.05, and P-values and analysis of variance (ANOVA) were used to analyze the data. Results: In the present study, it was observed that there is a strong relationship between dental visits and scaling of the teeth, with P ˂ .0001. The number of teeth with caries before pregnancy and fluoridated water have a P-value = 0.002; comparing the same factor with the number of teeth with dental caries during pregnancy, the correlation is P-value = 0.0001. The number of teeth with caries before pregnancy and carbohydrate consumption have a strong relation, with P-value = 0.05. According to the present research, the number of teeth with dental caries before pregnancy in relation to brushing the teeth has a P-value ˂ 0.05. Furthermore, it was established that using fluoride toothpaste does not affect the number of teeth with caries before pregnancy, with a P-value = .314. Conclusion: According to the results of the present study performed in Albania, periodical dental visits, scaling of the teeth, fluoridated water, and brushing of the teeth influenced caries reduction before and during pregnancy. In comparison, the use of fluoride toothpaste did not have any effect on dental caries reduction over the same period. The recommendations are as follows: maintain oral hygiene, use fluoridated water, and brush the teeth regularly. Healthcare providers should inform pregnant women about the importance of oral health and implement measures to manage dental caries. Keywords: brushing of the teeth, dental visits, dental scaling, fluoridated water, pregnancy
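For illustration, a minimal sketch of the kind of significance test behind the reported P-values (the groups and counts below are hypothetical, not the study's data):

```python
from scipy import stats

# Hypothetical counts of carious teeth, grouped by frequency of dental visits
never      = [6, 7, 5, 8, 6]
occasional = [4, 5, 5, 3, 4]
regular    = [2, 3, 1, 2, 3]

# One-way ANOVA at significance level alpha = 0.05
f_stat, p_value = stats.f_oneway(never, occasional, regular)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 -> significant effect
```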
Procedia PDF Downloads 194394 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators
Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy
Abstract:
Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner and the radiant and convective sections. Natural gas is burned through staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed to the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrous oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emissions. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation process. Moreover, to optimize burner operating conditions with regard to NOₓ emissions, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD seems more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM. The RANS (Reynolds-Averaged Navier-Stokes) equations, combined with the Eddy Dissipation Concept to model the combustion and closed by the k-epsilon turbulence model, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means to assess the numerical modelling. Flame temperatures and chemical composition are used as reference fields to perform this validation. Results show a fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated with the physics of the flow. CFD is, therefore, a useful tool for providing insight into NOₓ emission phenomena in OTSGs. Sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up. Keywords: combustion, computational fluid dynamics, nitrous oxides emission, once-through-steam-generators
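For context, the thermal NOₓ that dominates in such natural-gas flames is commonly described by the extended Zeldovich mechanism (a standard pathway, not a result specific to this study):

$$\mathrm{N_2 + O \rightleftharpoons NO + N}, \qquad \mathrm{N + O_2 \rightleftharpoons NO + O}, \qquad \mathrm{N + OH \rightleftharpoons NO + H}.$$

Because the first reaction has a high activation energy, thermal NO formation is strongly temperature-dependent, which is why mapping emissions over burner operating conditions, as done here, is informative.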
Procedia PDF Downloads 113393 Investigation of Subsurface Structures within Bosso Local Government for Groundwater Exploration Using Magnetic and Resistivity Data
Authors: Adetona Abbassa, Aliyu Shakirat B.
Abstract:
The study area is part of Bosso Local Government, enclosed within longitude 6.25' to 6.31' and latitude 9.35' to 9.45', an area of 16 km x 8 km within the basement region of central Nigeria. The region hosts Nigerian Air Force Base 12 (NAF 12, quick response) and its staff quarters, the headquarters of Bosso Local Government, two offices of the Independent National Electoral Commission, four government secondary schools, six primary schools, and Minna International Airport. The area suffers an acute shortage of water from November, when the rains stop, to June, when the rains commence within North Central Nigeria. One way of addressing this problem is a reconnaissance method that delineates possible fractures and fault lines within the region by sampling the aeromagnetic data and using an appropriate analytical algorithm, followed by an appropriate ground-truthing method to confirm whether a fracture is connected to underground water movement. The first vertical derivative, used for structural analysis, reveals a set of lineaments labeled AA', BB', CC', DD', EE', and FF', all trending in the northeast-southwest direction. AA' lies just below latitude 9.45', above Maikunkele village, cutting off the upper part of the field; it runs through Kangwo, Nini, Lawo, and other communities. BB', at latitude 9.43', is truncated about 2 km before Maikunkele and Kuyi. CC', around latitude 9.40', sits below Maikunkele and runs down through Nanaum. DD' runs from latitude 9.38'; interestingly, no community lies within the region this fault passes through. Results from the three sites where vertical electrical sounding (VES) was carried out reveal three layers comprising topsoil, an intermediate clay formation, and weathered/fractured or fresh basement. A depth-to-basement map was also produced; the basement is relatively deeper at VES points A₂, B₅, D₂, and E₁, with depths ranging between 25 and 35 m, while the shallower parts of the area have depths between 10 and 20 m. Hence, VES points A₂, A₅, B₄, B₅, C₂, C₄, D₄, D₅, E₁, E₃, and F₄ are high-conductivity zones that are prolific for groundwater potential. The depth range of the aquifer potential zones is between 22.7 m and 50.4 m. The result from site C is quite unique: although the three layers were detected at the majority of the VES points, and the maximum depth to the basement at 90% of the VES points is below 8 km, only three VES points, C₆, E₂, and F₂, show considerable viability, with depths of 35.2 m and 38 m, respectively; however, the lack of connectivity will be a big challenge for chargeability. Keywords: lithology, aeromagnetic, aquifer, geoelectric, iso-resistivity, basement, vertical electrical sounding (VES)
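As a sketch of the first-vertical-derivative enhancement used above, computed in the wavenumber domain as is standard for potential-field data (the grid and spacing below are hypothetical):

```python
import numpy as np

def first_vertical_derivative(grid, dx, dy):
    """First vertical derivative of a gridded magnetic field, obtained by
    multiplying its 2-D spectrum by |k| = sqrt(kx^2 + ky^2)."""
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    kxx, kyy = np.meshgrid(kx, ky)       # both arrays have shape (ny, nx)
    k = np.sqrt(kxx**2 + kyy**2)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * k))

# Hypothetical total-magnetic-intensity grid sampled at 100 m spacing
tmi = np.random.default_rng(0).normal(size=(100, 200))
fvd = first_vertical_derivative(tmi, dx=100.0, dy=100.0)
# Linear maxima/minima in fvd sharpen shallow contacts and lineaments.
```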
Procedia PDF Downloads 139392 Exchanges between Literature and Cinema: Scripted Writing in the Novel "Miguel e os Demônios", by Lourenço Mutarelli
Authors: Marilia Correa Parecis De Oliveira
Abstract:
This research looks at the novel Miguel e os demônios (2009), by the contemporary Brazilian author Lourenço Mutarelli, in which the presence of film language resources is remarkable, creating a kind of scripted writing. We intend to analyze the presence of film language in the work under study, which mixes characteristics of the novel and the screenplay, exploring the aesthetic and semantic effects that the appropriation of a visual language for the creation of a literary text produces in the novel. The objective of this research is to identify and analyze the formal and thematic aspects that characterize the hybridity of literature and film in the novel by Lourenço Mutarelli. The method employed comprises the reading and cataloging of theoretical and critical texts on literary and film theory, a historical review of the author, and an analytical and interpretative reading of the novel. Miguel e os demônios displays a range of formal and thematic elements of popular narrative genres, such as the detective story and the action film, with a predominance of verb forms in the present tense and of noun phrases, features that tend to make the narrated scenes present, as in the cinema. The novel thus occupies an intermediate position between the literary text and the pre-filmic text: although filled with elements proper to the language of film, it cannot be categorically placed in the screenplay genre, since it does not reduce to a script and aspires to be read as a novel. Hence, the difficulty of fitting the work into a single genre is not resolved by extra-textual factors, such as its publication as a novel; rather, binary classifications serve only to imprison the work under a label, impoverishing not only the reading of the text but also the possibility of recognizing literature as a space of constant dialogue and interaction with other media. We can say, therefore, that framing Miguel e os demônios in one of the two genres (novel or screenplay) proves insufficient, since the text reveals itself as a hybrid narrative consisting of a kind of scripted writing. In this sense, it is a text born in a society saturated with audiovisual media in daily life, to be consumed by readers who, on an increasing scale, exchange books for visual narratives. However, the novel uses the resources of film without giving up its constitution as literature; on the contrary, it is enriched visually and linguistically, dialoguing with the complex contemporary horizon marked by the cultural industry. Keywords: Brazilian literature, cinema, Lourenço Mutarelli, screenplay
Procedia PDF Downloads 311391 The Role of Nickel on the High-Temperature Corrosion of Modell Alloys (Stainless Steels) before and after Breakaway Corrosion at 600°C: A Microstructural Investigation
Authors: Imran Hanif, Amanda Persdotter, Sedigheh Bigdeli, Jesper Liske, Torbjorn Jonsson
Abstract:
Renewable fuels such as biomass/waste for power production are an attractive alternative to fossil fuels for achieving CO₂-neutral power generation. However, the combustion results in the release of corrosive species. This puts high demands on the corrosion resistance of the alloys used in the boiler. Stainless steels containing nickel and/or nickel-containing coatings are regarded as suitable corrosion-resistant materials, especially in the superheater regions. However, the corrosive environment in the boiler, caused by the presence of water vapour and reactive alkali, very rapidly breaks down the primary protection, i.e., the Cr-rich oxide scale formed on stainless steels. The lifetime of the components therefore relies on the properties of the oxide scale formed after breakaway, i.e., the secondary protection. The aim of the current study is to investigate the role of varying nickel content (0-82%) on the high-temperature corrosion of model alloys with 18% Cr (Fe in balance) in the laboratory, mimicking industrial conditions at 600°C. The influence of nickel is investigated on both the primary protection and especially the secondary protection, i.e., the scale formed after breakaway, during the oxidation/corrosion process in dry O₂ (primary protection) and in more aggressive environments containing H₂O, K₂CO₃, and KCl (secondary protection). All investigated alloys experience a very rapid loss of the primary protection, i.e., the Cr-rich (Cr,Fe)₂O₃, and the formation of secondary protection in the aggressive environments. The microstructural investigation showed that the secondary protection of all alloys has a very similar microstructure in all the aggressive environments, consisting of an outward-growing iron oxide and an inward-growing spinel oxide, (Fe,Cr,Ni)₃O₄. The oxidation kinetics revealed that it is possible to influence the protectiveness of the scale formed after breakaway (secondary protection) through the amount of nickel in the alloy. The difference in the oxidation kinetics of the secondary protection is linked to the microstructure and chemical composition of the complex spinel oxide. The detailed microstructural investigations were carried out using extensive analytical techniques such as electron backscatter diffraction (EBSD) and energy-dispersive X-ray spectroscopy (EDS) in scanning and transmission electron microscopes, and the results are compared with thermodynamic calculations using the Thermo-Calc software. Keywords: breakaway corrosion, EBSD, high-temperature oxidation, SEM, TEM
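As an illustration of how such oxidation kinetics are typically quantified, a minimal sketch fitting the parabolic rate law (the mass-gain data below are hypothetical, not the study's measurements):

```python
import numpy as np

# Hypothetical isothermal exposure at 600 °C: time (h) and mass gain (mg/cm^2)
t  = np.array([1.0, 4.0, 9.0, 16.0, 25.0, 49.0])
dm = np.array([0.21, 0.40, 0.62, 0.81, 1.02, 1.41])

# Parabolic rate law (dm)^2 = kp * t; least-squares slope through the origin
kp = np.sum(t * dm**2) / np.sum(t**2)
print(f"kp = {kp:.3e} mg^2 cm^-4 h^-1")  # lower kp -> more protective scale
```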
Procedia PDF Downloads 142390 An Experimental Investigation of the Cognitive Noise Influence on the Bistable Visual Perception
Authors: Alexander E. Hramov, Vadim V. Grubov, Alexey A. Koronovskii, Maria K. Kurovskaya, Anastasija E. Runnova
Abstract:
The perception of visual signals in the brain was among the first issues discussed in terms of multistability, a concept introduced to provide mechanisms for information processing in biological neural systems. In this work, the influence of cognitive noise on the visual perception of multistable pictures has been investigated. The study includes an experiment with the bistable Necker cube illusion and the theoretical background explaining the obtained experimental results. In our experiments, Necker cubes with different wireframe contrasts were shown repeatedly to different people, and the probability of choosing one of the cube's projections was calculated for each picture. The Necker cube was placed in the middle of a computer screen as black lines on a white background. The contrast of the three middle lines centered on the left middle corner was used as one of the control parameters. Between two successive demonstrations of Necker cubes, another picture was shown to distract attention and to make the perception of the next Necker cube more independent of the previous one. Eleven subjects, male and female, aged 20 through 45, were studied. The choice of the Necker cube projection was detected with the electroencephalograph recorder Encephalan-EEGR-19/26, Medicom MTD. To treat the experimental results, we carried out a theoretical analysis using the simplest double-well potential model in the presence of noise, which leads to the Fokker-Planck equation for the probability density of the stochastic process. For the first time, an analytical solution for the probability of selecting one of the Necker cube projections has been obtained for different values of wireframe contrast. Furthermore, using the results of the experimental measurements and the method of least squares, we calculated the value of the parameter corresponding to the cognitive noise of the person being studied. The range of cognitive noise parameter values for the studied subjects turned out to be [0.08; 0.55]. It should be noted that the experimental results have good reproducibility: the same person, studied repeatedly on another day, produces very similar data with very close levels of cognitive noise. We found excellent agreement between the analytically deduced probability and the results obtained in the experiment. Such good qualitative agreement between theoretical and experimental results indicates that even this simple model allows simulating brain cognitive dynamics and estimating important cognitive characteristics of the brain, such as brain noise. Keywords: bistability, brain, noise, perception, stochastic processes
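A standard form of the model class invoked above is the overdamped double-well system with noise; the abstract does not give the authors' exact potential or parameters, so the following is illustrative:

$$\frac{\partial P(x,t)}{\partial t} \;=\; \frac{\partial}{\partial x}\!\left[U'(x)\,P(x,t)\right] \;+\; D\,\frac{\partial^2 P(x,t)}{\partial x^2}, \qquad U(x) = -\frac{a}{2}\,x^2 + \frac{b}{4}\,x^4,$$

where the two wells of $U$ correspond to the two Necker cube projections and $D$ plays the role of the cognitive noise intensity. The stationary solution $P_s(x) \propto \exp\!\left[-U(x)/D\right]$ then yields the probability of each interpretation, which can be fitted to the measured choice frequencies by least squares.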
Procedia PDF Downloads 445389 Patterns of Eosinophilia in Cardiac Patients and its Association with Endomyocardial Disease Presenting to Tertiary Care Hospital in Peshawar
Authors: Rashid Azeem
Abstract:
Introduction: Eosinophilia, which can be categorized as mild, moderate, or severe on the basis of increasing eosinophil counts, can be responsible for a wide range of cardiac manifestations, varying from simple myocarditis to a severe state like endomyocardial fibrosis. Eosinophils are involved in the pathogenesis of a variety of cardiovascular disorders, such as Loeffler endocarditis, eosinophilic granulomatosis with polyangiitis (EGPA), and hypereosinophilic syndrome (HES). Among them, HES carries an incidence rate between 48% and 75% and is the main cause of eosinophilia-related cardiac morbidity and mortality. Aims and objectives: The aim of this study is to determine the frequency of eosinophilia in cardiac patients and to ascertain the evidence of endomyocardial disease in eosinophilic patients in a cardiology institution. Materials and Methods: This cross-sectional analytical study was conducted in the Hematology Department of the Peshawar Institute of Cardiology after approval from the hospital ethics and research committee. All 70 patients were subjected to a detailed history and clinical examination. Investigations such as CBC, chest X-ray, ECG, echocardiography, and angiography findings were used to monitor patients' clinical status. Data were analyzed using SPSS version 25 and MS Excel. Results: Of the 70 patients in our study, a total of 66 (94%) showed evidence of cardiac manifestations. We observed a number of abnormal ECG patterns in cardiac patients presenting with eosinophilia, such as T-wave changes, loss of R wave, sinus bradycardia with LVH strain, and ST-segment abnormalities. Abnormal echocardiographic findings were observed in our patients, such as valvular abnormalities (in 45.7%), regional wall motion abnormalities (RWMA, in 2.8%), and isolated ventricular dysfunction (in 21.4%), while 10% of patients had normal echocardiography. We further noted abnormal coronary angiography findings in cardiac patients with eosinophilia, ranging from single-vessel to multi-vessel occlusions. Conclusions: Eosinophils are involved in the pathogenesis of a variety of cardiovascular disorders which can be detected by various diagnostic means, and the severity of the disease increases with time and with increasing eosinophil count, ranging from simple myocarditis to a fatal condition like endomyocardial fibrosis. Thus, an increased eosinophil count as a laboratory parameter in cardiac patients may be a sign of endomyocardial damage, which will further help cardiologists intervene more aggressively than the routine approach to a cardiac patient. Keywords: eosinophilia, endomyocardial fibrosis, cardiac, hypereosinophilic syndrome
Procedia PDF Downloads 65388 Synthesis of High-Antifouling Ultrafiltration Polysulfone Membranes Incorporating Low Concentrations of Graphene Oxide
Authors: Abdulqader Alkhouzaam, Hazim Qiblawey, Majeda Khraisheh
Abstract:
Membrane treatment for desalination and wastewater treatment is one of the promising routes to affordable clean water. It is a developing technology throughout the world and is considered the most effective and economical method available. However, the limitations of membranes' mechanical and chemical properties restrict their industrial applications. Hence, developing novel membranes has been the focus of most studies in the water treatment and desalination sector, with the goal of finding new materials that can improve separation efficiency while reducing membrane fouling, the most important challenge in this field. Graphene oxide (GO) is one of the materials recently investigated in the membrane water treatment sector. In this work, ultrafiltration polysulfone (PSF) membranes with high antifouling properties were synthesized by incorporating different loadings of GO. GO with a high oxidation degree was synthesized using a modified Hummers' method. The synthesized GO was characterized using different analytical techniques, including Fourier transform infrared spectroscopy with a universal attenuated total reflectance sensor (FTIR-UATR), Raman spectroscopy, and CHNSO elemental analysis. The CHNSO analysis showed a high oxidation degree of the GO, represented by its oxygen content (50 wt.%). Ultrafiltration PSF membranes incorporating GO were then fabricated using the phase inversion technique. The prepared membranes were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM), which showed a clear effect of GO on the physical structure and morphology of PSF. The water contact angle of the membranes was measured and showed better hydrophilicity of the GO membranes compared to pure PSF, owing to the hydrophilic nature of GO. The separation properties of the prepared membranes were investigated using a cross-flow membrane system. Antifouling properties were studied using bovine serum albumin (BSA) and humic acid (HA) as model foulants. It was found that GO-based membranes exhibit higher antifouling properties compared to pure PSF. When using BSA, the flux recovery ratio (FRR) increased from 65.4 ± 0.9% for pure PSF to 84.0 ± 1.0% with a loading of 0.05 wt.% GO in PSF. When using HA as the model foulant, the FRR increased from 87.8 ± 0.6% to 93.1 ± 1.1% with 0.02 wt.% of GO in PSF. The pure water permeability (PWP) decreased with GO loading, from 181.7 L·m⁻²·h⁻¹·bar⁻¹ for pure PSF to 181.1 and 157.6 L·m⁻²·h⁻¹·bar⁻¹ at 0.02 and 0.05 wt.% GO, respectively. It can be concluded from the obtained results that incorporating a low loading of GO can enhance the antifouling properties of PSF, hence improving its lifetime and reusability. Keywords: antifouling properties, GO based membranes, hydrophilicity, polysulfone, ultrafiltration
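For clarity, the flux recovery ratio reported above is conventionally defined as

$$\mathrm{FRR} \;=\; \frac{J_{w,2}}{J_{w,1}} \times 100\%,$$

where $J_{w,1}$ is the pure-water flux of the fresh membrane and $J_{w,2}$ is the pure-water flux after fouling and cleaning; a higher FRR thus indicates better antifouling behaviour.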
Procedia PDF Downloads 143387 Fighting the Crisis with 4.0 Competences: Higher Education Projects in the Times of Pandemic
Authors: Jadwiga Fila, Mateusz Jezowski, Pawel Poszytek
Abstract:
The outbreak of the global COVID-19 pandemic started a time of crisis full of uncertainty, especially in the field of transnational cooperation projects based on the international mobility of their participants. This is notably the case for the Erasmus+ Program for higher education, the flagship European initiative boosting cooperation between educational institutions, businesses, and other actors, enabling student and staff mobility as well as strategic partnerships between different parties. The aim of this abstract is to study whether competences 4.0 are able to empower Erasmus+ project leaders in sustaining their international cooperation in times of global crisis, widespread online learning, and common project disruption or cancellation. The concept of competences 4.0 emerged from the notion of Industry 4.0, and it relates to skills that are fundamental for the current labor market. For the aim of the study presented in this abstract, four main 4.0 competences were distinguished: digital, managerial, social, and cognitive competence. The hypothesis for the study stipulated that the above-mentioned highly developed competences may act as a protective shield against pandemic challenges in terms of projects' sustainability and continuation. The objective of the research was to assess to what extent individual competences are useful in managing projects in times of crisis. For this purpose, a study was conducted involving, among others, 141 Polish higher education project leaders who were running their cooperation projects during the peak of the COVID-19 pandemic (March to November 2020). The research explored the self-perception of the above-mentioned competences among Erasmus+ project leaders and the contextual data regarding the sustainability of the projects. The quantitative character of the data permitted validation of the scales (Cronbach's alpha), and the use of factor analysis made it possible to create a distinctive variable for each competence and its dimensions. Finally, logistic regression was used to examine the association of competences and other factors with project status. The study shows that the project leaders' competence profile attributed the highest score to digital competence (4.36 on the 1-5 scale). Slightly lower values were obtained for cognitive competence (3.96) and managerial competence (3.82). The lowest score was accorded to one specific dimension of social competence, adaptability and the ability to manage stress (1.74), which shows that the pandemic was a real challenge that project coordinators had to face. Of the higher education projects, 10% were suspended or prolonged because of the COVID-19 pandemic, whereas 90% were undisrupted (continued or already successfully finished). The quantitative analysis showed a positive relationship between the leaders' levels of competences and project status. For all competences, the scores were higher for project leaders who finished projects successfully than for leaders who suspended or prolonged their projects. The research demonstrated that, in the demanding times of the COVID-19 pandemic, competences 4.0, to a certain extent, do play a significant role in the successful management of Erasmus+ projects.
The implementation and sustainability of international educational projects, despite mobility and sanitary obstacles, depended, among other factors, on the level of leaders' competences. Keywords: Competences 4.0, COVID-19 pandemic, Erasmus+ Program, international education, project sustainability
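As an illustration of the scale-validation step mentioned above, a minimal sketch of Cronbach's alpha (the responses below are hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert answers of five leaders to four digital-competence items
scores = [[4, 5, 4, 4], [3, 4, 3, 4], [5, 5, 4, 5], [2, 3, 3, 2], [4, 4, 5, 4]]
print(round(cronbach_alpha(scores), 3))  # values above ~0.7 support reliability
```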
Procedia PDF Downloads 94386 Developing an Innovative General Foundation Programme (GFP) and an IELTS Centre in a New Military College
Authors: Jessica Peart, Sarim Al Zubaidy
Abstract:
This paper examines the main dialogic and reformative aspects that have constituted the developing implementation of an English language module in a common pre-sessional program in Oman, the General Foundation Program (GFP), at the new Military Technological College (MTC) in Oman's capital, Muscat. The MTC is the first of its kind in the country to merge military with academic training and has been running programs since September 2013, over five trimesters to date, receiving external validation and accreditation from the University of Portsmouth (UoP), UK. From this starting point, we will provide context on the parameters that necessitated delivery of this common but specially tailored pre-sessional program at the MTC and outline in detail how the English module, with integrated key study skills and personal tutoring support, was initially conceived before operations commenced and cooperation between all stakeholders took practical shape. This enquiry traces how stakeholders, from students to faculty, college boards, and collaborating university partners, have considered and redefined the partly static, partly dynamic boundaries of their larger- and smaller-scale stakes. With regard to the widely held recognition that pre-sessional students require training in transferable study skills in order to succeed at university, we will chart the subsequent and ongoing adjustments made to the generic, pastoral, and integrated elements of the program. Driving this concerted effort has been, at base, the need for a GFP concerned with three criteria for incoming MTC student cadets: to develop each candidate's rounded capacity for intellectual, technical, and physical skill as both student and cadet; to generate linguistic proficiency and discerning use of appropriate language registers; and to allow personal and collective time for adjustment to a multilayered, brand-new environment, while also working within a regulated timeline for academic progression to the MTC diploma or degree levels. The English Department teaching staff's facilitation of the initial program's methodologies and timeframe for the GFP English module has garnered a keen and diverse sense of the holistic student cadet experience, as a range of alterations to the program demonstrates. These include alterations to the class types and overall program duration, as well as greater multiplicity of exposure within learning environments. In surveying the impact of these composite maneuvers and challenges within a proactive and evolving context of teaching and learning, it is finally demonstrated how student cadet levels of productivity and self-reliance on the one hand, and retention issues on the other, are being gainfully steered towards progression within a framework of inclusive reciprocal dialogue, thereby gathering civilian and military backgrounds toward uniquely united ends. Keywords: English module transferable skills, faculty dialogue, governance structure, overarching regulatory agencies
Procedia PDF Downloads 276