Search results for: critical challenges
989 Cognition in Crisis: Unravelling the Link Between COVID-19 and Cognitive-Linguistic Impairments
Authors: Celine Davis
Abstract:
The novel coronavirus disease 2019 (COVID-19) is an infectious disease caused by the virus SARS-CoV-2, which has detrimental respiratory, cardiovascular, and neurological effects and has impacted over one million lives in the United States. New research has emerged indicating long-term neurologic consequences in those who survive COVID-19 infection, including more than seven million Americans and another 27 million people worldwide. These consequences include attentional deficits, memory impairments, executive function deficits, and aphasia-like symptoms, which fall within the purview of speech-language pathology. The National Health Interview Survey (NHIS) is a comprehensive annual survey conducted by the National Center for Health Statistics (NCHS), a branch of the Centers for Disease Control and Prevention (CDC) in the United States. The NHIS is one of the most significant sources of health-related data in the country and has been conducted since 1957. The longitudinal nature of the survey allows for analysis of trends in various variables over the years, which can be essential for understanding societal changes and making treatment recommendations. The current study will utilize NHIS data from 2020-2022, which contained interview questions specifically related to COVID-19. Adults between the ages of 18 and 50 diagnosed with COVID-19 in the United States during 2020-2022 will be identified using the NHIS. Multiple regression analysis of self-reported data on confirmed COVID-19 infection status and challenges with concentration, communication, and memory will be performed. Latent class analysis will be utilized to identify subgroups in the population and indicate whether certain demographic groups have higher susceptibility to cognitive-linguistic deficits associated with COVID-19. Completion of this study will reveal whether there is an association between confirmed COVID-19 diagnosis and heightened incidence of cognitive deficits, and the subsequent implications, if any, for activities of daily living. This study is distinct in its aim to utilize national survey data to explore the relationship between confirmed COVID-19 diagnosis and the prevalence of cognitive-communication deficits, with a secondary focus on resulting activity limitations. To the best of the author's knowledge, this will be the first large-scale epidemiological study investigating the associations between cognitive-linguistic deficits, COVID-19, and implications for activities of daily living in the United States population. These findings will highlight the need for targeted interventions and support services to address the cognitive-communication needs of individuals recovering from COVID-19, thereby enhancing their overall well-being and functional outcomes.
Keywords: cognition, COVID-19, language, limitations, memory, NHIS
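As an illustration of the planned analysis step, a minimal sketch of a regression on NHIS-style self-report data is given below; the file name and variable names (covid_ever, difficulty_remembering, sex, education) are hypothetical placeholders rather than actual NHIS field codes, and a logistic specification is shown as one plausible reading of the planned multiple regression on a binary difficulty indicator.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per adult respondent aged 18-50 with self-reported fields
# (column names below are illustrative placeholders, not actual NHIS codes)
df = pd.read_csv("nhis_2020_2022_subset.csv")

# Logistic regression: odds of reported memory/concentration difficulty
# as a function of confirmed COVID-19 status, adjusting for demographics
model = smf.logit(
    "difficulty_remembering ~ covid_ever + age + C(sex) + C(education)",
    data=df,
).fit()
print(model.summary())
```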
Procedia PDF Downloads 50
988 A Cross-Cultural Validation of the Simple Measure of Impact of Lupus Erythematosus in Youngsters (SMILEY) among Filipino Pediatric Lupus Patients
Authors: Jemely M. Punzalan, Christine B. Bernal, Beatrice B. Canonigo, Maria Rosario F. Cabansag, Dennis S. Flores, Paul Joseph T. Galutira, Remedios D. Chan
Abstract:
Background: Systemic lupus erythematosus (SLE) is one of the most common autoimmune disorders and predominates in women of childbearing age. The Simple Measure of Impact of Lupus Erythematosus in Youngsters (SMILEY) is the only health-related quality of life tool specific to pediatric SLE, and it has been translated into several languages but not into Filipino. Objective: The primary objective of this study was to develop a Filipino translation of the SMILEY and to examine the validity and reliability of this translation. Methodology: The SMILEY was translated into Filipino by a bilingual individual and back-translated by another bilingual individual blinded to the original English version. The translation was evaluated for content validity by a panel of experts and subjected to pilot testing. The pilot-tested translation was used in the validity and reliability testing proper. The SMILEY, together with the previously validated PedsQL 4.0 Generic Core Scale, was administered to pediatric lupus patients and their parents on two separate occasions: a baseline and a re-test seven to fourteen days apart. Tests for convergent validity, internal consistency, and test-retest reliability were performed. Results: A total of fifty children and their parents were recruited. The mean age was 15.38±2.62 years (range 8-18 years), and mean educational attainment was at the high school level. The mean duration of SLE was 28 months (range 1-81 months). Subjects found the questionnaires relevant and easy to understand and answer. The validity of the SMILEY was demonstrated in terms of content validity, convergent validity, internal consistency, and test-retest reliability. Age, socioeconomic status, and educational attainment did not show a significant effect on the scores. The difference between child- and parent-reported scores was shown to be significant for the SMILEY total (p=0.0214), effect on social life (p=0.0000), and PedsQL physical function (p=0.0460), with child reports showing higher scores in these domains than parent reports. Conclusion: SMILEY is a brief, easy to understand, valid, and reliable tool for assessing pediatric SLE-specific HRQOL. It will be useful in providing better care and understanding, and may offer critical information regarding the effect of SLE on the quality of life of our pediatric lupus patients. It will help physicians understand the needs of their patients, not only regarding treatment of the specific disease but also the impact of that treatment on their daily lives.
Keywords: systemic lupus erythematosus, pediatrics, quality of life, Simple Measure of Impact of Lupus Erythematosus in Youngsters (SMILEY)
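A minimal sketch of the internal-consistency and test-retest computations described above, assuming hypothetical column names (item_*, total_baseline, total_retest) and Pearson correlation for the test-retest step:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a set of item scores (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# baseline and retest SMILEY scores collected 7-14 days apart (hypothetical columns)
scores = pd.read_csv("smiley_scores.csv")
alpha = cronbach_alpha(scores.filter(like="item_"))
test_retest_r = scores["total_baseline"].corr(scores["total_retest"])
print(f"Cronbach's alpha: {alpha:.2f}, test-retest r: {test_retest_r:.2f}")
```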
Procedia PDF Downloads 442
987 Cross Carpeting in Nigerian Politics: Some Legal and Moral Issues Generated
Authors: Agbana Olaseinde Julius, Opadere Olaolu Stephen
Abstract:
The concept of cross carpeting is as old as politics itself. Basically, it entails an individual leaving a political party or group to join another. The reasons for which cross carpeting is embarked upon are diverse: ideological differences; ethnic and/or religious differences; access to actual or perceived better political opportunities; liberty of association; rancor; etc. The current democratic dispensation in Nigeria has experienced a renewed and rather alarming rate of cross carpeting, for reasons including those enumerated above and others. The right to cross carpet is inherent in a democratic setting and in the political stakeholder, and it also falls within the constitutional right of freedom of association. However, the current species of cross carpeting in Nigeria requires scrutiny, in view of some potential legal and moral challenges it poses for both the present and the future. Cross carpeting is considered both legal and constitutional, but the current spate raises the question of expediency, particularly in a nascent democracy. It is considered to have a propensity to negatively impact political stability in a polity with fragile nerves. Importantly, too, cross carpeting is considered potentially damaging to the psyche of posterity with regard to a warped disposition towards promises, honour, and integrity. The perceived peculiar dimension of cross carpeting in Nigeria raises questions about the quality of leadership presently obtainable in the country, vis-à-vis greed, self-centeredness, disregard for the concerns and interests of avowed followers/fans, entrenchment of distrust, etc. The study made use of primary and secondary sources of information. The primary sources included the Constitution of the Federal Republic of Nigeria 1999 (as amended), judicial decisions, and the Electoral Act 2010 (as amended). The secondary sources comprised information from books, journals, newspapers, magazines, and Internet documents. Data obtained from these sources were subjected to content analysis. Findings of this study show that, though the act of cross carpeting may not be in breach of any statute or law, it nonetheless, in most cases, breaches the morality of expediency. The morality thereof is far from justifiable and should be condemned in the interest of the present and posterity. There is a great and urgent need to embark on a re-entrenchment of the culture of political ideology in the Nigerian polity, as obtains in developed democracies. In conclusion, the need to exercise the right of cross carpeting with caution cannot be overemphasized. Membership of a political group or party should be backed by commitment to well-defined ideologies and values, and commitment to them should be regarded as akin to that found in the family, which is not easily or flippantly jettisoned.
Keywords: cross-carpeting, Nigeria, legal, moral issues, politics
Procedia PDF Downloads 446
986 The Solid-Phase Sensor Systems for Fluorescent and SERS-Recognition of Neurotransmitters for Their Visualization and Determination in Biomaterials
Authors: Irina Veselova, Maria Makedonskaya, Olga Eremina, Alexandr Sidorov, Eugene Goodilin, Tatyana Shekhovtsova
Abstract:
Catecholamines such as dopamine, norepinephrine, and epinephrine are the principal neurotransmitters in the sympathetic nervous system. Catecholamines and their metabolites are considered to be important markers of socially significant diseases such as atherosclerosis, diabetes, coronary heart disease, carcinogenesis, and Alzheimer's and Parkinson's diseases. Currently, neurotransmitters can be studied via electrochemical and chromatographic techniques that allow their characterization and quantification, although these techniques can only provide crude spatial information. Besides, the difficulty of catecholamine determination in biological materials is associated with their low normal concentrations (~1 nM) in biomaterials, which may fall another order of magnitude lower because of some disorders. In addition, in blood they are rapidly oxidized by monoamine oxidases from thrombocytes and, for this reason, the determination of neurotransmitter metabolism indicators in an organism should be very rapid (15-30 min), especially in critical states. Unfortunately, modern instrumental analysis does not offer a comprehensive solution to this problem: despite its high sensitivity and selectivity, HPLC-MS cannot provide sufficiently rapid analysis, while enzymatic biosensors and immunoassays for the determination of the considered analytes lack sufficient sensitivity and reproducibility. Fluorescent and SERS sensors remain a compelling technology for approaching the general problem of selective neurotransmitter detection. In recent years, a number of catecholamine sensors have been reported, including RNA aptamers, fluorescent ribonucleopeptide (RNP) complexes, and boronic acid based synthetic receptors, with the sensors operating in a turn-off mode. In this work we present fluorescent and SERS turn-on sensor systems based on the bio- or chemorecognizing nanostructured films {chitosan/collagen-Tb/Eu/Cu-nanoparticles-indicator reagents} that provide the selective recognition, visualization, and sensing of the above-mentioned catecholamines at nanomolar concentrations in biomaterials (cell cultures, tissue, etc.). We have (1) developed optically transparent porous films and gels of chitosan/collagen; (2) ensured functionalization of the surface by 'recognizer' molecules (by impregnation and immobilization of components of the indicator systems: biorecognizing and auxiliary reagents); and (3) performed computer simulation for theoretical prediction and interpretation of some properties of the developed materials and of the analytical signals obtained in biomaterials. We are grateful for the financial support of this research from the Russian Foundation for Basic Research (grants no. 15-03-05064 a and 15-29-01330 ofi_m).
Keywords: biomaterials, fluorescent and SERS-recognition, neurotransmitters, solid-phase turn-on sensor system
Procedia PDF Downloads 406
985 Deasphalting of Crude Oil by Extraction Method
Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov
Abstract:
Asphaltenes are the heavy fraction of crude oil. In oilfields, asphaltenes are known for their ability to plug wells, surface equipment, and the pores of geologic formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). The metal content was analyzed by ICP-MS, and the spectral features during deasphalting were characterized by FTIR. A high content of asphaltenes in crude oil reduces the efficiency of refining processes. Moreover, the high distribution of heteroatoms (e.g., S, N) in asphaltenes causes several problems: environmental pollution, corrosion, and poisoning of the catalyst. The main objective of this work is to study the effect of the deasphalting process on crude oil in order to improve its properties and increase the efficiency of refining processes. Solvent extraction experiments using organic solvents were carried out on crude oil from JSC "Pavlodar Oil Chemistry Refinery". Experimental results show that the deasphalting process also leads to a decrease of Ni and V in the composition of the oil. One solution to the problem of cleaning oils of metals, hydrogen sulfide, and mercaptans is absorption with chemical reagents directly in the oil residue during production, since asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective refining. Deasphalting of crude oil is necessary to separate the light fraction from the heavy, metal-bearing asphaltene part of the crude. For this reason, the oil is pretreated by deasphalting, because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e., removal of V/Ni and of organic compounds with heteroatoms along with the asphaltenes. Intramolecular complexes are relatively well researched using the example of the porphyrin complexes of vanadyl (VO²⁺) and nickel (Ni). Studies of V/Ni by the ICP-MS method determined the effect of different deasphalting solvents on the extraction of metals at the deasphalting stage and allowed selection of the best organic solvent. Cyclohexane (C6H12) proved to be the best deasphalting solvent, removing, according to ICP-MS, 51.2% of V and 66.4% of Ni. This paper also presents the results of a study of the physical and chemical properties and FTIR spectral characteristics of the oil, with a view to establishing its hydrocarbon composition. The information about the whole oil obtained by IR spectroscopy gives provisional physical and chemical characteristics, which can be useful in considering questions of the origin and geochemical conditions of accumulation of the oil, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stability mechanism of asphaltenes. The role of deasphalted crude oil fractions in asphaltene stability is described.
Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy
Procedia PDF Downloads 241
984 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents
Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat
Abstract:
This research reports a systematic top-down approach for designing neoteric hydrophobic solvents, particularly deep eutectic solvents (DESs) and ionic liquids (ILs), as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application's conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene's 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical properties, thermal properties, critical properties, and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%), with a remarkable equilibrium time of only 2 minutes, and, most notably, demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated as stable over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%). Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations, with values of 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including their high efficiency in both extraction and regeneration processes and their stability and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. The exceptional efficacy of the newly developed neoteric solvents signifies a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents
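A minimal sketch of the ANN regression and the reported error metric (absolute average relative deviation, AARD); the feature set, network size, and synthetic data below are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: operating conditions (e.g. temperature, pH, furfural wt%, solvent-to-feed ratio)
# y: measured extraction efficiency (%); placeholder synthetic values for illustration
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = 80 + 15 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, y)

def aard(y_true, y_pred):
    """Absolute average relative deviation, in percent."""
    return 100 * np.mean(np.abs((y_pred - y_true) / y_true))

print(f"AARD = {aard(y, model.predict(X)):.2f}%")
```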
Procedia PDF Downloads 68
983 From Clients to Colleagues: Supporting the Professional Development of Survivor Social Work Students
Authors: Stephanie Jo Marchese
Abstract:
This oral presentation is a reflective piece regarding current social work teaching methods that value and devalue the lived experiences of survivor students. The presentation grounds the term 'survivor' in feminist frameworks. A survivor-defined approach to feminist advocacy assumes an individual's agency, considers each case and its needs independent of generalizations, and provides resources and support to empower victims. Feminist ideologies are ripe arenas with which to update and influence the rapport-building that schools of social work undertake with these students. Survivor-based frameworks are rooted in nuanced understandings of intersectional realities, staunchly combat both conscious and unconscious deficit lenses wielded against victims, elevate lived experiences to the realm of experiential expertise, and offer alternatives to traditional power structures and knowledge exchanges. Actively importing a survivor framework into the methodology of social work teaching breaks open barriers many survivor students have faced in institutional settings, this author included. The profession of social work is at an important crux of change, both in the United States and globally. The United States is currently undergoing a radical change in its citizenry, and outlier communities have taken to the streets again in opposition to their othered-ness. New waves of students are entering this field, emboldened by their survival of personal and systemic oppressions and heavily influenced by third-wave feminism, critical race theory, and queer theory, among other post-structuralist ideologies. Traditional models of sociological and psychological study are actively being challenged. The profession of social work was founded not on the diagnosis of disorders but rather on grassroots-level activism that heralded and demanded resources for oppressed communities. Institutional and classroom acceptance and celebration of survivor narratives can catapult the resurgence of these values needed in the profession's service-delivery models and put social workers back in the driver's seat of social change (a combined advocacy and policy perspective), moving away from outsider-based intervention models. Survivor students should be viewed as agents of change, not solely as former victims and clients. The ideas of this presentation proposal are supported through various qualitative interviews, as well as reviews of 'best practices' in the field of education that incorporate feminist methods of inclusion and empowerment. Curriculum and policy recommendations are also offered.
Keywords: deficit lens bias, empowerment theory, feminist praxis, inclusive teaching models, strengths-based approaches, social work teaching methods
Procedia PDF Downloads 288
982 Processing, Nutritional Assessment and Sensory Evaluation of Bakery Products Prepared from Orange Fleshed Sweet Potatoes (OFSP) and Wheat Composite Flours
Authors: Hategekimana Jean Paul, Irakoze Josiane, Ishimweyizerwe Valentin, Iradukunda Dieudonne, Uwanyirigira Jeannette
Abstract:
Orange-fleshed sweet potatoes (OFSP) are widely grown and plentifully available in rural and urban local markets, and their contribution to the reduction of food insecurity in Rwanda is considerable. However, the postharvest loss of this commodity is a critical challenge due to its high perishability. Several research activities have been conducted on how fresh food commodities can be transformed into extended-shelf-life food products to prevent postharvest losses, but such work has not yet been well studied in Rwanda. The aim of the present study was to process baked products from OFSP combined with wheat composite flour and to assess the nutritional content and consumer acceptability of the newly developed products. The perishability of OFSP, and the related scarcity during the off season, can be reduced by producing cake, doughnuts, and bread with OFSP puree or flour. Doughnuts and bread were processed by preparing OFSP puree, mixing it with the other ingredients into a dough, and then frying or baking; for cake, OFSP was dried in a solar dryer to obtain flour, which was mixed with wheat flour and the other ingredients into a batter and baked. For each product, one control and three experimental samples were prepared, the latter at three ratios of OFSP (30, 40, and 50%) with the remaining percentage wheat flour. All samples, including the control, were analyzed for consumer acceptability (sensory attributes). The most preferred samples (one sample for each product and each OFSP variety, along with its control) were analyzed for nutritional composition. The cake from the Terimbere variety and the bread from Gihingumukungu, supplemented with 50% OFSP flour or puree respectively, were the most acceptable, except for the doughnut from the Vita variety, which was highly accepted at 50% OFSP supplementation. The moisture, ash, protein, fat, fiber, total carbohydrate, vitamin C, reducing sugar, and mineral (sodium, potassium, and phosphorus) contents differed among products. Cake was rich in fiber (14.71%), protein (6.590%), and vitamin C (19.988 mg/100 g) compared to the other samples, while bread was found to be rich in reducing sugar, with 12.71 mg/100 g, compared to cake and doughnut. Doughnut was found to be rich in fat, with 6.89%, compared to the other samples. For sensory analysis, doughnut was most highly accepted at a 60:40 ratio compared to the other products, while cake was least accepted at a 50:50 ratio. The proximate composition and mineral content of all the OFSP products were significantly higher than those of the control samples.
Keywords: post-harvest loss, OFSP products, wheat flour, sensory evaluation, proximate composition
Procedia PDF Downloads 60
981 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050
Authors: Ali Hashemifarzad, Jens Zum Hingst
Abstract:
The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and also worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. Germany aims for complete decarbonization of the energy sector by 2050 according to the federal climate protection plan. One of the stipulations of the Renewable Energy Sources Act 2017 for the expansion of energy production from renewable sources in Germany is that renewables cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050 the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewable energies is a prediction of the energy requirement in 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for energy efficiency increase and demand reduction are set very ambitiously. To provide a basis for comparison, the second scenario gives results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which in the past were significant drivers of the increase in energy demand. The potential for demand-side energy reduction and efficiency increase was also investigated. In particular, current and future technological developments in energy consumption sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included. Here, in addition to the traditional electricity sector, heat and fuel-based consumption in sectors such as households, commercial, industrial, and transport are taken into account, supporting the idea that, for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a electricity, 229 TWh/a ambient heat for electric heat pumps, and approx. 315 TWh/a non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand of 1,682 TWh/a includes a higher electricity share of almost 1,138 TWh/a. It has also been estimated that 50% of the electricity yield must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), the electricity requirement for the very ambitious scenario would therefore increase to 1,227 TWh/a.
Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production
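The component figures for the ambitious scenario add up as stated, and the quoted 1,227 TWh/a can be reproduced under the simple assumption, not spelled out in the abstract, that half of the 818 TWh/a of electricity passes through storage with roughly 50% conversion and storage losses:

```python
electricity, ambient_heat, non_electric = 818, 229, 315   # TWh/a, ambitious scenario
print(electricity + ambient_heat + non_electric)          # 1362 -> matches 1,362 TWh/a

# assumed storage model: half of the electricity is routed through storage at ~50% efficiency
direct = 0.5 * electricity
via_storage = 0.5 * electricity / 0.5
print(direct + via_storage)                               # 1227.0 -> matches 1,227 TWh/a
```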
Procedia PDF Downloads 133
980 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory's efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If scenarios could be modelled using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. printing specimen labels or activating a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated, and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to see clearly where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late, and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them, and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages, and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. 'Bots' would be used to control the flow of specimens through each step of the process. Like existing software agent technology, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
Keywords: laboratory-process, optimization, pathology, computer simulation, workflow
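The paper describes a JavaScript single-page application; purely as a neutral illustration of the traffic-light colour-coding logic it mentions, a short sketch is given below, with the load thresholds chosen arbitrarily rather than taken from the paper:

```python
def flow_status(samples_in_stage: int, normal_capacity: int) -> str:
    """Classify the flow through one workflow stage with a traffic-light scheme.
    Thresholds are illustrative, not taken from the paper."""
    load = samples_in_stage / normal_capacity
    if load < 0.8:
        return "green"    # normal flow
    if load < 1.0:
        return "orange"   # slow flow
    return "red"          # critical flow

# hypothetical stage loads: (pending specimens, normal hourly capacity)
for stage, (pending, capacity) in {"reception": (40, 60),
                                   "testing": (55, 60),
                                   "validation": (70, 60)}.items():
    print(stage, flow_status(pending, capacity))
```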
Procedia PDF Downloads 285
979 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser
Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett
Abstract:
Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the hydrogen economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and how these influence the overall efficiency. This concerns in particular the two-phase flow through the membrane, gas diffusion layers (GDL), and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as a commercial hydrogen production technology, new analytical approaches have to be found. Acoustic emission (AE) offers the possibility of analysing the processes within a PEMWE in a non-destructive, fast, and cheap in-situ way. This work describes the generation and analysis of AE data coming from a PEM water electrolyser for, to the best of our knowledge, the first time in the literature. Different experiments are carried out. Each experiment is designed so that only specific physical processes occur and AE solely related to one process can be measured. Therefore, a range of experimental conditions is used to induce different flow regimes within the flow channels and GDL. The resulting AE data is first separated into different events, which are defined by exceeding the noise threshold. Each acoustic event consists of a number of consecutive peaks and ends when the wave diminishes below the noise threshold. For all these acoustic events the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, peaks before the maximum, average intensity of a peak, and time until the maximum is reached. Each event is then expressed as a vector containing the normalized values for all criteria. Principal component analysis is performed on the resulting data, which orders the criteria by the eigenvalues of their covariance matrix. This can be used as an easy way of determining which criteria convey the most information on the acoustic data. The data is then ordered in the two- or three-dimensional space formed by the most relevant criteria axes. By finding regions of this space occupied only by acoustic events originating from one of the three experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in-situ. Using the AE data produced beforehand allows a self-learning algorithm to be trained and an analytical tool to be developed to diagnose different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows for in-situ optimization and recognition of suboptimal states of operation.
Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser
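A minimal sketch of the event-feature normalisation and principal component analysis step described above, using placeholder feature values:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

# One row per acoustic event: [max peak amplitude, duration, number of peaks,
# peaks before maximum, average peak intensity, time to maximum] (placeholder data)
events = np.random.default_rng(0).random((500, 6))

# Normalise each criterion, as described for the event vectors
normalised = MinMaxScaler().fit_transform(events)

# PCA orders the criteria by the eigenvalues of their covariance matrix
pca = PCA(n_components=3)
projected = pca.fit_transform(normalised)
print(pca.explained_variance_ratio_)   # share of variance carried by each component
```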
Procedia PDF Downloads 155
978 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough set theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, but with the increase of the data, the computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing the redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for fetching as well as processing of instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes; none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, the hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct; ii) the additional first stage may be unnecessary if the core is empty. However, for systems focused on fast computation of the reduct, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and in software were compared. The results show an increase in the speed of data processing.
Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
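A minimal software sketch of the two-stage idea (core from singleton discernibility entries, then greedy enrichment) is given below; the stage-two frequency criterion used here is the standard greedy covering heuristic over still-uncovered discernibility entries, which may differ in detail from the frequency measure used in the paper:

```python
from itertools import combinations

def discernibility_entries(table, decision):
    """Non-empty discernibility-matrix entries for object pairs with different decisions.
    table: list of dicts {attribute: value}; decision: list of decision values."""
    entries = []
    for i, j in combinations(range(len(table)), 2):
        if decision[i] != decision[j]:
            diff = frozenset(a for a in table[i] if table[i][a] != table[j][a])
            if diff:
                entries.append(diff)
    return entries

def two_stage_superreduct(table, decision):
    entries = discernibility_entries(table, decision)
    # Stage 1: core = union of all singleton entries (indispensable attributes)
    reduct = {next(iter(e)) for e in entries if len(e) == 1}
    # Stage 2: greedily add the attribute occurring most often in uncovered entries
    uncovered = [e for e in entries if not (e & reduct)]
    while uncovered:
        counts = {}
        for e in uncovered:
            for a in e:
                counts[a] = counts.get(a, 0) + 1
        best = max(counts, key=counts.get)
        reduct.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return reduct

# toy decision table: condition attributes a, b, c and a binary decision
objects = [
    {"a": 1, "b": 0, "c": 1},
    {"a": 1, "b": 1, "c": 0},
    {"a": 0, "b": 1, "c": 1},
    {"a": 0, "b": 0, "c": 0},
]
decisions = [1, 1, 0, 0]
print(two_stage_superreduct(objects, decisions))  # {'a'}
```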
Procedia PDF Downloads 218
977 Implementation of the Circular Economy Concept in Greenhouse Production Systems: Microalgae and Biostimulant Production Using Soilless Crops’ Drainage Nutrient Solution
Authors: Nikolaos Katsoulas, Sofia Faliagka, George Kountrias, Eleni Dimitriou, Eleftheria Pechlivani
Abstract:
The challenges of feeding the world in 2050 are becoming more and more apparent. This calls for producing more with fewer inputs (most of them under scarcity), higher resource efficiency, minimal or zero effect on the environment, and higher sustainability. Therefore, increasing the circularity of production systems is highly significant for their sustainability. Protected horticulture offers opportunities for maximum resource efficiency at various levels (within and between farms and at the regional level) and for high-quality production, and it contributes significantly to nutrition security as part of world food production. In greenhouses, closed soilless cultivation systems give the opportunity to increase water and nutrient use efficiency and to reduce the environmental impact of the cultivation system through the reuse of the drained water and nutrients. However, due to the low quality of the water used in the Mediterranean countries, a completely closed system is not feasible. Partial discharge of the drainage nutrient solution, when threshold levels of electrical conductivity (EC) or of toxic ions in the system are reached, is still a necessity. Thus, within the circular economy concept, this work presents the utilisation of the drainage solution of soilless cultivation systems for microalgae and biofertiliser production. The system includes a greenhouse equipped with a soilless cultivation system, a drainage solution collection tank, a closed bioreactor for microalgae production, and a biocatalysis tank. The bioreactor tested in the frame of this work includes two closed tube loops with a capacity of 1000 L each where, after the initial inoculation, the microalgae are grown using the drainage solution collected from the greenhouse crops as the growth medium. The bioreactor includes light and temperature control, while pH is still manually regulated. As soon as the microalgae culture reaches a certain density level, 20% of the culture is harvested, and the culture system is refilled with drainage nutrient solution. The microalgae produced then go through a biocatalysis process, which leads to the production of a biofertiliser rich in amino acids (and nitrogen). The produced biofertiliser is then used for the fertilisation of greenhouse crops. The complete production cycle, along with the effects of the produced biofertiliser on crop growth and yield, is presented and discussed in this manuscript. Acknowledgment: This work was carried out under the PestNu project, which has received funding from the European Union's Horizon 2020 research and innovation programme under Green Deal grant agreement No. 101037128 (PestNu).
Keywords: soilless, water use efficiency, nutrients use efficiency, biostimulant
Procedia PDF Downloads 86
976 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, can serve as tools for increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which helps provide clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance by enabling better business insights through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by feedback from AEKIDEN's experience. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 133
975 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and to the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of SFS dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. The exploration then extends to filter anisotropy to address its impact on SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, filter aspect ratios (AR) ranging from 1 to 16 are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models; however, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
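For reference, a minimal sketch of the quantities involved, in standard LES notation and under the assumption of an invertible filter kernel G (the exact formulation used in the study may differ):

```latex
% filtered velocity \tilde{u}_i; G an invertible filter kernel; * denotes convolution
\tau_{ij} = \widetilde{u_i u_j} - \tilde{u}_i\,\tilde{u}_j
\qquad \text{(unclosed SFS stress)}

u_i^{*} = G^{-1} \ast \tilde{u}_i
\qquad \text{(deconvolved velocity)}

\tau_{ij}^{\mathrm{DDM}} = \widetilde{u_i^{*} u_j^{*}} - \tilde{u}_i\,\tilde{u}_j
\qquad \text{(modeled SFS stress)}
```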
Procedia PDF Downloads 75
974 Patterns of TV Simultaneous Interpreting of Emotive Overtones in Trump’s Victory Speech from English into Arabic
Authors: Hanan Al-Jabri
Abstract:
Simultaneous interpreting is deemed by many scholars to be the most challenging mode of interpreting. The special constraints involved in this task, including time constraints, different linguistic systems, and stress, pose a great challenge to most interpreters. These constraints are likely to be maximised when the interpreting task is done live on TV. The TV interpreter is exposed to a wide variety of audiences with different backgrounds and needs and is mostly asked to interpret high-profile tasks, which raises the interpreter's stress levels and further complicates the task. Under these constraints, which require fast and efficient performance, TV interpreters from four TV channels were asked to render Trump's victory speech into Arabic. However, they also had to deal with the burden of rendering the English emotive overtones employed by the speaker into a wholly different linguistic system. The current study aims at investigating the way the TV interpreters, who worked in the simultaneous mode, handled this task; it aims at exploring and evaluating the TV interpreters' linguistic choices and whether the original emotive effect was maintained, upgraded, downgraded, or abandoned in their renditions. It also aims at exploring the possible difficulties and challenges that emerged during this process and might have influenced the interpreters' linguistic choices. To achieve its aims, the study analysed Trump's victory speech delivered on November 6, 2016, along with four Arabic simultaneous interpretations produced by four TV channels: Al-Jazeera, RT, CBC News, and France 24. The analysis relied on two frameworks: a macro and a micro framework. The former presents an overview of the wider context of the English speech as well as an overview of the speaker and his political background, to help understand the linguistic choices he made in the speech; the latter investigates the linguistic tools employed by the speaker to stir people's emotions. These tools were investigated based on Shamaa's (1978) classification of emotive meaning according to linguistic level: the phonological, morphological, syntactic, and semantic and lexical levels. The micro framework also examines the patterns of rendition detected in the Arabic deliveries. The results of the study identified different rendition patterns in the Arabic deliveries, including parallel rendition, approximation, condensation, elaboration, transformation, expansion, generalisation, explicitation, paraphrase, and omission. The emerging patterns, as suggested by the analysis, were influenced by factors such as the speedy and continuous delivery of some stretches and highly dense segments, among other factors. The study aims to contribute to a better understanding of TV simultaneous interpreting between English and Arabic, as well as of the practices of TV interpreters when rendering emotiveness, especially since little is known about interpreting practices in the field of TV, particularly between Arabic and English.
Keywords: emotive overtones, interpreting strategies, political speeches, TV interpreting
Procedia PDF Downloads 159
973 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation
Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang
Abstract:
The use of left ventricle assist devices (LVADs) has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve better long-term blood supply and fewer post-surgical complications such as thrombus generation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, uniform or pulsatile flow at the OG, etc. We have hypothesized that the optimal implantation of an LVAD is patient-specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of cardiac output at the aortic arch, together with other pertinent hemodynamic quantities, for each patient under various implantation scenarios, aiming at an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aorta from the patient's CT image, then seamlessly feeds the extracted morphology, together with the velocity waveform from an echo ultrasound image of the same patient, into the computational model to quantify 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes; thus it has great potential for massive numerical simulation and analysis. The systematic evaluation for one patient includes three OG anastomosis sites (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the resulting fifty-four scenarios. To confirm the hypothesis, 5 random patient cases will be evaluated.
Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics
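The fifty-four scenarios correspond to the full factorial combination of the four implantation factors listed above; a minimal sketch enumerating them, with labels taken from the abstract:

```python
from itertools import product

og_sites = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
pump_ratios = ["1:1", "1:2", "1:3"]          # LVAD : native heart pumping
og_angles = ["inclined upward", "perpendicular", "inclined downward"]
inflow = ["uniform", "pulsatile"]

scenarios = list(product(og_sites, pump_ratios, og_angles, inflow))
print(len(scenarios))  # 3 * 3 * 3 * 2 = 54
```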
Procedia PDF Downloads 132
972 A Holistic View of Microbial Community Dynamics during a Toxic Harmful Algal Bloom
Authors: Shi-Bo Feng, Sheng-Jie Zhang, Jin Zhou
Abstract:
The relationship between microbial diversity and algal blooms has received considerable attention for decades. Microbes undoubtedly affect annual bloom events and impact the physiology of both partners, as well as shape ecosystem diversity. However, knowledge about the interactions and network correlations among a broader spectrum of microbes that lead to the dynamics of a complete bloom cycle is limited. In this study, pyrosequencing and network approaches were used to simultaneously assess the association patterns among bacteria, archaea, and microeukaryotes in surface water and sediments in response to a natural dinoflagellate (Alexandrium sp.) bloom. In surface water, within the bacterial community, Gamma-Proteobacteria and Bacteroidetes dominated in the initial bloom stage, while Alpha-Proteobacteria, Cyanobacteria, and Actinobacteria became the most abundant taxa during the post-bloom stage. The archaeal community clustered predominantly with methanogenic members in the early pre-bloom period, while the majority of species identified in the late-bloom stage were ammonia-oxidizing archaea and Halobacteriales. Among the eukaryotes, the dinoflagellate (Alexandrium sp.) dominated in the onset stage, whereas multiple species (such as microzooplankton, diatoms, green algae, and rotifers) coexisted in the bloom collapse stage. In sediments, microbial biomass and species richness were much higher than in the water body, and only Flavobacteriales and Rhodobacterales showed a slight response to the bloom stages. Unlike the bacteria, the archaeal and eukaryotic community structures in the sediment showed only small fluctuations. The network analyses of the inter-specific associations show that bacteria (Alteromonadaceae, Oceanospirillaceae, Cryomorphaceae, and Piscirickettsiaceae) and some zooplankton (Mediophyceae, Mamiellophyceae, Dictyochophyceae, and Trebouxiophyceae) have a stronger impact on the structuring of phytoplankton communities than archaeal effects. The changes in populations were also significantly shaped by water temperature and substrate availability (N and P resources). The results suggest that clades are specialized at different time periods and that the pre-bloom succession was mainly bottom-up controlled, while the late-bloom period was controlled by top-down patterns. Additionally, the phytoplankton and prokaryotic communities correlated better with each other, which indicates that interactions among microorganisms are critical in controlling plankton dynamics and fates. Our results supply a wider view (across temporal and spatial scales) for understanding microbial ecological responses and their network associations during an algal bloom. They offer a potential multidisciplinary explanation of algal-microbe interactions and help us move beyond the traditional view of the patterns of algal bloom initiation, development, decline, and biogeochemistry.
Keywords: microbial community, harmful algal bloom, ecological process, network
Procedia PDF Downloads 113971 Harnessing Clinical Trial Capacity to Mitigate Zoonotic Diseases: The Role of Expert Scientists in Ethiopia
Authors: Senait Belay Adugna, Mirutse Giday, Tsegahun Manyazewal
Abstract:
Background: The emergence and resurgence of zoonotic diseases have continued to be a major threat to global health and the economy. Developing countries are particularly vulnerable due to agricultural expansion and the domestication of animals by humans. Scientifically sound clinical trials are important to find better ways to prevent, diagnose, and treat zoonotic diseases, yet there is a lack of evidence to inform clinical trial capacity and practice in countries highly affected by these diseases. This study aimed to investigate researchers' perceptions and experiences in conducting clinical trials on zoonotic diseases in Ethiopia. Methods: This study employed a descriptive, qualitative study design. It included major academic and research institutions in Ethiopia with active engagement in veterinary and public health research: the National Veterinary Institute, the National Animal Health Diagnostic and Investigation Center, the College of Veterinary Medicine at Addis Ababa University, the Ethiopian Public Health Institute, the Armauer Hansen Research Institute, and the College of Health Sciences at Addis Ababa University. In-depth interviews were conducted with 14 senior research investigators in these institutions with a proven record of primarily leading research activities or research units. Data were collected from October 2019 to April 2020. Data analysis was undertaken using OpenCode 4.03 for qualitative data analysis. Results: Five major themes, with 18 sub-themes, emerged from the in-depth interviews. These were: challenges in the prevention, control, and treatment of zoonotic diseases; the One Health approach to mitigate zoonotic diseases; personal and institutional experiences in conducting clinical trials on zoonotic diseases; barriers to conducting clinical trials on zoonotic diseases; and strategies that promote the conduct of clinical trials on zoonotic diseases. Conducting clinical trials on zoonotic diseases in Ethiopia is hampered by a lack of clearly articulated ethics and regulatory frameworks, trial experts, financial resources, and good governance. Conclusions: In Ethiopia, conducting clinical trials on zoonotic diseases deserves due attention. Strengthening institutional and human resource capacity is a precondition for the effective implementation of clinical trials on zoonotic diseases in the country. In Ethiopia, where skilled human resources are scarce, the One Health approach has the potential to form multidisciplinary teams to systematically improve clinical trial capacity and outcomes in the country.Keywords: Ethiopia, clinical trial, zoonoses, disease
Procedia PDF Downloads 91970 Health Literacy: Collaboration between Clinician and Patient
Authors: Cathy Basterfield
Abstract:
Issue: For people to engage in their own health care, health professionals need to be aware of an individual's specific skills and abilities in order to communicate effectively. One of the most discussed, and most often assumed, of these skills in adults is health literacy. Background: A review of publicly available health content suggests that writers assume all adult readers have a broad and full capacity to read at a high level of literacy, often at a post-school education level. Health information writers and clinicians need to recognise this as one critical reason why there may be little or no change in a person's behaviour, or why people fail to show up to appointments: perhaps unintentionally, they are miscommunicating with the majority of the adult population. Health information spans many literacy domains. It usually includes technical medical terms or jargon. Many fact sheets and other materials require scientific literacy, with or without specific numerical literacy; they may include graphs, percentages, timing, distance, or weights. Each additional word or concept in these domains decreases the reader's ability to meaningfully read, understand and know what to do with the information. When even the heading uses long or unfamiliar words, the reader's motivation to attempt the text is reduced. Critically, people who have low literacy are overwhelmed when pages are covered with lots of words. People attending a health environment may also be unwell or anxious about a diagnosis, which makes it harder to read, understand and know what to do with the information. But access to health information must consider an even wider range of adults, including those with poor school attainment, migrants, and refugees, as well as homeless people, people with mental illness, and people who are ageing. People with low literacy may also include people with lifelong disabilities, people with acquired disabilities, people who read English as a second (or third) language, people who are Deaf, or people who are vision impaired. Outcome: This paper will discuss Easy English, which was developed for adults. It uses the audience's everyday words, short sentences, short words, and no jargon. It uses concrete language and concrete, specific images to support the text. It has been developed in Australia since the mid-2000s. This paper will showcase various projects in the health domain which use Easy English to improve the understanding and functional use of written information for the large numbers of adults in our communities who do not have the health literacy to manage a range of day-to-day reading tasks. Examples are drawn from consent forms, fact sheets and choice options, instructions, and other functional documents where Easy English has been developed. This paper will ask individuals to reflect on their own work practice and consider what written information must be available in Easy English. It does not matter how cutting-edge a new treatment is; when adults cannot read or understand what it is about, or its positive and negative outcomes, they are less likely to be engaged in their own health journey.Keywords: health literacy, inclusion, Easy English, communication
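To illustrate how the plain-language principles described above (short sentences, short everyday words, no jargon) can be checked mechanically, the sketch below flags text that breaks simple length rules. The thresholds and the helper function are illustrative assumptions, not the official Easy English criteria, which also cover concrete language and supporting images that a script cannot assess.

```python
import re

def plain_language_flags(text, max_sentence_words=15, max_word_len=9):
    """Crude checks inspired by Easy English guidance: short sentences and
    short everyday words. Thresholds are illustrative, not official rules."""
    flags = []
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    for s in sentences:
        words = s.split()
        if len(words) > max_sentence_words:
            flags.append(f"Long sentence ({len(words)} words): {s[:40]}...")
        for w in words:
            if len(w.strip(",;:")) > max_word_len:
                flags.append(f"Long word: {w}")
    return flags

sample = ("Your appointment has been rescheduled due to unforeseen "
          "circumstances necessitating administrative reorganisation.")
print(plain_language_flags(sample))
```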
Procedia PDF Downloads 124969 Exploration of the Psychological Aspect of Empowerment of Marginalized Women Working in the Unorganized Sector of Metropolis City
Authors: Sharmistha Chanda, Anindita Chaudhuri
Abstract:
This exploratory study highlights the psychological aspects of women's empowerment, examining the importance of psychological dimensions of empowerment such as meaning, competence, self-determination, impact, and assumption, especially among the weaker, marginalized section of women. A large proportion of the rural, suburban, and urban poor survive by working in unorganized sectors of metropolitan cities. Relative poverty and lack of employment in rural areas and small towns drive many people to the metropolitan city for work and livelihood. Women working in this sector, of low socio-economic status, remain unrecognized. They are usually willing to work as daily-wage domestic workers, single wage earners, street vendors, in family businesses such as agricultural activities, or as self-employed workers. Usually, these women accept such jobs because they lack the basic level of education required for better-paid jobs. The unorganized sector, on the other hand, has no clear-cut employer-employee relationships and lacks most forms of social protection. Having no fixed employer, these workers are casual, contractual, migrant, home-based, own-account workers who attempt to earn a living from whatever meager assets and skills they possess. Women have become more empowered, both financially and individually, through small-scale business ownership or entrepreneurship development and in household-based work. In-depth interviews were conducted with 10 participants, following a qualitative research approach, to understand their living styles, habits, self-identity, and empowerment in their society and to evaluate the key challenges they may face. The collected data were transcribed. A three-layer coding technique guided the data analysis process, encompassing open coding, axial coding, and selective coding. Women's entrepreneurship is one of the foremost concerns, as government and non-government institutions are readily serving this domain with the primary objectives of promoting self-employment opportunities in general and empowering women in particular. Thus, despite hardship and lack of recognition, the unorganized sector provides a huge array of opportunities for the rural and suburban poor to earn. Also, the upper section of society tends to depend on this workforce. This study offers insight into the participants' well-being, meaning in life, and life satisfaction on the basis of their lived experience.Keywords: marginalized women, psychological empowerment, relative poverty, unorganized sector
Procedia PDF Downloads 57968 Laparoscopic Resection Shows Comparable Outcomes to Open Thoracotomy for Thoracoabdominal Neuroblastomas: A Meta-Analysis and Systematic Review
Authors: Peter J. Fusco, Dave M. Mathew, Chris Mathew, Kenneth H. Levy, Kathryn S. Varghese, Stephanie Salazar-Restrepo, Serena M. Mathew, Sofia Khaja, Eamon Vega, Mia Polizzi, Alyssa Mullane, Adham Ahmed
Abstract:
Background: Laparoscopic (LS) removal of neuroblastomas in children has been reported to offer favorable outcomes compared to the conventional open thoracotomy (OT) procedure. Critical perioperative measures such as blood loss, operative time, length of stay, and time to postoperative chemotherapy have all favored laparoscopic use over its more invasive counterpart. Herein, a pairwise meta-analysis was performed comparing perioperative outcomes between LS and OT in thoracoabdominal neuroblastoma cases. Methods: A comprehensive literature search was performed on the PubMed, Ovid EMBASE, and Scopus databases to identify studies comparing the outcomes of pediatric patients with thoracoabdominal neuroblastomas undergoing resection via OT or LS. After deduplication, 4,227 studies were identified and subjected to initial title screening with exclusion and inclusion criteria to ensure relevance. When studies contained overlapping cohorts, only the larger series were included. Primary outcomes included estimated blood loss (EBL), hospital length of stay (LOS), and mortality, while secondary outcomes were tumor recurrence, post-operative complications, and operation length. The "meta" and "metafor" packages were used in R, version 4.0.2, to pool risk ratios (RR) or standardized mean differences (SMD), together with their 95% confidence intervals, in the random effects model via the Mantel-Haenszel method. Heterogeneity between studies was assessed using the I² statistic, while publication bias was assessed via funnel plot. Results: The pooled analysis included 209 patients from 5 studies (141 OT, 68 LS). Of the included studies, 2 originated from the United States, 1 from Toronto, 1 from China, and 1 from a Japanese center. Mean age between study cohorts ranged from 2.4 to 5.3 years, with female patients comprising between 30.8% and 50% of the study populations. No statistically significant difference was found between the two groups for LOS (SMD -1.02; p=0.083), mortality (RR 0.30; p=0.251), recurrence (RR 0.31; p=0.162), post-operative complications (RR 0.73; p=0.732), or operation length (SMD -0.07; p=0.648). Of note, LS appeared to be protective in the analysis for EBL, although this did not reach statistical significance (SMD -0.4174; p=0.051). Conclusion: Despite promising literature assessing LS removal of pediatric neuroblastomas, the results showed it was not superior to OT for any of the explored perioperative outcomes. Given the limited comparative data on the subject, it is evident that randomized trials are necessary to strengthen the conclusions reached.Keywords: laparoscopy, neuroblastoma, thoracoabdominal, thoracotomy
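For readers who want to see the mechanics of random-effects pooling of the kind performed here in R with the meta/metafor packages, the Python sketch below works through the DerSimonian-Laird inverse-variance route for standardized mean differences. The per-study effect sizes and variances are made-up placeholders, not the five included studies' data, and the authors pooled risk ratios via Mantel-Haenszel rather than this simpler route.

```python
import numpy as np

# Hypothetical per-study standardized mean differences and variances
# (placeholders, not the values extracted from the five included studies).
y = np.array([-0.8, -1.2, -0.5, -1.5, -1.0])   # SMD per study
v = np.array([0.20, 0.15, 0.25, 0.30, 0.18])   # within-study variances

w = 1.0 / v                                    # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
k = len(y)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)             # DerSimonian-Laird tau^2

w_star = 1.0 / (v + tau2)                      # random-effects weights
pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100         # heterogeneity, %

print(f"Pooled SMD = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I2 = {I2:.0f}%")
```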
Procedia PDF Downloads 129967 Furniko Flour: An Emblematic Traditional Food of Greek Pontic Cuisine
Authors: A. Keramaris, T. Sawidis, E. Kasapidou, P. Mitlianga
Abstract:
Although the gastronomy of the Greeks of Pontus is highly prominent, it has not received the same level of scientific analysis as another local cuisine of Greece, that of Crete. As a result, we intended to focus our research on Greek Pontic cuisine to shed light on its unique recipes, food products, and, ultimately, its features. The Greeks of Pontus, who lived for a long time in the northern part (Black Sea region) of contemporary Turkey and now widely inhabit northern Greece, have one of Greece's most distinguished local cuisines. Although their gastronomy is simple, it features several inspiring delicacies. It has been a century since they immigrated to Greece, yet their gastronomic culture remains a critical component of their collective identity. As a first step toward comprehending Greek Pontic cuisine, we attempted to investigate the production of one of its most renowned traditional products, furniko flour. In this project, we targeted residents of Western Macedonia, a province in northern Greece with a large population of descendants of Greeks of Pontus who are primarily engaged in agricultural activities. In this quest, we approached a descendant of the Greeks of Pontus who is involved in the production of furniko flour and who consented to show us the entire process of its production as we participated in it. Furniko flour is made from non-hybrid heirloom corn, which is harvested by hand when the moisture content of the seeds is low enough to make them suitable for roasting. Manual harvesting entails removing the cob from the plant and detaching the husks. The harvested cobs are then roasted for 24 hours in a traditional wood oven, after which they are collected and stored in sacks. The next step is to extract the seeds, which is accomplished by rubbing the cobs. The seeds should ideally be ground in a traditional stone hand mill. The result is an aromatic, dark golden furniko flour, which is used to cook havitz. Alongside the preparation of the furniko flour, we also recorded the cooking process of havitz (a porridge-like cornflour dish), a savory delicacy that is simple to prepare and one of the most delightful dishes in Greek Pontic cuisine. According to the research participant, havitz is a highly nutritious dish due to the ingredients of furniko flour. In addition, he argues that preparing havitz is a great way to bring families together, share stories, and revisit fond memories. In conclusion, this study illustrates the traditional preparation of furniko flour and its use in various traditional recipes as an initial effort to highlight the elements of Greek Pontic cuisine. A continuation of the current study could be an analysis of the chemical components of furniko flour to evaluate its nutritional content.Keywords: furniko flour, Greek Pontic cuisine, havitz, traditional foods
Procedia PDF Downloads 135966 Evidence-Based Policy Making to Improve Human Security in Pakistan
Authors: Ayesha Akbar
Abstract:
Pakistan is moving from a security state to a welfare state despite several security challenges, both internal and external. Human security signifies a varied approach in different regions, depending upon leadership and policy priorities. The link between human development and economic growth is not automatic; it has to be created consciously through forward-looking policies and strategies by national governments. There are seven components or categories of human security: economic security, personal security, health security, environmental security, food security, community security, and political security. The international community's increasing interest in clearly understanding the dimensions of human security has also prompted Pakistani scholars to ponder the issue and delineate the contours of human security. A great deal of work has been done, or is in progress, to evaluate human security indicators in Pakistan. Notwithstanding this work, the state of human security in Pakistan is not satisfactory. A range of deteriorating human development indicators that fall under the domain of human security leaves certain questions to be answered: What are the dimensions of human security in Pakistan? How are they being dealt with, from a policy and institutional perspective, in terms of their operationalization? Does the human security discourse reflect evidence-based policy changes? The methodology is broadly based on qualitative methods, including interviews and content analysis of policy documents. Pakistan is among the most populous countries in the world and faces high vulnerability to climate change. The literacy rate has gone down, while a surging youth bulge needs to be accommodated in the job market. The growing population is creating food problems, as resources have not been able to keep pace with the rising demand for food and other social amenities of life. A majority of the people face acute poverty. Health outcomes are also unsatisfactory, with high infant and maternal mortality rates. Pakistan is on the verge of a water crisis, as water resources are depleting rapidly under high demand from the agriculture and energy sectors. Pakistan is striving hard to deal with the declining state of human security, but the dilemma is a lack of resources that hinders it from meeting emerging demands. The government needs to bring about more change by scaling up avenues of economic growth and enhancing the capacity of its human resources. A modern, performance-driven culture, integrated with technology, is required for efficient and effective service delivery. Within an already fast-tracked process of reforms, e-governance and evidence-based policy mechanisms are being instilled in government processes for better governance and evidence-based decisions.Keywords: governance, human development index, human security, Pakistan, policy
Procedia PDF Downloads 252965 Optimising Apparel Digital Production in Industrial Clusters
Authors: Minji Seo
Abstract:
Fashion stakeholders are becoming increasingly aware of technological innovation in manufacturing. In 2020, the COVID-19 pandemic caused transformations in working patterns, such as working remotely rather than commuting. To enable smooth remote working, 3D fashion design software is being adopted as the latest trend in design and production. The majority of fashion designers, however, are still resistant to this change. Previous studies on 3D fashion design software solely highlighted the beneficial and detrimental factors of adopting design innovations; they lacked research on the relationship between resistance factors and the adoption of innovation, and fell short of exploring the perspectives of the users of these innovations. This paper aims to investigate the key drivers of and barriers to employing 3D fashion design software, as well as to explore the challenges faced by designers. It also touches on governmental support for digital manufacturing in Seoul, South Korea, and London, the United Kingdom. By conceptualising local support, this study aims to provide a new path for industrial clusters to optimise digital apparel manufacturing. The study uses a mixture of quantitative and qualitative approaches. Initially, it reports a survey of a sample of 350 fashion designers on innovation resistance factors of 3D fashion design software and the effectiveness of local support. In-depth interviews with 30 participants provide a better understanding of designers' perspectives on the benefits and obstacles of employing 3D fashion design software. The key findings of this research concern the main barriers to employing 3D fashion design software in fashion production. Cultural characteristics and interview results are used to interpret the survey results. The quantitative findings identify the main resistance factors to adopting design innovations. The dominant obstacles are: the cost of the software and its complexity; lack of customer interest in innovation; lack of qualified personnel; and lack of knowledge. The main difference between Seoul and London lies in attitudes towards government support. Compared to fashion designers in the UK, South Korean designers emphasise that government support is highly relevant to employing 3D fashion design software. The distinction between top-down and bottom-up policy implementation shapes these perceptions: in contrast to the top-down policy approach in South Korea, British fashion designers, accustomed to bottom-up approaches, are reluctant to receive government support. The findings of this research will contribute to generating solutions for local government and to optimising the use of 3D fashion design software in fashion industrial clusters.Keywords: digital apparel production, industrial clusters, innovation resistance, 3D fashion design software, manufacturing, innovation, technology, digital manufacturing, innovative fashion design process
Procedia PDF Downloads 100964 Knowledge of Sexually Transmitted Infections and Socio-Demographic Factors Affecting High Risk Sex among Unmarried Youths in Nigeria
Authors: Obasanjo Afolabi Bolarinwa
Abstract:
This study assesses the level of knowledge of sexually transmitted infections among unmarried youths in Nigeria; examines the pattern of high risk sex among unmarried youths; investigates the socio-demographic factors (age, place of residence, religion, level of education, wealth index and employment status) affecting the practice of high-risk sexual behaviour; and ascertains the relationship between knowledge of sexually transmitted infections and the practice of high risk sex. The goal of the study is to identify the factors associated with the practice of high risk sex among youths, with a view to identifying critical actions needed to reduce high risk sexual behaviour among youths. The study employed secondary data extracted from the 2013 Nigeria Demographic and Health Survey (NDHS). The 2013 NDHS collected information from 38,948 women aged 15-49 years and 17,359 men aged 15-49 years. A total of 7,744 female and 6,027 male respondents were utilized in the study. In order to adjust for the effect of oversampling of the population, the weighting factor provided by Measure DHS was applied. The data were analysed using frequency distributions and logistic regression. The results show that both male (92.2%) and female (93.6%) respondents have accurate knowledge of sexually transmitted infections. The study also revealed that the prevalence of high risk sexual behaviour is high among Nigerian youths; this is evident as 77.7% of females and 78.4% of males engage in high risk sexual behaviour. The bivariate analysis shows that age of respondent (χ2=294.2; p < 0.05), religion (χ2=136.64; p < 0.05), wealth index (χ2=17.38; p < 0.05), level of education (χ2=34.73; p < 0.05) and employment status (χ2=94.54; p < 0.05) were individual factors significantly associated with high risk sexual behaviour among males, while age of respondent (χ2=327.07; p < 0.05), place of residence (χ2=6.71; p < 0.05), religion (χ2=81.04; p < 0.05), wealth index (χ2=7.41; p < 0.05), level of education (χ2=18.12; p < 0.05) and employment status (χ2=51.02; p < 0.05) were individual factors significantly associated with high risk sexual behaviour among females. Furthermore, the study shows that there is a relationship between knowledge of sexually transmitted infections and high risk sex among males (χ2=38.32; p < 0.05) and females (χ2=18.37; p < 0.05). At the multivariate level, the study revealed that individual characteristics such as age, religion, place of residence, wealth index, level of education and employment status were statistically significantly related to high risk sexual behaviour among males and females (p < 0.05). Lastly, the study shows that knowledge of sexually transmitted infections was significantly related to high risk sexual behaviour among youths (p < 0.05). The study concludes that there is a high level of knowledge of sexually transmitted infections among unmarried youths in Nigeria. The practice of high risk sex is high among unmarried youths, and higher among male youths. The prevalence of high risk sexual activity is higher for males when they are at a disadvantage and higher for females when they are at an advantage. Socio-demographic factors such as age of respondent, religion, wealth index, place of residence, employment status and highest level of education influence high risk sexual behaviour among youths.Keywords: high risk sex, wealth index, sexual behaviour, knowledge
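To make the two analysis stages concrete (a bivariate chi-square test of association followed by a weighted regression for the multivariate stage), the Python sketch below runs both on a small synthetic data frame. The column names, the weight variable, and the generated values are hypothetical placeholders and do not reflect the NDHS file layout or the study's recoding.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Hypothetical stand-in for a recoded survey extract.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "high_risk_sex": rng.integers(0, 2, n),          # 1 = reported high-risk sex
    "sti_knowledge": rng.integers(0, 2, n),          # 1 = accurate STI knowledge
    "age_group": rng.choice(["15-19", "20-24"], n),
    "weight": rng.uniform(0.5, 2.0, n),              # survey weighting factor
})

# Bivariate association (chi-square), as in the first-stage analysis.
table = pd.crosstab(df["sti_knowledge"], df["high_risk_sex"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Weighted logistic regression for the multivariate stage.
X = sm.add_constant(pd.get_dummies(df[["sti_knowledge", "age_group"]],
                                   drop_first=True).astype(float))
model = sm.GLM(df["high_risk_sex"], X,
               family=sm.families.Binomial(),
               freq_weights=df["weight"].to_numpy()).fit()
print(model.summary())
```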
Procedia PDF Downloads 253963 CO₂ Recovery from Biogas and Successful Upgrading to Food-Grade Quality: A Case Study
Authors: Elisa Esposito, Johannes C. Jansen, Loredana Dellamuzia, Ugo Moretti, Lidietta Giorno
Abstract:
The reduction of CO₂ emissions into the atmosphere as a result of human activity is one of the most important environmental challenges to face in the coming decades. Emission of CO₂, related to the use of fossil fuels, is believed to be one of the main causes of global warming and climate change. In this scenario, the production of biomethane from organic waste, as a renewable energy source, is one of the most promising strategies to reduce fossil fuel consumption and greenhouse gas emissions. Unfortunately, biogas upgrading still produces the greenhouse gas CO₂ as a waste product. Therefore, this work presents a case study on biogas upgrading, aimed at the simultaneous purification of methane and CO₂ via different steps, including CO₂/methane separation by polymeric membranes. The original objective of the project was to upgrade the biogas to distribution-grid-quality methane, but the innovative aspect of this case study is the further purification of the captured CO₂, transforming it from a useless by-product into a pure gas of food-grade quality, suitable for commercial application in the food and beverage industry. The study was performed on a pilot plant constructed by Tecno Project Industriale Srl (TPI), Italy, modelled on one of the largest biogas production and purification plants. The full-scale anaerobic digestion plant (Montello Spa, North Italy) has a digestion capacity of 400,000 tons of biomass/year and can treat 6,250 m³/hour of biogas from FORSU (the organic fraction of solid urban waste). The entire upgrading process consists of a number of purification steps: 1. Dehydration of the raw biogas by condensation. 2. Removal of trace impurities such as H₂S via absorption. 3. Separation of CO₂ and methane via a membrane separation process. 4. Removal of trace impurities from the CO₂. The gas separation with polymeric membranes guarantees complete, simultaneous removal of microorganisms. The chemical purity of the different process streams was analysed by a certified laboratory and compared with the guidelines of the European Industrial Gases Association and the International Society of Beverage Technologists (EIGA/ISBT) for CO₂ used in the food industry. The microbiological purity was compared with the limit values defined in the European Collaborative Action. With a purity of 96-99 vol%, the purified methane meets the legal requirements for the household distribution network. At the same time, the CO₂ reaches a purity of >98.1% before, and 99.9% after, the final distillation process. According to the EIGA/ISBT guidelines, the CO₂ proves to be chemically and microbiologically sufficiently pure to be suitable for food-grade applications.Keywords: biogas, CO₂ separation, CO₂ utilization, CO₂ food grade
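To give a feel for the membrane separation step, the sketch below solves the classical complete-mixing model of a single membrane stage for a CO₂/CH₄ feed, neglecting pressure-ratio effects. It is a generic textbook simplification under assumed values (40 mol% CO₂ feed, selectivity 30, stage cut 0.35), not the design or performance of the TPI plant; a single ideal stage like this typically cannot reach the 96-99 vol% methane reported, which in practice motivates multi-stage or recycle membrane configurations.

```python
from scipy.optimize import brentq

def membrane_stage(x_feed, stage_cut, alpha):
    """Complete-mixing model of one membrane stage (pressure-ratio effects
    neglected): returns the CO2 mole fraction in permeate and retentate."""
    def residual(y):
        x_r = (x_feed - stage_cut * y) / (1.0 - stage_cut)   # CO2 in retentate
        return y - alpha * x_r / (1.0 + (alpha - 1.0) * x_r)  # selectivity relation
    y = brentq(residual, x_feed, min(0.999, x_feed / stage_cut - 1e-6))
    x_r = (x_feed - stage_cut * y) / (1.0 - stage_cut)
    return y, x_r

# Illustrative values: ~40 mol% CO2 in the biogas feed, CO2/CH4 selectivity 30,
# 35% of the feed permeating the membrane.
y_co2, x_co2 = membrane_stage(x_feed=0.40, stage_cut=0.35, alpha=30.0)
print(f"Permeate CO2: {y_co2:.1%}, retentate CH4: {1 - x_co2:.1%}")
```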
Procedia PDF Downloads 211962 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution
Authors: Peter G. Hollis, Kim G. Clarke
Abstract:
The overall volumetric oxygen transfer coefficient (KLa) is ubiquitously measured in bioprocesses by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure of the sparge gas using a DO probe, and its accurate quantification is essential. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa, failing which a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as hydrocarbon-based systems. Here, significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first order model (without incorporation of τ) and a second order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen-saturated vessel to an oxygen-saturated bioreactor, and is represented as the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane and 450-1000 rpm. A response surface was fitted to the empirical data, while ANOVA was used to determine the significance of the effects at a 95% confidence level. KP varied with changes in the system parameters, with the impact of solids loading statistically significant at the 95% confidence level. Increased solids loading consistently reduced KP, an effect which was magnified at high alkane concentrations, with a minimum KP of 0.024 s⁻¹ observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s⁻¹ recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of differing process conditions. Importantly, exclusion of KP from the calculation of KLa was shown to under-predict KLa for all process conditions, with an error of up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore KP, has a far-reaching impact on industrial bioprocesses, ensuring these systems are not transport-limited during scale-up and operation. This study has shown the incorporation of τ to be essential to ensure KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it is essential that τ be determined individually for each set of process conditions.Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag
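The contrast between the first order model (probe lag ignored) and the second order model (lag included) can be illustrated with the standard normalised response equations: C(t) = 1 - exp(-KLa·t) for the true DO, and, for a first-order probe with constant KP, Cp(t) = 1 - [KP·exp(-KLa·t) - KLa·exp(-KP·t)]/(KP - KLa). The Python sketch below fits both models to synthetic noisy data; the constants are chosen to echo the slowest probe response reported above and are not the authors' data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, kla):
    """Normalised DO response ignoring probe lag: C = 1 - exp(-kLa t)."""
    return 1.0 - np.exp(-kla * t)

def second_order(t, kla, kp):
    """Normalised signal of a first-order probe (constant kp) tracking the
    true first-order oxygen transfer dynamics (kLa)."""
    return 1.0 - (kp * np.exp(-kla * t) - kla * np.exp(-kp * t)) / (kp - kla)

# Synthetic "measured" data: kLa = 0.05 s^-1, probe constant kp = 0.024 s^-1
# (tau of roughly 42 s, i.e. the slowest probe response reported).
t = np.linspace(0, 300, 150)
rng = np.random.default_rng(0)
data = second_order(t, 0.05, 0.024) + rng.normal(0, 0.01, t.size)

kla_1, _ = curve_fit(first_order, t, data, p0=[0.03])
(kla_2, kp_2), _ = curve_fit(second_order, t, data, p0=[0.03, 0.02])

print(f"First-order fit (lag ignored):   kLa = {kla_1[0]:.3f} s^-1")
print(f"Second-order fit (lag included): kLa = {kla_2:.3f}, kp = {kp_2:.3f} s^-1")
```

Running the sketch shows the first order fit returning a markedly lower apparent KLa than the value used to generate the data, mirroring the under-prediction described above when KP is excluded.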
Procedia PDF Downloads 265961 Kidnapping of Migrants by Drug Cartels in Mexico as a New Trend in Contemporary Slavery
Authors: Itze Coronel Salomon
Abstract:
The rise of organized crime and violence related to drug cartels in Mexico has created serious challenges for the authorities to provide security to those who live within its borders. Key to achieving a significant improvement in security, however, is absolute respect for fundamental human rights by the authorities. Irregular migrants in Mexico are at serious risk of abuse. Research by Amnesty International, as well as reports of the NHRC (National Human Rights Commission) in Mexico, has indicated the major humanitarian crisis faced by thousands of migrants traveling in the shadows. However, the true extent of the problem remains invisible to the general population. The fact that federal and state governments keep no proper record of abuses and do not publish reliable data contributes to ignorance and misinformation, often spread by media that portray migrants as the source of crime rather than its victims. Discrimination and intolerance against irregular migrants can generate greater hostility and exclusion. According to the modus operandi that has been recorded, criminal organizations and criminal groups linked to drug trafficking structures deprive migrants of their liberty for forced labor and illegal activities related to drug trafficking; some have even been kidnapped to be trained as killers. If the victim or their family cannot pay the ransom, the kidnapped person may suffer torture, mutilation, amputation of limbs, or death. Migrant women are also victims of sexual abuse during their abduction. In 2011, at least 177 bodies were identified in the largest mass grave found in Mexico, located in the town of San Fernando, in the border state of Tamaulipas; most of the victims had been killed with blunt instruments, and most seemed to be immigrants and travelers passing through the country. With dozens of smaller graves discovered in northern Mexico, this may suggest a change in tactics among organized crime groups toward different means of obtaining revenue and lower-profile methods of killing. Competition and conflict over territorial control of drug trafficking can provide strong incentives for organized crime groups to send signals of violence to the authorities and rival groups. However, as some Mexican organized crime groups increasingly look to extract income from vulnerable groups such as Central American migrants, they seem less interested in advertising their work to the authorities and others, and more interested in evading detection and confrontation. This paper aims to analyze the kidnapping of migrants for forced labor by drug cartels in Mexico as a new trend in contemporary slavery and to examine its implications.Keywords: international law, migration, transnational organized crime
Procedia PDF Downloads 416960 Developing an Online Application for Mental Skills Training and Development
Authors: Arjun Goutham, Chaitanya Sridhar, Sunita Maheshwari, Robin Uthappa, Prasanna Gopinath
Abstract:
In alignment with the growth of the sporting industry, the number of people playing and competing in sports is growing exponentially across the globe. However, the number of sports psychology experts is not growing at a similar rate, especially in the Asian and, more so, the Indian context. Hence, access to actionable mental training solutions specific to individual athletes is limited. Also, the time constraints an athlete faces due to an intense training schedule make one-on-one sessions difficult. One of the means to bridge that gap is technology. Technology makes individualization possible. It allows easy access to specific, qualitative content and information and provides a medium to place individualized assessments, analyses, and solutions directly into an athlete's hands. This makes mental training awareness, education, and real-time actionable solutions possible for athletes in spite of the limited availability of sports psychology experts in their region. Furthermore, many athletes are hesitant to seek support due to the stigma of appearing weak; such individuals would prefer a more discreet way. Athletes who have strong mental performance tend to produce better results. The mobile application helps equip athletes to assess and develop their mental strategies, directed towards improving performance on an ongoing basis. When athletes understand their strengths and limitations in their mental application, they can focus specifically on applying the strategies that work and improve in zones of limitation. With reports, coaches get to understand the unique inner workings of an athlete and can utilize the data and analysis to coach with better precision and to use coaching styles and communication that suit the athlete better. Systematically capturing data and supporting athletes (with individual-specific solutions) or teams with assessment, planning, instructional content, actionable tools and strategies, reviews of mental performance, and the achievement of objectives and goals facilitates consistent mental skills development at all stages of an athlete's sporting career. The mobile application will help athletes recognize and align with their stable attributes, such as their personalities, learning and execution modalities, and the challenges and requirements of their sport, and help them develop dynamic attributes, such as states, beliefs, motivation levels, and focus, through practice and training. It will provide measurable analyses on a regular basis and help them stay aligned with their objectives and goals. The solutions are based on researched areas of influence on sporting performance, individually and in teams.Keywords: athletes, mental training, mobile application, performance, sports
Procedia PDF Downloads 267