Search results for: ethical sensitivity
131 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems
Authors: Moustafa Osman Mohammed
Abstract:
This paper introduces topological order in described social systems, starting with the original concept of autopoiesis developed by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems for exploiting optical systems and improving photonic devices. The states of topological order have interesting properties of topological degeneracy and fractional statistics that reveal the entanglement origin of topological order. Topological ideas in photonics mirror exciting developments in solid-state materials that are insulating in the bulk yet conduct electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated with the main categories of ecological phenomena at the interface of the social and medical sciences. The hypothesis, nevertheless, involves a nonlinear interaction with its natural environment, an 'interactional cycle' that exchanges photon energy with molecules without changes in topology. The engineering topology of a biosensor is based on the excitation boundary of surface electromagnetic waves in photonic band gap multilayer films. The device operation is similar to that of surface plasmon biosensors, in which a photonic band gap film replaces the metal film as the medium where surface electromagnetic waves are excited. The use of photonic band gap film offers sharper surface wave resonance, leading to potentially greatly enhanced sensitivity. The properties of the photonic band gap material are engineered to operate a sensor at any wavelength and to conduct a surface wave resonance ranging up to 470 nm, a wavelength not generally accessible with surface plasmon sensing. Lastly, photonic band gap films have robust mechanical properties that offer new substrates for surface chemistry, making it possible to understand the molecular design structure and to create sensing-chip surfaces with different concentrations of DNA sequences in solution, so that the surface mode resonance can be observed and tracked under the influence of processes taking place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies that are automated, real-time, reliable, reproducible, and cost-effective, resulting in faster and more accurate monitoring and detection of biomolecules by refractive index sensing of antibody-antigen reactions and DNA or protein binding. Ultimately, the molecular frictional properties are adjusted to each other in order to form a unique spatial structure and dynamics of biological molecules, providing an environment for investigating changes due to the pathogenic archival architecture of cell clusters.
Keywords: autopoiesis, photonics systems, quantum topology, molecular structure, biosensing
Procedia PDF Downloads 94
130 Empowering Indigenous Epistemologies in Geothermal Development
Authors: Te Kīpa Kēpa B. Morgan, Oliver W. Mcmillan, Dylan N. Taute, Tumanako N. Fa'aui
Abstract:
Epistemologies are ways of knowing. Indigenous Peoples are aware that they do not perceive and experience the world in the same way as others, so it is important when empowering Indigenous epistemologies, such as that of the New Zealand Māori, to also be able to represent a scientific understanding within the same analysis. A geothermal development assessment tool has been developed by adapting the Mauri Model Decision Making Framework. Mauri is a metric capable of representing the change in the life-supporting capacity of things and collections of things. The Mauri Model is a method of grouping mauri indicators as dimension averages in order to allow holistic assessment and also to conduct sensitivity analyses for the effect of worldview bias. R-Shiny is the coding platform used for this Vision Mātauranga research, which has created an expert decision support tool (DST) that combines a stakeholder assessment of worldview bias with an impact assessment of mauri-based indicators to determine the sustainability of proposed geothermal development. The initial intention was to develop guidelines for quantifying mātauranga Māori impacts related to geothermal resources. To do this, three typical scenarios were considered: a resource owner wishing to assess the potential for new geothermal development; another party wishing to assess the environmental and cultural impacts of the proposed development; and an assessment that focuses on the holistic sustainability of the resource, including its surface features. Indicator sets and measurement thresholds were developed that are considered necessary for each assessment context, and these have been grouped to represent four mauri dimensions that mirror the four well-being criteria used for resource management in Aotearoa, New Zealand. Two case studies have been conducted to test the DST's suitability for quantifying mātauranga Māori and other biophysical factors related to a geothermal system. This involved estimating mauri0meter values for physical features such as temperature, flow rate, frequency and colour, and developing indicators to also quantify qualitative observations about the geothermal system made by Māori. A retrospective analysis was then conducted to verify different understandings of the geothermal system. The case studies found that the expert DST is useful for geothermal development assessment, especially where hapū (indigenous sub-tribal groupings) are conflicted regarding the benefits and disadvantages of their own and others' geothermal developments. These results have been supplemented with evaluations of the cumulative impacts of geothermal developments experienced by different parties, using integration techniques applied to the time history curve of the expert DST worldview-bias weighting plotted against the mauri0meter score. Cumulative impacts represent the change in resilience or potential of geothermal systems, which directly assists with the holistic interpretation of change from an Indigenous Peoples' perspective.
Keywords: decision support tool, holistic geothermal assessment, indigenous knowledge, mauri model decision-making framework
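As a rough illustration of the dimension-average and worldview-bias mechanics described in this abstract, the sketch below computes an overall mauri score and shows how it shifts when stakeholder weights change. The indicator values, their grouping into four dimensions, and the weight vectors are assumptions made for the example, not values from the study.

```python
# Minimal sketch of a Mauri-Model-style assessment (illustrative only).
# Indicator scores are placed on a -2 (denigrating) to +2 (enhancing)
# scale; the scores and bias weights below are invented placeholders.
import numpy as np

# Hypothetical indicator scores grouped into four mauri dimensions
dimensions = {
    "environment": [1, 0, -1, 2],
    "culture":     [2, 1, 0],
    "society":     [0, -1, 1],
    "economy":     [2, 2, 1],
}

def mauri_score(dimensions, weights):
    """Weighted average of the per-dimension indicator averages."""
    avgs = np.array([np.mean(v) for v in dimensions.values()])
    w = np.array(weights, dtype=float)
    return float(avgs @ (w / w.sum()))

# Neutral worldview vs. a stakeholder weighting biased toward economy;
# comparing the two is the sensitivity analysis for worldview bias.
print(mauri_score(dimensions, [1, 1, 1, 1]))
print(mauri_score(dimensions, [1, 1, 1, 3]))
```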
Procedia PDF Downloads 187
129 Optimization of the Jatropha curcas Supply Chain as a Criteria for the Implementation of Future Collection Points in Rural Areas of Manabi-Ecuador
Authors: Boris G. German, Edward Jiménez, Sebastián Espinoza, Andrés G. Chico, Ricardo A. Narváez
Abstract:
The unique flora and fauna of the Galapagos Islands have leveraged tourism-driven growth in the islands. Nonetheless, such development is energy-intensive and requires thousands of gallons of diesel each year for thermoelectric electricity generation. The necessary transport of fossil fuels from the continent has caused oil spillages and damage to the fragile ecosystem of the islands. The Zero Fossil Fuels initiative for the Galapagos, proposed by the Ecuadorian government as an alternative to reduce the use of fossil fuels in the islands, considers the replacement of diesel in thermoelectric generators by Jatropha curcas vegetable oil. However, the Jatropha oil supply cannot yet entirely cover the demand for electricity generation in the Galapagos. Within this context, the present work aims to provide an optimization model that can be used as a selection criterion for approving new Jatropha curcas collection points in rural areas of Manabi, Ecuador. For this purpose, existing Jatropha collection points in Manabi were grouped under three regions: north (7 collection points), center (4 collection points) and south (9 collection points). Field work was carried out in every region in order to characterize the collection points, to establish the local Jatropha supply and to determine transportation costs. Data collection was complemented using GIS software, and an objective function was defined in order to determine the profit associated with Jatropha oil production. The market prices of both Jatropha oil and residual cake were considered for the total revenue, whereas the Jatropha price, transportation and oil extraction costs were considered for the total cost. The tonnes of Jatropha fruit and seed transported from collection points to the extraction plant were considered as variables. The maximum and minimum amounts of Jatropha collected from each region constrained the optimization problem. The supply chain was optimized using linear programming in order to maximize profit. Finally, a sensitivity analysis was performed in order to find a profit-based criterion for the acceptance of future collection points in Manabi. The maximum profit reached a value of $4,616.93 per year, which represented a total collection of 62.3 tonnes of Jatropha per year. The northern region of Manabi had the biggest collection share (69%), followed by the southern region (17%). The criteria for accepting new Jatropha collection points in the rural areas of Manabi can be defined by the current maximum profit of the zone and by the variation in profit when collection points are removed one at a time. The definition of new feasible collection points plays a key role in the supply chain associated with Jatropha oil production. Therefore, a mathematical model that assists decision makers in establishing new collection points while assuring profitability contributes to guaranteeing a continued Jatropha oil supply for the Galapagos and sustained economic growth in the rural areas of Ecuador.
Keywords: collection points, Jatropha curcas, linear programming, supply chain
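A minimal sketch of how such a profit-maximizing collection plan can be posed as a linear program is shown below, using scipy. The per-tonne profits, regional supply bounds and plant capacity are invented placeholders; the study's actual coefficients came from the field work and GIS data described above.

```python
# Hedged sketch of the collection plan as a linear program.
# All numbers are made-up placeholders for illustration.
import numpy as np
from scipy.optimize import linprog

regions = ["north", "center", "south"]
profit_per_tonne = np.array([80.0, 55.0, 60.0])  # revenue minus costs, $/t (assumed)
min_supply = np.array([5.0, 2.0, 4.0])           # guaranteed tonnes/year (assumed)
max_supply = np.array([43.0, 8.5, 10.8])         # regional upper bounds (assumed)
plant_capacity = 55.0                            # extraction plant limit, t/year (assumed)

# linprog minimizes, so negate the objective to maximize profit
res = linprog(c=-profit_per_tonne,
              A_ub=[[1.0, 1.0, 1.0]], b_ub=[plant_capacity],
              bounds=list(zip(min_supply, max_supply)),
              method="highs")

for region, tonnes in zip(regions, res.x):
    print(f"{region}: collect {tonnes:.1f} t/year")
print(f"maximum profit: ${-res.fun:,.2f}/year")
```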
Procedia PDF Downloads 434
128 Moving beyond Learner Outcomes: Culturally Responsive Recruitment, Training and Workforce Development
Authors: Tanya Greathosue, Adrianna Taylor, Lori Darnel, Eileen Starr, Susie Ryder, Julie Clockston, Dawn Matera Bassett, Jess Retrum
Abstract:
The United States has an identified need to address the social work mental and behavioral health workforce shortage, with a focus on culturally diverse and responsive mental and behavioral health practitioners, to adequately serve its rapidly growing multicultural communities. The U.S. is experiencing rapid demographic changes. Ensuring that mental and behavioral health services are effective and accessible for diverse communities is essential for improving overall health outcomes. In response to this need, we developed a training program focused on interdisciplinary collaboration, evidence-based practices, and culturally responsive services. The success of the training program, funded by the Health Resources and Services Administration (HRSA) Behavioral Health Workforce Education and Training (BHWET) program, has provided the foundation for stage two of our programming. In addition to HRSA/BHWET, we are receiving funding from Colorado Access, a state workforce development initiative, and Kaiser Permanente, a healthcare provider network in the United States. We have moved beyond improved learner outcomes to increasing recruitment of historically excluded, disproportionately mistreated learners, mentorship of students to improve retention, and successful, culturally responsive, diverse workforce development. The authors will utilize a pretest-posttest comparison group design and trend analysis to evaluate the success of the training program. Comparison groups will be matched based on age, gender identification, race, income, prior experience in the field, and time in the degree program. This article describes our culturally responsive training program. Our goals are to increase the recruitment and retention of historically excluded, disproportionately mistreated learners. We achieve this by integrating cultural humility and sensitivity training into educational curricula for our scholars, who participate in cohort classroom and seminar learning. Additionally, we provide our community partners who serve as internship sites with ongoing continuing education on how to promote and develop inclusive and supportive work environments for our learners. This work will be of value to mental and behavioral health care practitioners who serve historically excluded and mistreated populations. Participants will learn about culturally informed best practices to increase recruitment and retention of culturally diverse learners. Additionally, participants will hear how to create a culturally responsive training program that encourages an inclusive community for their learners through cohort learning, mentoring, community networking, and critical accountability.
Keywords: culturally diverse mental health practitioners, recruitment, mentorship, workforce development, underserved clinics, professional development
Procedia PDF Downloads 26
127 Use of Analytic Hierarchy Process for Plant Site Selection
Authors: Muzaffar Shaikh, Shoaib Shaikh, Mark Moyou, Gaby Hawat
Abstract:
This paper presents the use of the Analytic Hierarchy Process (AHP) in evaluating the site selection of a new plant by a corporation. Due to intense competition at a global level, multinational corporations are continuously striving to minimize production and shipping costs of their products. One key factor that plays a significant role in cost minimization is where the production plant is located. In the U.S., for example, labor and land costs continue to be very high, while they are much cheaper in countries such as India, China, and Indonesia. This is why many multinational U.S. corporations (e.g., General Electric, Caterpillar Inc., Ford, and General Motors) have shifted their manufacturing plants abroad. The continued expansion of the Internet and its availability, along with technological advances in computer hardware and software around the globe, have facilitated U.S. corporations expanding abroad as they seek to reduce production costs. In particular, management of multinational corporations is constantly engaged in evaluating countries at a broad level, or cities within specific countries, where certain or all parts of their end products, or the end products themselves, can be manufactured more cheaply than in the U.S. AHP is based on preference ratings of a specific decision maker, who can be the Chief Operating Officer of a company or his/her designated data analytics engineer. It serves as a tool to evaluate, first, the plant site selection criteria and, second, alternative plant sites themselves against these criteria in a systematic manner. Examples of site selection criteria are: transportation modes, taxes, energy modes, labor force availability, labor rates, raw material availability, political stability, land costs, etc. As a necessary first step under AHP, evaluation criteria and alternative plant site countries are identified. Depending upon the fidelity of analysis, specific cities within a country can also be chosen as alternative facility locations. AHP experience in this type of analysis indicates that the initial analysis can be performed at the country level. Once a specific country is chosen via AHP, secondary analyses can be performed by selecting specific cities or counties within that country. AHP analysis is usually based on preference ratings of a decision-maker (e.g., 1 to 5, 1 to 7, or 1 to 9, where 1 means least preferred and the highest value means most preferred). The decision-maker assigns preference ratings first, criterion vs. criterion, and creates a Criteria Matrix. Next, he/she assigns preference ratings, alternative vs. alternative, against each criterion. Once these data are collected, AHP is applied to first obtain the rank ordering of criteria. Next, rank ordering of alternatives is done against each criterion, resulting in an Alternative Matrix. Finally, the overall rank ordering of alternative facility locations is obtained by matrix multiplication of the Alternative Matrix and the Criteria Matrix. The most practical aspect of AHP is the 'what if' analysis that the decision-maker can conduct after the initial results, providing valuable sensitivity information of specific criteria relative to other criteria and alternatives.
Keywords: analytic hierarchy process, multinational corporations, plant site selection, preference ratings
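The matrix mechanics described above can be illustrated with a few lines of numpy. The pairwise judgment values below are invented for the sketch (Saaty-style 1-9 comparisons), and the principal-eigenvector weighting shown is one common way to derive priorities, not necessarily the exact variant the authors use.

```python
# Illustrative AHP calculation: Criteria Matrix -> weights,
# Alternative Matrices -> per-criterion scores, then the final
# ranking by matrix multiplication. All judgments are invented.
import numpy as np

def priority_vector(pairwise):
    """Normalized principal eigenvector of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Hypothetical criteria: labor rates, taxes, political stability
criteria = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
w = priority_vector(criteria)

# One alternative-vs-alternative matrix per criterion (2 candidate countries)
alt_scores = np.column_stack([
    priority_vector(np.array([[1.0, 4.0], [1/4, 1.0]])),  # labor rates
    priority_vector(np.array([[1.0, 1/2], [2.0, 1.0]])),  # taxes
    priority_vector(np.array([[1.0, 3.0], [1/3, 1.0]])),  # political stability
])

overall = alt_scores @ w   # Alternative Matrix x Criteria weights
print("overall ranking of the two sites:", overall)
```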
Procedia PDF Downloads 288
126 Comparison and Validation of a dsDNA Biomimetic Quality Control Reference for NGS-Based BRCA CNV Analysis versus MLPA
Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray
Abstract:
Background: There remains a lack of international standard control reference materials for Next Generation Sequencing-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, alleviating sample comparisons to pooled historical population-based data assembly or statistical modelling approaches. We have validated such an approach for BRCA copy number variation analysis using iQRS™-CNVSUITE versus Multiplex Ligation-dependent Probe Amplification (MLPA). Methods: Standard BRCA copy number variation analysis was compared between multiplex ligation-dependent probe amplification and next generation sequencing using a cohort of 198 breast/ovarian cancer patients. Next generation sequencing-based copy number variation analyses of samples spiked with iQRS™ dsDNA biomimetics were performed using the proprietary CNVSUITE software. Multiplex ligation-dependent probe amplification analyses were performed on an ABI-3130 sequencer and analysed with Coffalyser software. Results: Concordance of BRCA copy number variation events between multiplex ligation-dependent probe amplification and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS™-CNVSUITE for BRCA was 100%, allowing for accurate exclusion of any event. The positive predictive value was 99.88%, with no discrepancy between multiplex ligation-dependent probe amplification and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity to 1% (±2.5%) and a range from 100 copies. Traditional training was supplemented by predefining the calibrator-to-sample cut-off (lock-down) for amplicon gain or loss based upon a relative ratio threshold, following training of iQRS™-CNVSUITE using spiked iQRS™ calibrator and control mocks. BRCA copy number variation analysis using iQRS™-CNVSUITE was successfully validated and ISO 15189 accredited, and now enters CE-IVD performance evaluation. Conclusions: The inclusion of a reference control competitor (iQRS™ dsDNA mimetic) in next generation sequencing-based analysis offers a more robust, sample-independent approach for the assessment of copy number variation events compared to multiplex ligation-dependent probe amplification. The approach simplifies data analyses, improves independent sample data analyses, and allows for direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow for single-sample copy number variation analytics and further decentralisation of diagnostics to single patient sample assessment.
Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration
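For readers unfamiliar with the concordance metrics quoted above, the snippet below shows how sensitivity, specificity, PPV and NPV fall out of a confusion matrix. The counts are hypothetical, chosen only to produce figures of the same order as those reported, not the study's actual event tallies.

```python
# Standard diagnostic metrics from hypothetical true/false counts.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # true events detected
        "specificity": tn / (tn + fp),   # non-events correctly excluded
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

print(diagnostic_metrics(tp=840, fp=1, tn=600, fn=1))
```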
Procedia PDF Downloads 66
125 Correlation between Defect Suppression and Biosensing Capability of Hydrothermally Grown ZnO Nanorods
Authors: Mayoorika Shukla, Pramila Jakhar, Tejendra Dixit, I. A. Palani, Vipul Singh
Abstract:
Biosensors are analytical devices with a wide range of applications in biological, chemical, environmental and clinical analysis. A biosensor comprises a bio-recognition layer, which has biomolecules (enzymes, antibodies, DNA, etc.) immobilized over it for detection of the analyte, and a transducer, which converts the biological signal into an electrical signal. The performance of a biosensor depends primarily on the bio-recognition layer, and it therefore has to be chosen wisely. In this regard, nanostructures of metal oxides such as ZnO, SnO2, V2O5 and TiO2 have been explored extensively as bio-recognition layers. Recently, ZnO has attracted the attention of researchers due to its unique properties, such as a high isoelectric point, biocompatibility, stability, high electron mobility and high electron binding energy. Although there have been many reports on the usage of ZnO as a bio-recognition layer, to the authors' knowledge, none has observed a correlation between optical properties, such as defect suppression, and the biosensing capability of the sensor. Here, ZnO nanorods (ZNR) have been synthesized by a low-cost, simple and low-temperature hydrothermal growth process over a platinum (Pt)-coated glass substrate. The ZNR were synthesized in two steps: initially, a seed layer was coated over the substrate (Pt-coated glass), followed by immersion in a nutrient solution of zinc nitrate and hexamethylenetetramine (HMTA), with in situ addition of KMnO4. The addition of KMnO4 was observed to have a profound effect on the growth rate anisotropy of the ZnO nanostructures. Clustered and powdery growth of ZnO was observed without the addition of KMnO4, whereas its addition during growth yielded uniform and crystalline ZNR over the substrate. Moreover, the same has resulted in the suppression of defects, as observed by normalized photoluminescence (PL) spectra, since KMnO4 is a strong oxidizing agent that provides an oxygen-rich growth environment. Further, to explore the correlation between defect suppression and the biosensing capability of the ZNR, glucose oxidase (GOx) was immobilized over them using a physical adsorption technique, followed by drop casting of Nafion. Since the main objective of the work was to analyze the effect of defect suppression on biosensing capability, GOx was chosen as the model enzyme, and electrochemical amperometric glucose detection was performed. The incorporation of KMnO4 during growth resulted in variation of the optical and charge transfer properties of the ZNR, which in turn had a deep impact on the biosensor figures of merit. The sensitivity of the biosensor was found to increase 12-18 times due to the variations introduced by the addition of KMnO4 during growth. Amperometric detection of glucose in a continuously stirred buffer solution was performed. Interestingly, defect suppression was observed to contribute to the improvement of biosensor performance. The detailed mechanism of ZNR growth, along with the overall influence of defect suppression on the sensing capabilities of the resulting enzymatic electrochemical biosensor and the different figures of merit of the biosensor (Glass/Pt/ZNR/GOx/Nafion), will be discussed during the conference.
Keywords: biosensors, defects, KMnO4, ZnO nanorods
Procedia PDF Downloads 282
124 Expressing Locality in Learning English: A Study of English Textbooks for Junior High School Year VII-IX in Indonesia Context
Authors: Agnes Siwi Purwaning Tyas, Dewi Cahya Ambarwati
Abstract:
This paper concerns language learning that develops as habit formation and a constructive process while exercising an oppressive power to construct the learners. As a locus of discussion, the investigation problematizes the transfer of the English language to Indonesian junior high school students through the use of the English textbooks 'Real Time: An Interactive English Course for Junior High School Students Year VII-IX'. English has long performed as a global language, and there is a demand upon non-native English speakers to master the language if they desire to become internationally recognized individuals. Generally, English teachers teach the language in accordance with the nature of language learning, in which they are trained and expected to teach the language within the culture of the target language. This provides a potential soft cultural penetration of a foreign ideology through language transmission. In the context of Indonesia, learning English as an international language is considered dilemmatic. Most English textbooks in Indonesia incorporate cultural elements of the target language, which to some extent may challenge sensitivity towards local cultural values. On the other hand, local teachers demand more English textbooks for junior high school students that can facilitate cultural dissemination of both local and global values and promote learners' cultural traits of both cultures, to avoid misunderstanding and confusion. This also aims to support language learning as a bidirectional process instead of an instrument of oppression. However, sensitizing and localizing this foreign language is not sufficient to restrain its soft infiltration. In due course, domination persists, making English an authoritative language and positioning the locality as 'the other'. Such a critical premise has led to a discursive analysis referring to how the cultural elements of the target language are presented in the textbooks and whether the local characteristics of Indonesia are able to gradually reduce the degree of the foreign oppressive ideology. The three textbooks researched were written by a non-Indonesian author, edited by two Indonesian editors, and published by a local commercial publishing company, PT Erlangga. The analytical elaboration examines the cultural characteristics in the forms of names, terminologies, places, objects and imageries (not the linguistic aspect) of both cultural domains, English and Indonesian. Comparisons as well as categorizations were made to identify the cultural traits of each language and scrutinize the contextual analysis. In the analysis, 128 foreign elements and 27 local elements were found in the textbook for grade VII, 132 foreign elements and 23 local elements in the textbook for grade VIII, and 144 foreign elements and 35 local elements in the grade IX textbook, demonstrating the unequal distribution of both cultures. Even though the ideal pedagogical approach to English learning moves in a different direction by means of inserting local elements, the learners are continuously exposed to the culture of the target language and forced to internalize its concepts of values, which tends to marginalize their native culture.
Keywords: bidirectional process, English, local culture, oppression
Procedia PDF Downloads 268
123 Dynamic Changes in NT-proBNP Levels in Unrelated Donors during Hematopoietic Stem Cells Mobilization
Authors: Natalia V. Minaeva, Natalia A. Zorina, Marina N. Khorobrikh, Philipp S. Sherstnev, Tatiana V. Krivokorytova, Alexander S. Luchinin, Maksim S. Minaev, Igor V. Paramonov
Abstract:
Background. Over the last few decades, the Center for International Blood and Marrow Transplant Research (CIBMTR) and the World Marrow Donor Association (WMDA) have been actively working to ensure the safety of the hematopoietic stem cell (HSC) donation process. Registration of adverse events that may occur during the donation period and establishing a relationship between donation and side effects are included in the WMDA international standards. The level of blood serum N-terminal pro-brain natriuretic peptide (NT-proBNP) is an early marker of myocardial stress. Due to its high analytical sensitivity and specificity, laboratory assessment of NT-proBNP makes it possible to objectively diagnose myocardial dysfunction. It is well known that the main stimulus for proBNP synthesis and secretion from atrial and ventricular cardiac myocytes is myocyte stretch and an increase in myocardial extensibility and pressure in the heart chambers. Aim. The aim of the study was to assess the dynamic changes in the blood serum N-terminal pro-brain natriuretic peptide levels of unrelated donors at various stages of hematopoietic stem cell mobilization. Materials. We examined 133 unrelated donors, including 92 men and 41 women, who were included in the study. The NT-proBNP levels were measured before the start of mobilization, then on the day of apheresis, and after the donation of allogeneic HSC. The relationship between NT-proBNP levels and body mass index (BMI), ferritin, hemoglobin, and white blood cell (WBC) levels was assessed on the day of apheresis. The median age of donors was 34 years. Mobilization of HSCs was managed with filgrastim administration at a dose of 10 μg/kg daily for 4-5 days. The first leukocytapheresis was performed on day 4 from the start of filgrastim administration. Quantitative values of the blood serum NT-proBNP level are presented as median (Me) and first and third quartiles (Q1-Q3). Comparative analysis was carried out using the t-test, and correlation analysis by the Spearman method. Results. The baseline blood serum NT-proBNP levels in all 133 donors were within the reference values (<125 pg/ml) and equaled 21.6 (10.0; 43.3) pg/ml. At the same time, the NT-proBNP level in women was significantly higher than that in men. On the day of HSC apheresis, a significant increase in blood serum NT-proBNP levels was detected, reaching 131.2 (72.6; 165.3) pg/ml (p<0.001), with higher values in female donors. A statistically significant weak inverse correlation was established between the NT-proBNP level and donors' BMI (-0.18, p=0.03), as well as hemoglobin levels (-0.33, p<0.001) and ferritin levels (-0.19, p=0.03). No relationship was established with the WBC levels achieved as a result of HSC mobilization on the day of leukocytapheresis. A day after the apheresis, the blood serum NT-proBNP levels still exceeded the reference values, but showed a decreasing tendency. Conclusion. An increase in the blood serum NT-proBNP level in unrelated donors during the mobilization of HSC was established. Future studies should clarify the reason for this phenomenon, as well as its effects on donors' long-term health.
Keywords: unrelated donors, mobilization, hematopoietic stem cells, N-terminal pro-brain natriuretic peptide
Procedia PDF Downloads 101
122 Analysis of Long-Term Response of Seawater to Change in CO₂, Heavy Metals and Nutrients Concentrations
Authors: Igor Povar, Catherine Goyet
Abstract:
Seawater is subject to multiple external stressors (ES), including rising atmospheric CO2 and ocean acidification, global warming, atmospheric deposition of pollutants, and eutrophication, which deeply alter its chemistry, often on a global scale and, in some cases, to a degree significantly exceeding that in the historical and recent geological record. In ocean systems, the micro- and macronutrients, heavy metals, and phosphorus- and nitrogen-containing components exist in different forms depending on the concentrations of various other species, organic matter, the types of minerals, the pH, etc. The major limitation to assessing more strictly the ES to oceans, such as pollutants (atmospheric greenhouse gases, heavy metals, and nutrients such as nitrates and phosphates), is the lack of a theoretical approach that could predict the ocean's resistance to multiple external stressors. In order to assess the abovementioned ES, this research has applied and developed the buffer theory approach and theoretical expressions of formal chemical thermodynamics for ocean systems as heterogeneous aqueous systems. Thermodynamic expressions of complex chemical equilibria, involving acid-base, complex formation and mineral equilibria, have been deduced. This thermodynamic approach utilizes thermodynamic relationships coupled with original mass balance constraints, in which the solid phases are explicitly expressed. The ocean's sensitivity to different external stressors and changes in driving factors is considered in terms of derived buffering capacities, or buffer factors, for heterogeneous systems. Our investigations have proved that heterogeneous aqueous systems, such as oceans and seas, manifest buffer properties towards all their components, not only towards pH, as has been known so far, for example with respect to carbon dioxide, carbonates, phosphates, Ca2+, Mg2+, heavy metal ions, etc. The derived expressions make it possible to attribute changes in chemical ocean composition to different pollutants. These expressions are also useful for improving current atmosphere-ocean-marine biogeochemistry models. The major research questions to which the research responds are: (i) What kind of contamination is the most harmful for the future ocean? (ii) What are the chemical heterogeneous processes of heavy metal release from sediments and minerals, and what is their impact on the ocean's buffer action? (iii) What will be the long-term response of the coastal ocean to the oceanic uptake of anthropogenic pollutants? (iv) How will the ocean's resistance change in terms of future complex chemical processes and buffer capacities, and its response to external (anthropogenic) perturbations? The ocean's buffer capacities towards its main components are recommended as parameters that should be included in determining the most important ocean factors defining the response of the ocean environment to increasing technogenic loads. The deduced thermodynamic expressions are valid for any combination of chemical composition, or any of the species contributing to the total concentration, as an independent state variable.
Keywords: atmospheric greenhouse gas, chemical thermodynamics, external stressors, pollutants, seawater
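To make the notion of a buffer factor concrete, the classical pH buffer capacity and one plausible generalization to an arbitrary component X are written below. This is an illustrative textbook form, not the authors' full heterogeneous-system expressions, which additionally carry the explicit solid-phase mass balance terms.

```latex
% Classical pH buffer capacity and an illustrative generalization to a
% component X; C_B is added strong base, a_X the activity of X.
\[
  \beta_{\mathrm{pH}} = \frac{\mathrm{d}C_{B}}{\mathrm{d}\,\mathrm{pH}},
  \qquad
  \beta_{X} = \frac{\mathrm{d}C_{X,\mathrm{added}}}{\mathrm{d}\,\mathrm{p}X},
  \qquad
  \mathrm{p}X = -\log_{10} a_{X}
\]
```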
Procedia PDF Downloads 146
121 Multi-Agent System Based Distributed Voltage Control in Distribution Systems
Authors: A. Arshad, M. Lehtonen, M. Humayun
Abstract:
With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology for least latency in tackling the voltage control problem in a distributed manner. This paper proposes a multi-agent-based distributed voltage level control. In this method, a flat architecture of agents is used, and the agents involved in the whole control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage value within statutory limits, as close as possible to the nominal. The total loss cost is the sum of the network losses cost, DG curtailment costs, and voltage damage cost (which is based on a penalty function implementation). The total cost is iteratively calculated for progressively stricter limits by plotting the voltage damage cost and losses cost against a varying voltage limit band. The method provides the optimal limits, closer to the nominal value, with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions, downstream from the controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. At first, a token is generated by the OLTCA at each time step, and it transfers from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA will contact the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or the controlling capabilities of all the downstream control devices are at their limits, then the OLTC is considered as a last resort. For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration, the loss reduction is not very significant. Another observation is that the new, stricter limits calculated by cost optimization move towards the statutory limits of ±10% of the nominal with increasing DG penetration: for 25, 45 and 65% penetration, the calculated limits are ±5, ±6.25 and ±8.75%, respectively. The observed results lead to the conclusion that the novel voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly, but with higher losses. In contrast, case 2 reduces network losses through the proposed iterative method of loss cost optimization by the OLTCA, more slowly over time.
Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids
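A hedged sketch of the token-passing control flow is given below in Python (the paper's simulations were done in MATLAB). The voltage band, the toy network and the agents' correction logic are simplified placeholders, meant only to show the escalation order: local DG/SVC agents in the vicinity first, the OLTC as the last resort.

```python
# Simplified token-passing voltage control sweep (illustrative only).
V_MIN, V_MAX = 0.95, 1.05      # assumed per-unit statutory band

class Agent:
    def __init__(self, name, headroom):
        self.name, self.headroom = name, headroom
    def try_correct(self, node):
        """Apply a local correction if capability remains; report success."""
        if self.headroom <= 0:
            return False
        node["v"] = min(max(node["v"], V_MIN), V_MAX)  # stand-in for real control
        self.headroom -= 1
        print(f"{self.name} corrected node {node['id']}")
        return True

def token_sweep(nodes, local_agents, oltc):
    for node in nodes:                          # token passes node to node
        if V_MIN <= node["v"] <= V_MAX:
            continue                            # no violation, token moves on
        for agent in local_agents.get(node["id"], []):
            if agent.try_correct(node):         # DG/SVC agents in the vicinity
                break
        else:
            oltc.try_correct(node)              # OLTC as last resort

nodes = [{"id": 1, "v": 1.00}, {"id": 2, "v": 1.08}, {"id": 3, "v": 0.92}]
token_sweep(nodes,
            {2: [Agent("SVCA", headroom=1)], 3: []},
            oltc=Agent("OLTCA", headroom=5))
```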
Procedia PDF Downloads 312
120 Prevalence, Antimicrobial Susceptibility Pattern and Public Health Significance of Staphylococcus aureus Isolated from Raw Red Meat at Butchery and Abattoir Houses in Mekelle, Northern Ethiopia
Authors: Haftay Abraha Tadesse
Abstract:
Background: Staphylococcus is a genus of bacteria with worldwide distribution, associated with infections of different sites in humans and animals. They are among the most important causes of infections associated with the consumption of contaminated food. Objective: The objective of this study was to determine the isolates, antimicrobial susceptibility patterns and public health significance of Staphylococcus aureus in raw meat from butchery and abattoir houses of Mekelle, Northern Ethiopia. Methodology: A cross-sectional study was conducted from April to October 2019. Socio-demographic and public health data were collected using a predesigned questionnaire. The raw meat samples were collected aseptically in the butchery and abattoir houses and transported in an ice box to Mekelle University, College of Veterinary Sciences, for isolation and identification of Staphylococcus aureus. Antimicrobial susceptibility tests were performed by the disc diffusion method. Data obtained were cleaned and entered into STATA 22.0, and a logistic regression model with odds ratios was used to assess the association of risk factors with bacterial contamination. A p-value < 0.05 was considered statistically significant. Results: In the present study, 88 out of 250 samples (35.2%) were found to be contaminated with Staphylococcus aureus. Among the raw meat specimens, the positivity rate of Staphylococcus aureus was 37.6% (n=47) in butchery houses and 32.8% (n=41) in abattoir houses. Among the associated risk factors, not using gloves (AOR=0.222; 95% CI: 0.104-0.473), strict separation between clean and dirty areas (AOR=1.37; 95% CI: 0.66-2.86) and poor hand-washing habits (AOR=1.08; 95% CI: 0.35-3.35) were found to be statistically associated with Staphylococcus aureus contamination. All thirty-seven Staphylococcus aureus isolates tested from butchery houses were (100%) sensitive to doxycycline, trimethoprim, gentamicin, sulphamethoxazole, amikacin, CN, co-trimoxazole and nitrofurantoin, whereas they showed resistance to cefotaxime (100%), ampicillin (87.5%), penicillin (75%), B (75%), and nalidixic acid (50%). On the other hand, all ten (100%) Staphylococcus aureus isolates from abattoir houses were sensitive to chloramphenicol, gentamicin and nitrofurantoin, whereas they showed 100% resistance to penicillin, B, AMX, ceftriaxone, ampicillin and cefotaxime. The overall multi-drug resistance pattern for Staphylococcus aureus was 90% and 100% for butchery and abattoir houses, respectively. Conclusion: Staphylococcus aureus was recovered from 35.2% of the raw meat samples collected from the butchery and abattoir houses. More has to be done in the development of hand-washing behavior and the availability of safe water in the butchery houses to reduce the burden of bacterial contamination. The results of the present study highlight the need to implement protective measures against food contamination and to consider alternative drug options. The development of antimicrobial resistance is nearly always a result of repeated therapeutic and/or indiscriminate use of antimicrobials. Regular antimicrobial sensitivity testing helps to select effective antibiotics and to reduce the problems of drug resistance development towards commonly used antibiotics.
Keywords: abattoir house, AMR, butchery house, S. aureus
Procedia PDF Downloads 99
119 Distributed Energy Resources in Low-Income Communities: a Public Policy Proposal
Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi
Abstract:
The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, a concern aligned with theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs/bills. However, simply granting this benefit may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of a policy to replace the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included: mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities low-income consumers would have lower bills with PVDG compared to the SET, and which consumers in a given city would see increased subsidies, which are currently provided for solar energy in Brazil and for the social tariff. An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic issues (the tariff of a particular distribution utility, the radiation of a specific location, etc.). To validate these results, four sensitivity analyses were performed: varying the simultaneity factor between generation and consumption, varying the tariff readjustment rate, zeroing CAPEX, and exempting state tax. The behind-the-meter modality of generation proved more promising than the construction of a shared plant, although there is greater complexity in adopting this modality due to issues related to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks, the need to reinforce roofs). Considering the shared power plant modality, many opportunities are still envisaged, since the risk of investing in such a policy can be mitigated. Furthermore, this modality can be an alternative due to the mitigation of the risk of default, as it allows greater control of users and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil, the continuity of the SET presents more economic benefits than its replacement by PVDG. Nevertheless, the proposed policy offers many opportunities. For future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.
Keywords: low income, subsidy policy, distributed energy resources, energy justice
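The core municipal comparison (a SET-discounted bill versus a PVDG-offset bill) can be sketched in a few lines. Every number below (tariff, discount, simultaneity factor, consumption and PV yield) is an invented placeholder, not the study's data for any municipality.

```python
# Back-of-the-envelope monthly bill comparison (illustrative only).
def set_bill(kwh, tariff=0.15, discount=0.35):
    """Bill under the social electricity tariff discount (assumed rates)."""
    return kwh * tariff * (1 - discount)

def pvdg_bill(kwh, pv_kwh, tariff=0.15, simultaneity=0.3):
    """Bill with behind-the-meter PV: on-the-spot use plus net-metering credit."""
    self_consumed = min(kwh, pv_kwh * simultaneity)  # used as it is generated
    credited = pv_kwh - self_consumed                # injected and credited
    billable = max(kwh - self_consumed - credited, 0.0)
    return billable * tariff

kwh = 120  # assumed low-income monthly consumption, kWh
print(f"SET bill:  ${set_bill(kwh):.2f}")
print(f"PVDG bill: ${pvdg_bill(kwh, pv_kwh=100):.2f}")
```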
Procedia PDF Downloads 115
118 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam
Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck
Abstract:
The fast increase in volume and the complex composition of waste of electrical and electronic equipment (or e-waste) have made it one of the most problematic waste streams worldwide. Precise information on its size at the national, regional and global levels has therefore been highlighted as a prerequisite for a proper management system. However, this is a very challenging task, especially in developing countries where both a formal e-waste management system and the necessary statistical data for e-waste estimation, i.e., data on the production, sale and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered electrical and electronic equipment, which 'invisibly' enters the domestic EEE market and is then used for domestic consumption. The non-registered/invisible and (in most cases) illicit nature of this flow makes it difficult or even impossible to capture in any statistical system. The e-waste generated from it is thus often uncounted in current e-waste estimations based on statistical market data. Therefore, this study focuses on enhancing e-waste estimation in developing countries and proposing a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced Input-Output Analysis model (the Sale-Stock-Lifespan model) has been integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps improve the quality of input data for modeling (i.e., it performs data consolidation to create more accurate lifespan profiles and models dynamic lifespans to take into account their changes over time), through which the quality of e-waste estimation can be improved. To demonstrate the above objectives, a case study on televisions (TVs) in Vietnam has been employed. The results show that the amount of waste TVs in Vietnam has increased fourfold since 2000. This upward trend is expected to continue: in 2035, a total of 9.51 million TVs are predicted to be discarded. Moreover, estimation of the non-registered TV inflow shows that it might on average have contributed about 15% of the total TVs sold on the Vietnamese market during the period 2002 to 2013. To tackle potential uncertainties associated with the estimation models and input data, sensitivity analysis has been applied. The results show that both the waste estimate and the non-registered inflow estimate depend on two parameters: the number of TVs used per household and the lifespan. In particular, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow in the period 2002-2013 increases by 0.95%. However, it decreases from 27% to 15% when the constant unadjusted lifespan is replaced by the dynamic adjusted lifespan. The effect of these two parameters on the amount of waste TV generation for each year is more complex and non-linear over time. To conclude, despite remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow to domestic consumption. It can therefore be further improved in the future with more knowledge and data.
Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam
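The sales-lifespan logic at the heart of such models can be sketched as a cohort convolution: units sold in a given year are discarded in later years according to a lifespan distribution. The sales trajectory and the Weibull lifespan parameters below are illustrative assumptions, not the Vietnamese TV data, and the paper's Sale-Stock-Lifespan model additionally consolidates stock data and lets the lifespan vary over time.

```python
# Toy sales-lifespan e-waste estimate (illustrative parameters).
import numpy as np
from scipy.stats import weibull_min

years = np.arange(2000, 2036)
sales = np.interp(years, [2000, 2035], [0.5e6, 3.0e6])  # units/year, assumed

shape, scale = 2.0, 10.0   # assumed static Weibull lifespan (years)
ages = np.arange(0, 40)
p_discard = np.diff(weibull_min.cdf(np.append(ages, 40), shape, scale=scale))

waste = np.zeros_like(years, dtype=float)
for i in range(len(years)):
    for age, p in enumerate(p_discard):
        j = i + age
        if j < len(years):
            waste[j] += sales[i] * p   # cohort sold in year i, discarded at `age`

print(f"estimated waste TVs in 2035: {waste[-1]:,.0f}")
```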
Procedia PDF Downloads 248
117 Sensitivity Improvement of Optical Ring Resonator for Strain Analysis with the Direction of Strain Recognition Possibility
Authors: Tayebeh Sahraeibelverdi, Ahmad Shirazi Hadi Veladi, Mazdak Radmalekshah
Abstract:
Optical sensors have become attractive due to their precision, low power consumption, and intrinsic immunity to electromagnetic interference. Among waveguide optical sensors, cavity-based ones stand out for their high Q-factor. Micro-ring resonators, as a potential platform, have been investigated for various applications, from biosensors to pressure sensors, thanks to their sensitive ring structure responding to any small change in the refractive index. Furthermore, these micron-size structures can come in arrays, bringing the opportunity to place each resonance at a specific wavelength and address the rings in this way. Another exciting application is applying a strain to the ring, making it an optical strain gauge, whereas traditional strain gauges are based on piezoelectric material, require electrical wiring, and are about fifty times larger. Any physical element that impacts the waveguide cross-section, the waveguide's elasto-optic properties, or the ring circumference can play a role; of these, the ring size change has the largest effect. Here, an engineered ring structure is investigated to study the strain effect on the ring resonance wavelength shift and its potential for more sensitive strain devices. At the same time, these devices can measure any strain by being mounted on the surface of interest. The idea is to change the 'O'-shaped ring to a 'C'-shaped ring with a small opening, starting from 2π/360, or one degree. We used the MODE Solutions module of Lumerical software to investigate the effect of changing the ring's opening and the shift induced by applied strain. The designed ring radius is three microns, in a silicon-on-insulator ring that can be fabricated by standard complementary metal-oxide-semiconductor (CMOS) micromachining. The wavelength shifts from a 1-degree to a 6-degree ring opening have been investigated. Opening the ring by 1 degree reduces the ring's quality factor from 3000 to 300, an order-of-magnitude Q-factor reduction. Assuming a strain that widens the ring opening from 1 degree to 6 degrees, our simulation results show a negligible further Q-factor reduction, from 300 to 280. A ring resonator quality factor can reach up to 10⁸, where an order-of-magnitude reduction is negligible. The resonance showed a blue shift, with wavelengths of 1581, 1579, 1578 and 1575 nm for 1-, 2-, 4- and 6-degree ring openings, respectively. This design can find the direction of the strain by placing the opening on different parts of the ring; moreover, by addressing the specified wavelength, we can precisely find the direction. This opens a significant opportunity to find cracks and characterize surface mechanical properties very specifically and precisely. The idea can be implemented in polymer ring resonators, which can come with a flexible substrate and can be very sensitive to any strain that makes the two ends of the ring at the slit come closer or move further apart.
Keywords: optical ring resonator, strain gauge, strain sensor, surface mechanical property analysis
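The reported shift can be read through the standard ring resonance condition. The relations below are the textbook form (with L the ring circumference, 2πr for a closed ring, and n_eff the effective index), given here for orientation rather than as the authors' simulation model.

```latex
% Ring resonance condition and the first-order origin of the strain
% shift: strain enters through the circumference L and the elasto-optic
% change in the effective index n_eff.
\[
  m\,\lambda_{\mathrm{res}} = n_{\mathrm{eff}}\,L, \qquad
  \frac{\Delta\lambda_{\mathrm{res}}}{\lambda_{\mathrm{res}}}
  \approx \frac{\Delta L}{L} + \frac{\Delta n_{\mathrm{eff}}}{n_{\mathrm{eff}}}
\]
```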
Procedia PDF Downloads 127
116 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic
Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova
Abstract:
Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for distinguishing SARS-CoV-2 from non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. The training process used a five-fold (outer) nested, repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as the case group and the non-Covid ARVI dataset as the control group to train and test the machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the support vector machine with a radial basis function kernel. Conclusion: ML models trained on MALDI MS spectra of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.
Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification
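The study's pipeline used R's caret; a scikit-learn equivalent of the nested scheme described above (repeated stratified ten-fold inner CV for tuning, a stratified five-fold outer CV for scoring) is sketched below for one model, SVM-R. The synthetic feature matrix stands in for the MALDI-TOF peak intensities, and the hyperparameter grid is an assumption.

```python
# Nested cross-validation sketch for an RBF SVM (stand-in for caret's SVM-R).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     StratifiedKFold, cross_val_score)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 179 samples mirrors the Covid 2020 (108) + non-Covid ARVI (71) split;
# features are synthetic placeholders for peak intensities.
X, y = make_classification(n_samples=179, n_features=200, random_state=0)

inner = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

svm_r = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(svm_r,
                      {"svc__C": [1, 10], "svc__gamma": ["scale", 0.01]},
                      cv=inner, scoring="accuracy")

scores = cross_val_score(search, X, y, cv=outer, scoring="accuracy")
print(f"nested-CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```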
Procedia PDF Downloads 110
115 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics
Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca
Abstract:
The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. 237 samples were used: 108 of them were authentic honey, and 129 corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface was used, under MATLAB version 5.3, to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) was chosen together with Partial Least Squares Linear Discriminant Analysis (PLS-DA). Different estimators of the predictive capacity of the model were compared; these were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples and then used to study the validation samples. The calibrated model that combines the potential function method with PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. By use of Potential Functions (PF) and Partial Least Squares Linear Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey, with high sensitivity and power of discrimination.
Keywords: adulteration, multivariate analysis, potential functions, regression
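The SNV pretreatment mentioned above is simple enough to show in full: each spectrum is centred on its own mean and scaled by its own standard deviation, removing multiplicative scatter effects. The random matrix below merely stands in for the real 650-7000 cm⁻¹ spectra.

```python
# Standard Normal Variate (SNV) pretreatment of a spectra matrix.
import numpy as np

def snv(spectra):
    """Row-wise SNV: (x - mean(x)) / std(x) for each spectrum."""
    spectra = np.asarray(spectra, dtype=float)
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

raw = np.random.default_rng(0).random((237, 1200))  # 237 samples, assumed grid
print(snv(raw).mean(axis=1)[:3])   # ~0 for every sample after SNV
```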
Procedia PDF Downloads 126114 Computer Based Identification of Possible Molecular Targets for Induction of Drug Resistance Reversion in Multidrug Resistant Mycobacterium Tuberculosis
Authors: Oleg Reva, Ilya Korotetskiy, Marina Lankina, Murat Kulmanov, Aleksandr Ilin
Abstract:
Molecular docking approaches are widely used for the design of new antibiotics and the modeling of antibacterial activities of numerous ligands which bind specifically to active centers of indispensable enzymes and/or key signaling proteins of pathogens. Widespread drug resistance among pathogenic microorganisms calls for the development of new antibiotics specifically targeting important metabolic and information pathways. A generally recognized problem is that almost all molecular targets have already been identified, and it is becoming increasingly difficult to design innovative antibacterial compounds to combat drug resistance. A promising way to overcome the drug resistance problem is the induction of drug resistance reversion by supplementary medicines, improving the efficacy of conventional antibiotics. In contrast to well-established computer-based drug design, modeling of drug resistance reversion is still in its infancy. In this work, we proposed an approach to the identification of compensatory genetic variants reducing the fitness cost associated with the acquisition of drug resistance by pathogenic bacteria. The approach was based on an analysis of the population genetics of Mycobacterium tuberculosis and on the results of experimental modeling of the drug resistance reversion induced by a new anti-tuberculosis drug, FS-1. The latter is an iodine-containing nanomolecular complex that passed clinical trials and was approved as a new medicine against MDR-TB in Kazakhstan. Isolates of M. tuberculosis obtained at different stages of the clinical trials, and also from laboratory animals infected with an MDR-TB strain, were characterized for antibiotic resistance, and their genomes were sequenced by the paired-end Illumina HiSeq 2000 technology. A steady increase in sensitivity to conventional anti-tuberculosis antibiotics in a series of isolates treated with FS-1 was registered, despite the fact that the canonical drug resistance mutations identified in the genomes of these isolates remained intact. It was hypothesized that the drug resistance phenotype in M. tuberculosis requires an adjustment of the activities of many genes to compensate the fitness cost of the drug resistance mutations. FS-1 caused an aggravation of the fitness cost and the removal of drug-resistant variants of M. tuberculosis from the population. This process caused a significant increase in the genetic heterogeneity of the M. tuberculosis population that was not observed in the positive and negative controls (infected laboratory animals left untreated and treated solely with the antibiotics). A large-scale search for linkage disequilibrium associations between the drug resistance mutations and genetic variants in other genomic loci allowed the identification of target proteins, which could be influenced by supplementary drugs to increase the fitness cost of drug resistance and deprive the drug-resistant bacterial variants of their competitiveness in the population. The approach will be used to improve the efficacy of FS-1 and also for computer-based design of new drugs to combat drug-resistant infections.Keywords: complete genome sequencing, computational modeling, drug resistance reversion, Mycobacterium tuberculosis
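As a rough illustration of the genome-wide association search described above, the sketch below tests each variant locus for linkage with a canonical drug resistance mutation across isolates using Fisher's exact test; the input matrices, file names, and the Bonferroni threshold are assumptions, not the authors' pipeline.

```python
# For each candidate locus, test association between presence of the
# variant and presence of a drug resistance mutation across isolates.
import numpy as np
from scipy.stats import fisher_exact

genotypes = np.load("isolate_variants.npy")  # placeholder: isolates x loci, 0/1
resistant = np.load("resistance_flag.npy")   # placeholder: 0/1 per isolate

pvals = []
for locus in range(genotypes.shape[1]):
    v = genotypes[:, locus]
    table = [[np.sum((v == 1) & (resistant == 1)), np.sum((v == 1) & (resistant == 0))],
             [np.sum((v == 0) & (resistant == 1)), np.sum((v == 0) & (resistant == 0))]]
    pvals.append(fisher_exact(table)[1])

# Loci passing a Bonferroni-corrected threshold are candidate compensatory loci.
hits = [i for i, p in enumerate(pvals) if p < 0.05 / len(pvals)]
print(f"{len(hits)} loci in linkage disequilibrium with resistance mutations")
```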
Procedia PDF Downloads 263113 Spectral Responses of the Laser Generated Coal Aerosol
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki
Abstract:
Characterization of the spectral responses of light absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. The residential or domestic combustion of coal is one of the dominant LAC sources. According to some related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol remains limited in the literature. There are many reasons for this, ranging from the difficulties associated with controlled burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analysis and the interpretation of the measured optical data, to the many analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well controlled laboratory conditions and also makes the investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based either on transmission measurements made on filter-accumulated aerosol or on indirect deduction from parallel measurements of the scattering and extinction coefficients using free-floating sampling. In the former the accuracy, and in the latter the sensitivity, limits the applicability of these approaches. Although the scientific community agrees that aerosol-phase photoacoustic spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced here. The optical absorption and scattering coefficients, as well as their wavelength dependency, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The wavelength dependencies, quantified as the absorption and scattering Ångström exponents (AAE and SAE), are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation
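For reference, the Ångström exponents quantifying the wavelength dependence mentioned above are conventionally obtained from a power-law fit, coef(λ) ∝ λ^(−AE), i.e. a straight line in log-log space. A minimal sketch follows; the four wavelengths and coefficient values are placeholders, not measurements from the study.

```python
# Absorption (AAE) and scattering (SAE) Angstrom exponents from a
# linear fit of log(coefficient) against log(wavelength).
import numpy as np

wavelengths = np.array([266.0, 355.0, 532.0, 1064.0])  # nm, placeholder values
babs = np.array([48.0, 31.0, 17.0, 7.5])               # Mm^-1, placeholder values
bsca = np.array([25.0, 19.0, 13.0, 8.0])               # Mm^-1, placeholder values

aae = -np.polyfit(np.log(wavelengths), np.log(babs), 1)[0]
sae = -np.polyfit(np.log(wavelengths), np.log(bsca), 1)[0]
print(f"AAE = {aae:.2f}, SAE = {sae:.2f}")
```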
Procedia PDF Downloads 361112 Conceptual and Preliminary Design of Landmine Searching UAS at Extreme Environmental Condition
Authors: Gopalasingam Daisan
Abstract:
Landmines and unexploded ammunition pose a significant threat to people and animals; after a war, landmines remain in the ground and weigh heavily on civilian security. Children are at the highest risk because they are curious; after all, an unexploded bomb can look like a tempting toy to an inquisitive child. The initial step in designing a UAS (Unmanned Aircraft System) for landmine detection is to choose an appropriate and effective sensor to locate the landmines and other unexploded ammunition. The sensor weight and the weight of the components supporting the sensor are taken as the payload weight. The mission requirement is to find the landmines in a particular area by flying a path that covers the entire vicinity of the desired area. The weight of the UAV (Unmanned Aerial Vehicle) can be estimated in the first phase of the design by various previously established techniques with good accuracy. The next crucial part of the design is to calculate the power requirement and the wing loading. The matching-plot technique is used to determine the thrust-to-weight ratio, making this step both straightforward and precise. The wing loading can be calculated easily from the stall equation. After these calculations, the wing area is determined from the wing loading equation, and the required power is calculated from the thrust-to-weight ratio. According to the power requirement, an appropriate engine can be selected from those available on the market, and the wing geometric parameters are chosen based on the conceptual sketch. An important step in the wing design is to choose a proper aerofoil that ensures a sufficient lift coefficient to satisfy the requirements. The next component is the tail; the tail area and other related parameters can be estimated or calculated to counteract the effect of the wing pitching moment. As the vertical tail design depends on many parameters, only the initial sizing can be done in this phase. The fuselage is another major component; it is sized based on the slenderness ratio, and its shape is determined by the sensor size so that the sensor fits under the fuselage. The landing gear is selected based on the controllability and stability requirements. The minimum and maximum wheel track and wheelbase can be determined from the crosswind and overturn angle requirements. The minor components of the landing gear design and estimation are not the focus of this project. Another important task is to estimate the weight of the major components using empirical relations and to assign a mass to each such component. The CG and moment of inertia are also determined for each component separately. The sensitivity of the weight calculation is taken into consideration to avoid extra material requirements and to reduce the cost of the design. Finally, the aircraft performance is calculated, especially the V-n (velocity versus load factor) diagram for different flight conditions, such as undisturbed flight and flight with gust velocity.Keywords: landmine, UAS, matching plot, optimization
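A minimal sketch of the stall-equation wing sizing and power-requirement steps described above; all numbers (takeoff mass, stall speed, maximum lift coefficient, thrust-to-weight ratio) are illustrative assumptions, not values from the study.

```python
# Wing loading from the stall condition, wing area from the estimated
# takeoff weight, and required power from a chosen thrust-to-weight ratio.
rho = 1.225       # air density at sea level, kg/m^3
v_stall = 12.0    # assumed stall speed, m/s
cl_max = 1.4      # assumed maximum lift coefficient of the aerofoil
m_to = 15.0       # assumed takeoff mass incl. sensor payload, kg
g = 9.81          # gravitational acceleration, m/s^2

W = m_to * g
wing_loading = 0.5 * rho * v_stall**2 * cl_max   # W/S from the stall equation, N/m^2
S = W / wing_loading                             # required wing area, m^2

t_over_w = 0.45   # assumed thrust-to-weight ratio from the matching plot
v_cruise = 20.0   # assumed cruise speed, m/s
power_req = t_over_w * W * v_cruise              # P = T * V, watts
print(f"W/S = {wing_loading:.1f} N/m^2, S = {S:.2f} m^2, P = {power_req:.0f} W")
```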
Procedia PDF Downloads 170111 A User-Side Analysis of the Public-Private Partnership: The Case of the New Bundang Subway Line in South Korea
Authors: Saiful Islam, Deuk Jong Bae
Abstract:
The purpose of this study is to examine citizen satisfaction and the competitiveness of a Public Private Partnership (PPP) project. The study focuses on PPP in the transport sector and investigates the New Bundang Subway Line (NBL) in South Korea as a case study. Most PPP studies are dominated by the interests of the public and private sectors, classified into three major areas: policy, finance, and management. This study explores the user perspective by assessing customer satisfaction with NBL cost and service quality, as well as the competitiveness of the NBL compared to alternative transport modes serving the Jeongja – Gangnam trip or vice versa. The regular Bundang Subway Line, the New Bundang Subway Line, bus, and private vehicle were selected as the alternative transport modes. The study analysed customer satisfaction with the NBL and citizens' preferences among alternative transport modes based on a survey in Bundang district, South Korea. Respondents were residents and employees who live or work in Bundang city and were divided into the following areas: Pangyo, Jeongjae – Sunae, Migeun – Ori – Jukjeon, and Imae – Yatap – Songnam. The survey was conducted in January 2015 over two weeks, and 753 responses were gathered. By applying the hedonic utility approach, the factors affecting the frequency of using the NBL were found to be overall customer satisfaction, convenience of access, and the socio-economic and demographic characteristics of the individual. In addition, by applying the Analytic Hierarchy Process (AHP) method, the criteria influencing the decision to select alternative transport modes were identified. Those factors, along with the authors' judgement of the alternative transport modes and their associated criteria and sub-criteria, produced a priority list of user preferences regarding transport mode options. The study found that, overall, the regular Bundang Subway Line (BL), which was built and operated under a conventional procurement method, was selected as the most preferred transport mode due to its cost competitiveness. However, at the sub-criteria level, the NBL is competitive on service quality, particularly on journey time. A sensitivity analysis showed that the NBL could become the first choice of transport by increasing the weight associated with its cost by 0.05. This means the NBL would need to reduce either its fare or its transfer fee, or combine those two cost components, to reduce the current total cost by 25%. The competitiveness of the NBL could also be improved by increasing its convenience, for example by constructing an additional station or providing more access modes. Although these convenience improvements would require a few extra minutes of journey time, users found this acceptable. The findings and policy suggestions can contribute to the next phase of NBL development, showing that consideration should be given to the citizens' voice. The case study results also contribute to the literature on PPP projects, specifically from a user-side perspective.Keywords: public private partnership, customer satisfaction, public transport, new Bundang subway line
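A hedged sketch of the AHP priority computation used in the mode-choice analysis above: criteria weights are taken from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the coherence of the judgements. The 3x3 matrix (cost, journey time, access convenience) is invented for illustration and does not reflect the survey data.

```python
# AHP: the principal eigenvector of the pairwise comparison matrix gives
# the criteria weights; CR < 0.1 indicates acceptably consistent judgements.
import numpy as np

# Illustrative pairwise judgements for cost, journey time, access convenience.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # random index RI = 0.58 for n = 3
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```

The sensitivity analysis described in the abstract then amounts to perturbing one weight (e.g., cost, by 0.05), renormalizing, and recomputing the mode ranking.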
Procedia PDF Downloads 352110 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators
Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy
Abstract:
Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner and the radiant and convective sections. Natural gas is burned through staged diffusion flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed to the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emission. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation process. Moreover, to optimize burner operating conditions with regard to NOₓ emission, field characterization and measurements are usually performed. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD is more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG: the commercial software ANSYS Fluent and the open-source software OpenFOAM. The RANS (Reynolds-Averaged Navier–Stokes) equations, closed by the k-epsilon turbulence model and combined with the Eddy Dissipation Concept for combustion, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two packages are compared and confronted with experimental data as a means to assess the numerical modelling. Flame temperatures and chemical composition are used as reference fields for this validation. The results show fair agreement between the experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristics Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated to the physics of the flow. CFD is, therefore, a useful tool for providing insight into NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for field tune-up.Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators
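As a hedged illustration of the mesh sensitivity step mentioned above, the post-processing snippet below applies Richardson extrapolation and the grid convergence index (GCI) to a monitored quantity on three successively refined meshes; the temperature values, refinement ratio, and safety factor are invented, and the snippet is solver-agnostic rather than part of the Fluent/OpenFOAM setup.

```python
# Grid Convergence Index on three meshes with constant refinement ratio r;
# f1 is the finest-mesh value of the monitored quantity.
import math

f1, f2, f3 = 1873.0, 1861.0, 1835.0  # e.g. peak flame temperature, K (invented)
r = 2.0                              # mesh refinement ratio
Fs = 1.25                            # safety factor typical for three-mesh studies

p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)  # observed order of accuracy
gci_fine = Fs * abs((f2 - f1) / f1) / (r**p - 1)   # relative uncertainty on f1
print(f"observed order p = {p:.2f}, GCI(fine) = {100 * gci_fine:.2f}%")
```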
Procedia PDF Downloads 116109 Mycophenolate-Induced Disseminated TB in a PPD-Negative Patient
Authors: Megan L. Srinivas
Abstract:
Individuals with underlying rheumatologic diseases such as dermatomyositis may not respond adequately to tuberculin (PPD) skin tests, creating false negative results. These illnesses are frequently treated with immunosuppressive therapy, making proper identification of TB infection imperative. A 59-year-old Filipino man was diagnosed with dermatomyositis on the basis of rash, electromyography, and muscle biopsy. He was initially treated with IVIG infusions and transitioned to oral prednisone and mycophenolate. The patient's symptoms improved on this regimen. Six months after starting mycophenolate, the patient began having fevers, night sweats, and a productive cough without hemoptysis. He had moved from the Philippines 5 years prior to the dermatomyositis diagnosis, denied sick contacts, and was PPD negative both at immigration and immediately prior to starting mycophenolate treatment. A third PPD was negative following the onset of these new symptoms. He was treated for community-acquired pneumonia, but symptoms worsened over 10 days, and he developed watery diarrhea and a growing non-tender, non-mobile mass on the left side of his neck. A chest x-ray demonstrated a cavitary lesion in the right upper lobe suspicious for TB that had not been present one month earlier. Chest CT corroborated this finding, also exhibiting necrotic hilar and paratracheal lymphadenopathy. Neck CT demonstrated the left-sided mass to be cervical chain lymphadenopathy. Expectorated sputum and stool samples contained acid-fast bacilli (AFB), with cultures growing TB bacteria. Fine-needle biopsy of the neck mass (scrofula) also exhibited AFB. Brain MRI showed nodular enhancement suspected to be a tuberculoma. Mycophenolate was discontinued, and dermatomyositis treatment was switched to oral prednisone with a 3-day course of IVIG. The patient's infection showed sensitivity to standard RIPE (rifampin, isoniazid, pyrazinamide, and ethambutol) treatment. Within a week of starting RIPE, the patient's diarrhea subsided, the scrofula diminished, and symptoms significantly improved. By the end of treatment week 3, the patient's sputum no longer contained AFB; he was removed from isolation and discharged to continue RIPE at home. He was discharged on oral prednisone, which effectively addressed his dermatomyositis. This case illustrates the unreliability of PPD tests in patients with long-term inflammatory diseases such as dermatomyositis. Other immunosuppressive therapies (adalimumab, etanercept, and infliximab) have been associated with the conversion of latent TB to disseminated TB. Mycophenolate is another immunosuppressive agent with similar mechanistic properties. Thus, it is imperative that patients with long-term inflammatory diseases and TB risk factors who are initiating immunosuppressive therapy receive a TB blood test (such as a QuantiFERON-TB Gold assay) prior to the initiation of therapy to ensure that latent TB is unmasked before it can evolve into a disseminated form of the disease.Keywords: dermatomyositis, immunosuppressant medications, mycophenolate, disseminated tuberculosis
Procedia PDF Downloads 208108 Carbon Nanotubes (CNTs) as Multiplex Surface Enhanced Raman Scattering Sensing Platforms
Authors: Pola Goldberg Oppenheimer, Stephan Hofmann, Sumeet Mahajan
Abstract:
Owing to its fingerprint molecular specificity and high sensitivity, surface-enhanced Raman scattering (SERS) is an established analytical tool for chemical and biological sensing capable of single-molecule detection. A strong Raman signal can be generated from SERS-active platforms provided the analyte is within the enhanced plasmon field generated near a noble-metal nanostructured substrate. The key requirement for generating strong plasmon resonances to provide this electromagnetic enhancement is an appropriate metal surface roughness. Controlling nanoscale features to generate these regions of high electromagnetic enhancement, the so-called SERS 'hot-spots', is still a challenge. Significant advances have been made in SERS research, with wide-ranging techniques to generate substrates with tunable size and shape of the nanoscale roughness features. Nevertheless, the development and application of SERS have been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable, and addressable SERS substrates with high enhancements is of profound interest for miniaturised sensing devices. Carbon nanotubes (CNTs) have concurrently been a topic of extensive research; however, their application in plasmonics has only recently begun to gain interest. CNTs can provide low-cost, large-active-area patternable substrates which, coupled with appropriate functionalization, can provide advanced SERS platforms. Herein, advanced methods to generate CNT-based SERS-active detection platforms will be discussed. First, a novel electrohydrodynamic (EHD) lithographic technique will be introduced for patterning CNT-polymer composites, providing a straightforward, single-step approach for generating high-fidelity sub-micron-sized nanocomposite structures within which anisotropic CNTs are vertically aligned. The created structures are readily fine-tuned, an important requirement for optimizing SERS to obtain the highest enhancements, with each of the EHD-patterned CNT structural units functioning as an isolated sensor. Further, gold-functionalized vertically aligned CNT forests (VACNTFs) are fabricated as SERS micro-platforms. The VACNTs' diameter and density play an important role in the Raman signal strength, highlighting the importance of structural parameters previously overlooked in designing and fabricating optimized CNT-based SERS nanoprobes. VACNT forests patterned into predesigned pillar structures are further utilized for multiplex detection of bio-analytes. Since CNTs exhibit electrical conductivity and unique adsorption properties, these are further harnessed in the development of novel chemical and bio-sensing platforms.Keywords: carbon nanotubes (CNTs), EHD patterning, SERS, vertically aligned carbon nanotube forests (VACNTF)
Procedia PDF Downloads 333107 Prevalence, Antimicrobial Susceptibility Pattern and Public Health Significance for Staphylococcus aureus of Isolated From Raw Red Meat at Butchery and Abattoir House in Mekelle, Northern Ethiopia
Authors: Haftay Abraha Tadesse
Abstract:
Background: Staphylococcus is a genus of worldwide distributed bacteria associated with several infections of different sites in humans and animals. They are among the most important causes of infection associated with the consumption of contaminated food. Objective: The objective of this study was to determine the isolates, antimicrobial susceptibility patterns, and public health significance of Staphylococcus aureus in raw meat from butchery and abattoir houses of Mekelle, Northern Ethiopia. Methodology: A cross-sectional study was conducted from April to October 2019. Sociodemographic and public health significance data were collected using a predesigned questionnaire. The raw meat samples were collected aseptically in the butchery and abattoir houses and transported in an ice box to Mekelle University, College of Veterinary Sciences, for isolation and identification of Staphylococcus aureus. Antimicrobial susceptibility tests were performed by the disc diffusion method. Data obtained were cleaned and entered into STATA 22.0, and a logistic regression model with odds ratios was used to assess the association of risk factors with bacterial contamination. A p-value < 0.05 was considered statistically significant. Results: In the present study, 88 out of 250 samples (35.2%) were found to be contaminated with Staphylococcus aureus. The positivity rates of Staphylococcus aureus in raw meat specimens were 37.6% (n=47) in butchery houses and 32.8% (n=41) in abattoir houses. Among the associated risk factors, not using gloves (AOR=0.222; 95% CI: 0.104-0.473), strict separation between clean and dirty areas (AOR=1.37; 95% CI: 0.66-2.86), and poor hand-washing habits (AOR=1.08; 95% CI: 0.35-3.35) were found to be statistically significant and associated with Staphylococcus aureus contamination. All thirty-seven Staphylococcus aureus isolates checked from butchery houses were (100%) sensitive to doxycycline, trimethoprim, gentamicin, sulphamethoxazole, amikacin, CN, co-trimoxazole, and nitrofurantoin, whereas they showed resistance to cefotaxime (100%), ampicillin (87.5%), penicillin (75%), B (75%), and nalidixic acid (50%). On the other hand, all Staphylococcus aureus isolates from abattoir houses (n=10) showed 100% sensitivity to chloramphenicol, gentamicin, and nitrofurantoin, whereas they showed 100% resistance to penicillin, B, AMX, ceftriaxone, ampicillin, and cefotaxime. The overall multidrug resistance pattern for Staphylococcus aureus was 90% in butchery houses and 100% in abattoir houses. Conclusion: Staphylococcus aureus was recovered from 35.2% of the raw meat samples collected from the butchery and abattoir houses. More has to be done in the development of hand-washing behavior and the availability of safe water in the butchery houses to reduce the burden of bacterial contamination. The present findings highlight the need to implement protective measures against food contamination and to consider alternative drug options. The development of antimicrobial resistance is nearly always the result of repeated therapeutic and/or indiscriminate use of antimicrobials. Regular antimicrobial sensitivity testing helps to select effective antibiotics and to reduce the problem of drug resistance developing towards commonly used antibiotics.Keywords: abattoir houses, antimicrobial resistance, butchery houses, Ethiopia, Staphylococcus aureus, MDR
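A minimal sketch of the adjusted odds ratio computation described in the methodology, using Python/statsmodels instead of the authors' STATA workflow; the file name and column names are placeholders.

```python
# Multivariable logistic regression; exponentiated coefficients give
# adjusted odds ratios (AOR) with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("meat_samples.csv")  # placeholder: one row per sample,
# with contaminated, no_gloves, separation, poor_handwashing coded 0/1.
model = smf.logit("contaminated ~ no_gloves + separation + poor_handwashing",
                  data=df).fit()

aor = np.exp(model.params)            # adjusted odds ratios
ci = np.exp(model.conf_int())         # 95% CI on the odds-ratio scale
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```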
Procedia PDF Downloads 75106 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning
Authors: Ali Kazemi
Abstract:
The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated the vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (retail, groceries, online services, etc.), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods carry varying levels of fraud risk. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis
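A minimal sketch of the isolation-forest half of the pipeline described above (the autoencoder branch is analogous, flagging transactions with high reconstruction error); the file name, feature columns, and contamination rate are assumptions, and labels are used only for evaluation, matching the unsupervised setup.

```python
# Unsupervised anomaly scoring with an isolation forest on engineered
# transaction features (normalized amounts, encoded categories).
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import f1_score, roc_auc_score

df = pd.read_csv("transactions.csv")       # placeholder dataset
X = pd.get_dummies(df[["amount", "hour", "merchant_category", "payment_type"]])
X[["amount", "hour"]] = StandardScaler().fit_transform(X[["amount", "hour"]])

iso = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
iso.fit(X)

pred = (iso.predict(X) == -1).astype(int)  # 1 = flagged as anomalous
score = -iso.score_samples(X)              # higher = more anomalous
print("F1:", f1_score(df["is_fraud"], pred))
print("ROC AUC:", roc_auc_score(df["is_fraud"], score))
```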
Procedia PDF Downloads 59105 Open Science Philosophy, Research and Innovation
Authors: C. Ardil
Abstract:
Open Science translates the understanding and application of various theories and practices in open science philosophy, systems, paradigms and epistemology. Open Science originates with the premise that universal scientific knowledge is a product of a collective scholarly and social collaboration involving all stakeholders and knowledge belongs to the global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact and benefits of science and to accelerate advancement of knowledge by making it more reliable, more efficient and accurate, better understandable by society and responsive to societal challenges, and has the potential to enable growth and innovation through reuse of scientific results by all stakeholders at all levels of society, and ultimately contribute to growth and competitiveness of global society. Open Science is a global movement to improve accessibility to and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods. Open Science represents a novel systematic approach to the scientific process, shifting from the standard practices of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage in the research process, based on cooperative work and diffusing scholarly knowledge with no barriers and restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research results (publications and the research data) publicly accessible in digital format with no limitations. Open Science is about extending the principles of openness to the whole research cycle, fostering, sharing and collaboration as early as possible, thus entailing a systemic change to the way science and research is done. Open Science is the ongoing transition in how open research is carried out, disseminated, deployed, and transformed to make scholarly research more open, global, collaborative, creative and closer to society. Open Science involves various movements aiming to remove the barriers for sharing any kind of output, resources, methods or tools, at any stage of the research process. Open Science embraces open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, or research crowdfunding. The recognition and adoption of open science practices, including open science policies that increase open access to scientific literature and encourage data and code sharing, is increasing in the open science philosophy. 
Open science policies are motivated by ethical, moral, or utilitarian arguments, such as the right to access the digital research literature, open-source research, the accumulation of scientific data, research indicators, transparency in academic practice, and reproducibility. The open science philosophy is adopted primarily to demonstrate the benefits of open science practices: researchers use open science applications to their own advantage in order to gain more offers, increase citations, and attract media attention, potential collaborators, career opportunities, donations, and funding opportunities. Within the open science philosophy, open data findings are evidence that open science practices provide significant benefits to researchers in research creation, collaboration, communication, and evaluation compared to more traditional closed science practices. Open science also raises concerns, such as the rigor of peer review, practical matters of financing and career development, and the sacrifice of author rights. Therefore, researchers are recommended to implement open science research within the framework of existing academic evaluation and incentives. As a result, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions.Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data
Procedia PDF Downloads 133104 Translation of Self-Inject Contraception Training Objectives Into Service Performance Outcomes
Authors: Oluwaseun Adeleke, Samuel O. Ikani, Simeon Christian Chukwu, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu
Abstract:
Background: Health service providers are offered in-service training periodically to strengthen their ability to deliver services that are ethical, high-quality, timely, and safe. Not all capacity-building courses have successfully resulted in the intended service delivery outcomes, because of poor training content, design, approach, or ambiance. The Delivering Innovations in Selfcare (DISC) project developed the Moment of Truth (MoT) innovation, a proven training model focused on improving the consumer/provider interaction in a way that increases the voluntary uptake of subcutaneous depot medroxyprogesterone acetate (DMPA-SC) self-injection among women who opt for injectable contraception. Methodology: Six months after training on the MoT training manual, the project conducted two intensive rounds of qualitative data collection and triangulation that included provider, client, and community mobilizer interviews, facility observations, and routine program data collection. Respondents were sampled according to a convenience sampling approach, and the data collected were analyzed using a codebook and ATLAS.ti. Providers and clients were interviewed to understand their experience, perspective, attitude, and awareness of DMPA-SC self-injection. Data were collected from 12 health facilities in three states – eight directly trained and four cascade trained. The research team members came together for a participatory analysis workshop to explore and interpret emergent themes. Findings: Quality of service delivery and performance outcomes were observed to be significantly better in facilities whose providers were trained directly by the DISC project than in sites that received indirect training through master trainers. Directly trained facilities recorded self-injection proportions more than twice those of cascade-trained sites. Direct training comprised full-day, standalone didactic and interactive sessions constructed to evoke commitment, passion, and conviction, and to eliminate provider bias and misconceptions, utilizing human interest stories and values clarification exercises. Sessions also built compelling arguments using evidence and national guidelines. The training prioritized demonstration sessions; utilized job aids, particularly videos; strengthened empathetic counseling, allaying client fears and concerns about self-injection; trained providers to position self-injection first; and covered side effects management. Role plays and practicums were particularly useful in enabling providers to retain and internalize new knowledge. These sessions provided experiential learning and the opportunity to apply one's expertise in a supervised environment where supportive feedback is provided in real time. Cascade training was a shorter, abridged form of MoT training that leveraged existing training already planned by master trainers. It was held over a four-hour period and was less emotive, focusing more on foundational DMPA-SC knowledge such as a reorientation to DMPA-SC, comparison of DMPA-SC variants, counseling framework and skills, data reporting, and commodity tracking/requisition – with no facility practicums. Training on self-injection was not as robust, presumably because it was not directed at methods in the contraceptive mix that align with state/organizationally sponsored objectives – in this instance, fostering LARC services.
Conclusion: To achieve better performance outcomes, consideration should be given to providing training that prioritizes practice-based and emotive content. Furthermore, a firm understanding of, and conviction about, the value the training offers improves providers' motivation and commitment to accomplish and surpass service-related performance outcomes.Keywords: training, performance outcomes, innovation, family planning, contraception, DMPA-SC, self-care, self-injection.
Procedia PDF Downloads 85103 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method
Authors: Jiahui You, Kyung Jae Lee
Abstract:
Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction, by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is developed on OpenFOAM, an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity while accounting for fluid-solid mass transfer, and an advection-diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, where the solid-liquid interface is updated by solving the level set distance function and reinitializing it to a signed distance function. A smoothed Heaviside function then gives a smoothed solid-liquid interface over a narrow band of fixed thickness. The stated equations are discretized by the finite volume method, while the reinitialization equation is discretized by the central difference method. The Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical results are compared with the 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. A sensitivity analysis is conducted over the Damköhler number (DaII) and Peclet number (Pe). We categorize the Fe (III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth. Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII<1, precipitation occurs uniformly on the solid surface in both the upstream and downstream directions. When DaII>1, precipitation occurs mainly on the solid surface in the upstream direction. When Pe>1, Fe (II) is transported deep into the pores and precipitates inside them. When Pe<1, Fe (III) precipitation occurs mainly on the solid surface in the upstream direction, and precipitates form easily inside small pore structures. The porosity–permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe (II) dissolution and transport and Fe (III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs while avoiding clogging and wellbore pollution. Understanding Fe (III) precipitation and Fe (II) release and transport behaviors supports highly efficient hydraulic fracturing projects.Keywords: reactive transport, shale, kerogen, precipitation
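For reference, a sketch of the interface-capturing equations summarized above in their standard level-set form; the notation is ours, not taken from the paper. The level set field φ is advected with the interface velocity, reinitialized in pseudo-time τ to a signed distance function, and smoothed with a Heaviside function of half-width ε.

```latex
% Level set advection, reinitialization, and smoothed Heaviside (standard forms).
\begin{align}
  &\frac{\partial \phi}{\partial t} + \mathbf{u}_\Gamma \cdot \nabla \phi = 0
    && \text{(interface advection)}\\
  &\frac{\partial \phi}{\partial \tau} = \mathrm{sgn}(\phi_0)\,(1 - |\nabla \phi|)
    && \text{(reinitialization to } |\nabla\phi| = 1\text{)}\\
  &H_\varepsilon(\phi) =
    \begin{cases}
      0, & \phi < -\varepsilon,\\
      \tfrac{1}{2}\!\left(1 + \tfrac{\phi}{\varepsilon}
        + \tfrac{1}{\pi}\sin\tfrac{\pi \phi}{\varepsilon}\right), & |\phi| \le \varepsilon,\\
      1, & \phi > \varepsilon
    \end{cases}
    && \text{(smoothed Heaviside)}
\end{align}
```

H_ε then switches material properties smoothly between the solid and fluid regions over the narrow band mentioned in the abstract.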
Procedia PDF Downloads 165102 Inflammatory and Cardio Hypertrophic Remodeling Biomarkers in Patients with Fabry Disease
Authors: Margarita Ivanova, Julia Dao, Andrew Friedman, Neil Kasaci, Rekha Gopal, Ozlem Goker-Alpan
Abstract:
In Fabry disease (FD), α-galactosidase A (α-Gal A) deficiency leads to the accumulation of globotriaosylceramide (Gb3) and globotriaosylsphingosine (Lyso-Gb3), triggering a pathologic cascade that drives the severity of organ damage. The heart is one of several organs with high sensitivity to α-Gal A deficiency. A subgroup of patients with significant residual α-Gal A activity and primary cardiac involvement is occasionally referred to as the 'cardiac variant.' Cardiovascular complications are the most frequently encountered, contribute substantially to morbidity, and are the leading cause of premature death in male and female patients with FD. The deposition of Lyso-Gb3 and Gb3 within the myocardium affects cardiac function, with resultant progressive cardiovascular pathology. Gb3 and Lyso-Gb3 accumulation at the cellular level triggers a cascade of events leading to end-stage fibrosis. In cardiac tissue, Lyso-Gb3 deposition is associated with an increased release of inflammatory factors and transforming growth factors. Infiltration of lymphocytes and macrophages into endomyocardial tissue indicates that inflammation plays a significant role in cardiac damage. Moreover, accumulated data suggest that chronic inflammation leads to multisystemic FD pathology even under enzyme replacement therapy (ERT). NF-κB activation plays a subsequent role in the inflammatory response to cardiac dysfunction and advanced heart failure in the general population. TNF-α/NF-κB signaling protects the myocardium in ischemic preconditioning; however, this protective effect depends on the concentration of TNF-α. Thus, we hypothesize that TNF-α is a critical factor in determining the grade of cardiac pathology. Cardiac hypertrophy entails the expansion of the coronary vasculature to maintain a sufficient supply of nutrients and oxygen. Coronary activation of angiogenesis and fibrosis plays a vital role in cardiac vascularization, hypertrophy, and tissue remodeling. We suggest that the interaction between the inflammatory pathways and cardiac vascularization is a bi-directional process controlled by secreted cytokines and growth factors. The coordination of these two processes has never been explored in FD. In a cohort of 40 patients with FD, biomarkers associated with inflammation and cardio hypertrophic remodeling were studied. FD patients were categorized into three groups based on LV mass/BSA, LVEF, and ECG abnormalities: FD with no cardiac complications, FD with moderate cardiac complications, and FD with severe cardiac complications. Serum levels of NF-κB, TNF-α, IL-6, IL-2, MCP-1, IFN-γ, VEGF, IGF-1, TGF-β, and FGF2 were quantified by enzyme-linked immunosorbent assays (ELISA). Among the biomarkers, MCP-1, IFN-γ, VEGF, TNF-α, and TGF-β were elevated in FD patients, and some of these biomarkers also have the potential to correlate with cardiac pathology in FD. Conclusion: The study provides information about the role of inflammatory pathways and biomarkers of cardio hypertrophic remodeling in FD patients. This study will also reveal the mechanisms that link intracellular accumulation of Lyso-Gb3 and Gb3 to the development of cardiomyopathy with myocardial thickening and resultant fibrosis.Keywords: biomarkers, Fabry disease, inflammation, growth factors
Procedia PDF Downloads 83