Search results for: common property resources
668 Analysis of the Development of Mining Companies Social Corporate Responsibility Based on the Rating Score
Authors: Tatiana Ponomarenko, Oksana Marinina, Marina Nevskaya
Abstract:
Modern corporate social responsibility (CSR) is a sphere of multilevel responsibility of a company toward society, represented by various stakeholders. The relevance of CSR management grows due to the active development of socially responsible investing (principles for responsible investment) that takes into account environmental, social, and corporate governance (ESG) factors, and due to the growing attention of the investment community to the long-term stability of companies and the quality of non-financial risk control. The modern approach to CSR strategic management is aimed at the creation of trustful relationships with stakeholders, on the basis of which a contribution to the sustainable development of companies, regions, and national economies is ensured. However, the practical concepts of social responsibility in mining companies differ, which leads to various degrees of application of CSR. A number of companies implement CSR using a traditional (limited) understanding of responsibility toward employees and counterparties, while others understand CSR much more broadly and try to use the levers of efficient cooperation. Since the scope of CSR measures in large mining companies is diverse and characterized by different indices, the study aimed to evaluate CSR efficiency on the basis of a proprietary methodology and to determine the level of development of CSR management in terms of anti-crisis, reactive, and proactive development. The methodology of the research includes analysis of integrated Global Reporting Initiative (GRI) reports of large mining companies; selection of the most representative sectoral agents based on the regularity of issuance and publication of reports; and calculation of indices evaluating the CSR level of the selected companies over time. The evaluation of the CSR level is based on a rating score of changes in standard GRI report indices along economic, environmental, and social directions. Results: Based on the analysis, companies of the fuel-and-energy and metallurgical complexes, the overwhelming majority of which report only three indices out of the wide range of possible SDG (Sustainable Development Goals) indicators, were selected for the study. The evaluation of the scope of CSR at Gazprom, LUKOIL, Metalloinvest, Nornikel, Rosneft, Severstal, SIBUR, and SUEK corresponds to the reactive type of development on the scale of CSR strategic management, which is the middle of the possible values. The chief drawback is that companies, when analyzing the global goals, often choose the goals that relate to their own activities, paying insufficient attention to the interests of stakeholders inside the country. This demonstrates the need to search for more effective mechanisms of CSR control. Acknowledgment: This article was prepared with grant support from the RFBR, project 19-510-44013, 'Development of the concept of mineral resources value formation in the context of sustainable development in resource-oriented economies'.
Keywords: sustainable development, corporate social responsibility, development strategies, efficiency assessment
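To make the rating-score aggregation described in this abstract concrete, a minimal sketch is given below. The GRI indicator codes, the +1/0/-1 change-scoring rule, and the classification thresholds are illustrative assumptions, not the authors' actual scales.

```python
# Hedged sketch of a rating-score aggregation over year-on-year changes in GRI indicators.
# Indicator codes, scoring rule, and thresholds are illustrative assumptions only.

INDICATORS = {
    # direction -> {GRI indicator code: True if "higher is better", False if "lower is better"}
    "economic":      {"GRI 201-1": True,  "GRI 203-2": True},
    "environmental": {"GRI 305-1": False, "GRI 306-2": False},
    "social":        {"GRI 403-9": False, "GRI 404-1": True},
}

def change_score(previous, current, higher_is_better):
    """+1 if the indicator improved year-on-year, -1 if it worsened, 0 if unchanged."""
    if current == previous:
        return 0
    improved = (current > previous) if higher_is_better else (current < previous)
    return 1 if improved else -1

def csr_rating(prev_report, curr_report):
    """Average change scores within each direction, then over all directions."""
    direction_scores = {
        direction: sum(
            change_score(prev_report[code], curr_report[code], better)
            for code, better in codes.items()
        ) / len(codes)
        for direction, codes in INDICATORS.items()
    }
    overall = sum(direction_scores.values()) / len(direction_scores)
    level = "anti-crisis" if overall < -0.3 else "reactive" if overall <= 0.3 else "proactive"
    return direction_scores, overall, level

prev = {"GRI 201-1": 100, "GRI 203-2": 5, "GRI 305-1": 80, "GRI 306-2": 40, "GRI 403-9": 3, "GRI 404-1": 20}
curr = {"GRI 201-1": 110, "GRI 203-2": 5, "GRI 305-1": 85, "GRI 306-2": 35, "GRI 403-9": 2, "GRI 404-1": 25}
print(csr_rating(prev, curr))
```

With these toy numbers the overall score lands in the "proactive" band; in practice the choice of thresholds and indicator weights would itself be part of the methodology.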
Procedia PDF Downloads 135
667 Diagnostic Yield of CT PA and Value of Pre Test Assessments in Predicting the Probability of Pulmonary Embolism
Authors: Shanza Akram, Sameen Toor, Heba Harb Abu Alkass, Zainab Abdulsalam Altaha, Sara Taha Abdulla, Saleem Imran
Abstract:
Acute pulmonary embolism (PE) is a common disease and can be fatal. The clinical presentation is variable and nonspecific, making accurate diagnosis difficult. Testing of patients with suspected acute PE has increased dramatically. However, the overuse of some tests, particularly CT and D-dimer measurement, may not improve care while potentially leading to patient harm and unnecessary expense. CTPA is the investigation of choice for PE. Its easy availability, accuracy, and ability to provide an alternative diagnosis have lowered the threshold for performing it, resulting in its overuse. Guidelines have recommended the use of clinical pretest probability tools such as the Wells score to assess the risk of suspected PE. Unfortunately, implementation of guidelines in clinical practice is inconsistent. This has led to low-risk patients being subjected to unnecessary imaging, exposure to radiation, and possible contrast-related complications. Aim: To study the diagnostic yield of CTPA and the clinical pretest probability of patients according to the Wells score, and to determine whether or not there was an overuse of CTPA in our service. Methods: CT scans done on patients with suspected PE in our hospital from 1st January 2014 to 31st December 2014 were retrospectively reviewed. Medical records were reviewed to study demographics, clinical presentation, and final diagnosis, and to establish whether the Wells score and D-dimer were used correctly in predicting the probability of PE and the need for subsequent CTPA. Results: 100 patients (51 male) underwent CTPA in the time period. Mean age was 57 years (24-91 years). The majority of patients presented with shortness of breath (52%). Other presenting symptoms included chest pain (34%), palpitations (6%), collapse (5%), and haemoptysis (5%). A D-dimer test was done in 69%. Overall, the Wells score was low (<2) in 28%, moderate (2 to 6) in 47%, and high (>6) in 15% of patients. The Wells score was documented in the medical notes of only 20% of patients. PE was confirmed in 12% of patients (8 male); 4 had bilateral PEs. In the high-risk group (Wells >6, n=15), there were 5 diagnosed PEs; in the moderate-risk group (Wells 2 to 6, n=47), there were 6; and in the low-risk group (Wells <2, n=28), one case of PE was confirmed. CT scans negative for PE showed pleural effusion in 30, consolidation in 20, atelectasis in 15, and a pulmonary nodule in 4 patients. 31 scans were completely normal. Conclusion: The yield of CT for pulmonary embolism was low in our cohort at 12%. A significant number of our patients who underwent CTPA had a low Wells score. This suggests that CTPA is overutilized in our institution. The Wells score was poorly documented in medical notes. CTPA was able to detect alternative pulmonary abnormalities explaining the patient's clinical presentation. CTPA requires concomitant pretest clinical probability assessment to be an effective diagnostic tool for confirming or excluding PE. Clinicians should use validated clinical prediction rules to estimate pretest probability in patients in whom acute PE is being considered. Combining Wells scores with clinical and laboratory assessment may reduce the need for CTPA.
Keywords: CT PA, D-dimer, pulmonary embolism, Wells score
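The risk bands above follow the traditional three-tier Wells rule. As a worked illustration of how that pretest probability is computed, here is a short sketch using the commonly published criteria and point values; it is provided for clarity and is not code or a scoring form from this study.

```python
# Hedged sketch of the three-tier Wells score for PE (low <2, moderate 2-6, high >6),
# using the commonly published criteria and points; illustration only.

WELLS_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilisation_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "haemoptysis": 1.0,
    "active_malignancy": 1.0,
}

def wells_score(findings):
    """Sum the points for each criterion that is present (findings: dict of bools)."""
    return sum(points for name, points in WELLS_CRITERIA.items() if findings.get(name, False))

def risk_category(score):
    """Map the score onto the three probability bands used in the abstract."""
    if score < 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

# Example: tachycardia + haemoptysis + recent surgery -> 1.5 + 1.0 + 1.5 = 4.0 (moderate)
patient = {"heart_rate_over_100": True, "haemoptysis": True, "immobilisation_or_recent_surgery": True}
score = wells_score(patient)
print(score, risk_category(score))
```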
Procedia PDF Downloads 233
666 Bioinformatic Prediction of Hub Genes by Analysis of Signaling Pathways, Transcriptional Regulatory Networks and DNA Methylation Pattern in Colon Cancer
Authors: Ankan Roy, Niharika, Samir Kumar Patra
Abstract:
An anomalous nexus of complex topological assemblies and spatiotemporal epigenetic choreography at chromosomal territories may form the most sophisticated regulatory layer of gene expression in cancer. Colon cancer is one of the leading malignant neoplasms of the lower gastrointestinal tract worldwide. There is still a paucity of information about the complex molecular mechanisms of colonic cancerogenesis. Bioinformatic prediction and analysis help to identify essential genes and significant pathways for monitoring and conquering this deadly disease. The present study investigates and explores potential hub genes as biomarkers and effective therapeutic targets for colon cancer treatment. Gene expression profile datasets of colon cancer patient samples, namely GSE44076, GSE20916, and GSE37364, were downloaded from the Gene Expression Omnibus (GEO) database and thoroughly screened using the GEO2R tool and Funrich software to find common differentially expressed genes (DEGs). Other approaches, including Gene Ontology (GO) and KEGG pathway analysis, Protein-Protein Interaction (PPI) network construction and hub gene investigation, Overall Survival (OS) analysis, gene correlation analysis, methylation pattern analysis, and hub gene-transcription factor regulatory network construction, were performed and validated using various bioinformatics tools. Initially, we identified 166 DEGs, including 68 up-regulated and 98 down-regulated genes. Up-regulated genes are mainly associated with cytokine-cytokine receptor interaction, the IL17 signaling pathway, ECM-receptor interaction, focal adhesion, and the PI3K-Akt pathway. Down-regulated genes are enriched in metabolic pathways, retinol metabolism, steroid hormone biosynthesis, and bile secretion. From the protein-protein interaction network, thirty hub genes with high connectivity were selected using the MCODE and cytoHubba plugins. Survival analysis, expression validation, correlation analysis, and methylation pattern analysis were further verified using TCGA data. Finally, we predicted COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as potential master regulators in colonic cancerogenesis. Moreover, our experimental data highlight that disruption of the lipid raft and RAS/MAPK signaling cascade affects this gene hub at the mRNA level. We identified COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as determinant hub genes in colon cancer progression. They can be considered as biomarkers for diagnosis and promising therapeutic targets in colon cancer treatment. Additionally, our experimental data indicate that signaling pathways act as the connecting link between the membrane hub and the gene hub.
Keywords: hub genes, colon cancer, DNA methylation, epigenetic engineering, bioinformatic predictions
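The step of intersecting DEGs across several GEO datasets can be illustrated with a minimal sketch like the one below. The local file names, column names, and cut-offs are assumptions for illustration (GEO2R exports typically provide similar columns such as logFC and adj.P.Val); this is not the study's own pipeline.

```python
# Hedged sketch: find DEGs shared by several GEO2R result tables.
# File names, column names, and thresholds are illustrative assumptions.

import pandas as pd

def degs(path, lfc=1.0, padj=0.05):
    """Return sets of up- and down-regulated gene symbols from one GEO2R result table."""
    t = pd.read_csv(path, sep="\t")
    sig = t[t["adj.P.Val"] < padj]
    up = set(sig.loc[sig["logFC"] >= lfc, "Gene.symbol"].dropna())
    down = set(sig.loc[sig["logFC"] <= -lfc, "Gene.symbol"].dropna())
    return up, down

datasets = ["GSE44076.tsv", "GSE20916.tsv", "GSE37364.tsv"]  # hypothetical local exports
results = [degs(p) for p in datasets]

common_up = set.intersection(*[up for up, _ in results])
common_down = set.intersection(*[down for _, down in results])
print(len(common_up), "common up-regulated;", len(common_down), "common down-regulated")
```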
Procedia PDF Downloads 132
665 Strategic Interventions to Address Health Workforce and Current Disease Trends, Nakuru, Kenya
Authors: Paul Moses Ndegwa, Teresia Kabucho, Lucy Wanjiru, Esther Wanjiru, Brian Githaiga, Jecinta Wambui
Abstract:
Health outcomes have improved in the country since 2013, following the adoption of the new constitution in Kenya and devolved governance, with administration and health planning functions transferred to county governments. The 2018-2022 development agenda prioritized universal healthcare coverage, food security, and nutrition; however, the emergence of COVID-19 and the increase in non-communicable diseases pose a challenge and a constraint on an already overwhelmed health system. A study was conducted from July to November 2021 to establish the key challenges in achieving universal healthcare coverage within the county and best practices for improved non-communicable disease control. Fourteen health workers, including nurses, doctors, public health officers, clinical officers, and pharmaceutical technologists, were purposively engaged to provide critical information through questionnaires administered by a trained duo observing ethical procedures on confidentiality. Data analysis: Communicable diseases are major causes of morbidity and mortality. Non-communicable diseases contribute to approximately 39% of deaths. More than 45% of the population does not have access to safe drinking water. The study noted geographic inequality with respect to the distribution and use of health resources, including competing non-health priorities. 56% of health workers are nurses, 13% clinical officers, 7% doctors, 9% public health workers, and 2% pharmaceutical technologists. Poor-quality data limit the validity of disease-burden estimates and research activities. Risk factors include unsafe water, sanitation, hand washing, unsafe sex, and malnutrition. A key challenge in achieving universal healthcare coverage is the rise in the relative contribution of non-communicable diseases. Recommendations include the following: improve targeted disease control with effective and equitable resource allocation; develop control mechanisms for highly infectious diseases; improve the quality of data for decision making; strengthen electronic data-capture systems; increase investments in the health workforce to improve health service provision and the achievement of universal health coverage; create a favorable environment to retain health workers; fill staffing gaps resulting in shortages of doctors (7%); develop a multi-sectional approach to health workforce planning and management; invest in mechanisms that generate contextual evidence on current and future health workforce needs; ensure retention of a qualified, skilled, and motivated health workforce; and deliver integrated people-centered health services.
Keywords: multi-sectional approach, equity, people-centered, health workforce retention
Procedia PDF Downloads 114
664 The Last National Anthem of the Ottoman Empire: Musical Code, Sociopolitical Control and Historical Realities
Authors: Nuray Ocakli
Abstract:
The 19th century was an era of changes and transformations for the Ottoman Empire. The first sultan of this century, Mahmud II (1808-1839), was the architect of Ottoman modernization and fundamental changes. The most radical of these was abolishing the Janissary corps and the traditional Ottoman military band, the Mehteran. Mahmud II introduced modernized military corps as well as western-style royal and military music. Mahmud II invited the Italian composer Giuseppe Donizetti to establish a modern military band for the new army and to compose the Sultan's royal anthem. In 1828, Donizetti composed the first western-style Ottoman anthem, the Mahmudiyye anthem. During the 19th and early 20th centuries, four other western-style Ottoman anthems (Aziziyye, Mecidiyye, Hamidiyye, and Resadiyye) were composed, but the last anthem adopted in the reign of Mehmet VI (r. 1918-1922) was again the Mahmudiyye anthem. This paper aims to analyze the Mahmudiyye anthem, composed as a royal anthem in 1828 but adopted as the national anthem in 1918. The research questions of this paper are as follows: What were the characteristics of the Mahmudiyye anthem that made it the best choice of the last sultan for the last national anthem? Did the last sultan have specific reasons to adopt the Mahmudiyye anthem rather than any of the other four anthems? The musical characteristics of the anthem are analyzed based on Cerulo's empirical research. Cerulo examined the musical structures of 124 western-style anthems from 150 countries in the 1580-1976 period. Cerulo's research categorizes the musical codes of anthems as basic or embellished in relation to the level of sociopolitical control. Musical analysis of the anthem indicates that its basic musical code implies a high level of socio-political control during the reigns of both Mahmud II and Mehmet VI. Historical analysis of each sultan's reign shows that both sultans were autocratic. Mahmud II designed authoritarian government policies to suppress possible reactions against his reforms. On the other hand, the authoritarian policies of Mehmet VI are related to the domestic and international political conditions following World War I. Historical analysis of the research questions shows that, compared to the other western-style Ottoman anthems, the Mahmudiyye anthem remained the only neutral anthem, symbolizing the modernization and westernization of the empire. The other anthems were all symbols of failed ideologies such as Ottomanism, pan-Islamism, and pan-Turkism. In the early 20th century, only a few common things remained among the diverse communities of the Ottoman Empire: the land they shared as a homeland and the idea of modernization to save the homeland. For this reason, the last sultan, Mehmet VI, adopted the Mahmudiyye anthem as the memory of a unified empire under the rule of a powerful and modernist sultan. The last sultan's reign lasted just four years, and the Ottoman Empire disintegrated in 1922, but his adoption of the Mahmudiyye anthem indicates his unifying policies and his attitude toward saving the empire and the caliphate.
Keywords: Mahmudiyye anthem, musical code, national anthem, Ottoman Empire, royal anthem
Procedia PDF Downloads 205
663 Carbon Aerogels with Tailored Porosity as Cathode in Li-Ion Capacitors
Authors: María Canal-Rodríguez, María Arnaiz, Natalia Rey-Raap, Ana Arenillas, Jon Ajuria
Abstract:
The constant demand for electrical energy, as well as the increase in environmental concern, leads to the necessity of investing in clean and eco-friendly energy sources, which implies the development of enhanced energy storage devices. Li-ion batteries (LIBs) and electrical double-layer capacitors (EDLCs) are the most widespread energy storage systems. Batteries are able to store high energy densities, in contrast to capacitors, whose main strengths are high power density and long cycle life. The combination of both technologies gave rise to Li-ion capacitors (LICs), which offer all these advantages in a single device. This is achieved by combining a capacitive, supercapacitor-like positive electrode with a faradaic, battery-like negative electrode. Due to their abundance and affordability, dual carbon-based LICs are nowadays the common technology. Normally, an activated carbon (AC) is used as the EDLC-like electrode, while graphite is the material commonly employed as the anode. LICs are potential systems for applications in which high energy and power densities are required, such as kinetic energy recovery systems. Although these devices are already on the market, some drawbacks, like the limited power delivered by graphite or the energy-limiting nature of AC, must be solved to trigger their use. Focusing on the anode, one possibility could be to replace graphite with hard carbon (HC). The better rate capability of the latter increases the power performance of the device. Moreover, the disordered carbonaceous structure of HCs enables storing twice the theoretical capacity of graphite. With respect to the cathode, ACs are characterized by their high volume of micropores, in which the charge is stored. Nevertheless, they normally do not show mesopores, which are really important, mainly at high C-rates, as they act as transport channels for the ions to reach the micropores. Usually, the porosity of ACs cannot be tailored, as it strongly depends on the precursor employed to obtain the final carbon. Moreover, they are not characterized by a high electrical conductivity, which is an important property for good performance in energy storage applications. Possible candidates to substitute ACs are carbon aerogels (CAs). CAs are materials that combine high porosity with great electrical conductivity, normally opposing characteristics in carbon materials. Furthermore, their porous properties can be tailored quite accurately according to the requirements of the application. In the present study, CAs with controlled porosity were obtained from the polymerization of resorcinol and formaldehyde by microwave heating. By varying the synthesis conditions, mainly the amount of precursors and the pH of the precursor solution, carbons with different textural properties were obtained. The way the porous characteristics affect the performance of the cathode was studied by means of a half-cell configuration. The material with the best performance was evaluated as the cathode in a LIC with a hard carbon as the anode. An analogous full LIC made with a highly microporous commercial cathode was also assembled for comparison purposes.
Keywords: li-ion capacitors, energy storage, tailored porosity, carbon aerogels
Procedia PDF Downloads 167
662 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks
Authors: Mazarine Roquet, Pierre Dewallef
Abstract:
The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to poor insulation of the building stock, but certainly because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures, ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these district heating networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and use mainly wind, solar, or geothermal sources for the remaining heat supply. Fifth-generation networks operating between 35°C and 15°C offer the possibility to decrease transport losses even further, to increase the share of waste heat recovery, and to use electricity from renewable resources through heat pumps to generate low-temperature heat. The main objective of this contribution is to demonstrate, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one in order to decarbonise the heat supply of the building stock. The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated with monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO2 emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared in terms of global energy savings. The developed model can be used to assess the potential for energy and CO2 emission savings when retrofitting an existing network or when designing a new one.
Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating
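To illustrate why lowering the network temperature matters for distribution losses, a back-of-the-envelope sketch is given below; the heat-loss coefficient, pipe length, ground temperature, and supply/return temperatures are illustrative assumptions, not parameters of the Sart Tilman model.

```python
# Hedged sketch: steady-state pipe heat loss is roughly proportional to the difference
# between the water temperature and the surrounding ground temperature. All numbers
# below are illustrative assumptions, not data from the case study.

def distribution_loss_kw(t_supply_c, t_return_c, t_ground_c=10.0,
                         u_w_per_m_k=0.4, length_m=10_000.0):
    """Approximate heat loss of supply + return pipes [kW]."""
    t_mean = 0.5 * (t_supply_c + t_return_c)
    return u_w_per_m_k * 2 * length_m * (t_mean - t_ground_c) / 1000.0

third_gen = distribution_loss_kw(t_supply_c=90.0, t_return_c=60.0)   # ~3rd generation
fifth_gen = distribution_loss_kw(t_supply_c=35.0, t_return_c=15.0)   # ~5th generation
print(f"3rd gen ~ {third_gen:.0f} kW, 5th gen ~ {fifth_gen:.0f} kW, "
      f"reduction ~ {100 * (1 - fifth_gen / third_gen):.0f}%")
```

The sketch deliberately ignores the counterweight highlighted in the abstract: at smaller temperature differences, larger mass flow rates (and hence more pumping electricity) are needed to deliver the same heat.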
Procedia PDF Downloads 85
661 Natural Fibers Design Attributes
Authors: Brayan S. Pabón, R. Ricardo Moreno, Edith Gonzalez
Abstract:
Within the wide set of Colombian natural fibers is the banana stem leaf, known as Calceta de Plátano, a material present in several regions of the country; it is a fiber extracted from the pseudostem of the banana plant (Musa paradisiaca) during regular maintenance. Colombia produced 2.8 million tons in 2007 and 2008, corresponding to 8.2% of international production, a number that is growing. This material was selected for study because it is not being used by farmers, as it is perceived as waste from the banana harvest and a pest-propagation agent inside the plantation. In addition, the Calceta does not have industrial applications in Colombia, since there is not enough concrete knowledge about the properties of the material and the possible applications it could have. Based on this situation, industrial design is used as a link between the properties of the material and the need to transform it into industrial products for the market. Therefore, the project identifies potential design attributes that the banana stem leaf can have for product development. The methodology was divided into two main parts. Methodology for material recognition: data collection, inquiring into the craftsmen's experience and the bibliography; knowledge in practice, with controlled experiments and validation tests; and creation of design attributes and a material profile according to the knowledge developed. Design methodology: selection of application fields, exploring the use of the attributes and their relation to product functions; evaluation of the possible fields and selection of the optimum application; and a design process with sketching, ideation, and product development. Different protocols were elaborated to qualitatively determine some material properties of the Calceta and whether they could be designated as design attributes. Once the validation protocols were defined, performed, and analyzed, 25 design attributes were identified and classified into four attribute categories (environmental, functional, aesthetic, and technical), forming the material profile. Then, 15 application fields were defined based on the relation between product functions and the use of the Calceta attributes. Those fields were evaluated to measure how much the functional attributes are being used. After the field evaluation, a final field was defined.
Keywords: banana stem leaf, Calceta de Plátano, design attributes, natural fibers, product design
Procedia PDF Downloads 260
660 Unifying RSV Evolutionary Dynamics and Epidemiology Through Phylodynamic Analyses
Authors: Lydia Tan, Philippe Lemey, Lieselot Houspie, Marco Viveen, Darren Martin, Frank Coenjaerts
Abstract:
Introduction: Human respiratory syncytial virus (hRSV) is the leading cause of severe respiratory tract infections in infants under the age of two. Genomic substitutions and the related evolutionary dynamics of hRSV have a great influence on virus transmission behavior. The evolutionary patterns formed are due to a precarious interplay between the host immune response and RSV, thereby selecting the most viable and least immunogenic strains. Studying genomic profiles can teach us which genes, and consequently which proteins, play an important role in RSV survival and transmission dynamics. Study design: In this study, genetic diversity and evolutionary rate analyses were conducted on 36 RSV subgroup B whole-genome sequences and 37 subgroup A genome sequences. Clinical RSV isolates were obtained from nasopharyngeal aspirates and swabs of children between 2 weeks and 5 years of age. These strains were collected during epidemic seasons from 2001 to 2011 in the Netherlands and Belgium and sequenced by either conventional or 454 sequencing. Sequences were analyzed for genetic diversity, recombination events, synonymous/non-synonymous substitution ratios, and epistasis, and the translational consequences of mutations were mapped to known 3D protein structures. We used Bayesian statistical inference to estimate the rate of RSV genome evolution and the rate of variability across the genome. Results: The A and B profiles were described in detail and compared to each other. Overall, the majority of the RSV genome is highly conserved among all strains. The attachment protein G was the most variable protein, and its gene had, similar to the non-coding regions in RSV, elevated (two-fold) substitution rates compared to other genes. In addition, the G gene has been identified as the major target for diversifying selection. Overall, less gene and protein variability was found within RSV-B compared to RSV-A, and most protein variation between the subgroups was found in the F, G, SH, and M2-2 proteins. For the F protein, mutations and correlated amino acid changes are largely located in the F2 ligand-binding domain. The small hydrophobic protein, phosphoprotein, and nucleoprotein are the most conserved proteins. The evolutionary rates were similar in both subgroups (A: 6.47E-04, B: 7.76E-04 substitutions/site/yr), but estimates of the time to the most recent common ancestor were much lower for RSV-B (B: 19, A: 46.8 yrs), indicating that there is more turnover in this subgroup. Conclusion: This study provides a detailed description of whole RSV genome mutations, their effect on translation products, and the first estimate of the tempo of RSV genome evolution. The immunogenic G protein seems to require high substitution rates in order to select less immunogenic strains, while other, conserved proteins are most likely essential to preserve RSV viability. The resulting G gene variability makes its protein a less interesting target for RSV intervention methods. The more conserved RSV F protein, with less antigenic epitope shedding, is therefore more suitable for developing therapeutic strategies or vaccines.
Keywords: drug target selection, epidemiology, respiratory syncytial virus, RSV
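To give a sense of scale for the reported rates, the short calculation below converts substitutions per site per year into expected genome-wide substitutions per year. The genome length used (~15,200 nt) is an assumption for the arithmetic; the rates are the ones reported in the abstract.

```python
# Hedged illustration of what the reported evolutionary rates imply at whole-genome scale.
GENOME_LENGTH_NT = 15_200          # approximate hRSV genome size (assumption)
rates = {"RSV-A": 6.47e-4, "RSV-B": 7.76e-4}   # substitutions/site/year (from the abstract)

for subgroup, rate in rates.items():
    subs_per_genome_per_year = rate * GENOME_LENGTH_NT
    print(f"{subgroup}: ~{subs_per_genome_per_year:.1f} substitutions per genome per year")
# Roughly ten genome-wide changes per year for each subgroup, with the slightly higher
# RSV-B figure consistent with the faster turnover inferred for that subgroup.
```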
Procedia PDF Downloads 414
659 Starting the Hospitalization Procedure with a Medicine Combination in the Cardiovascular Department of the Imam Reza (AS) Mashhad Hospital
Authors: Maryamsadat Habibi
Abstract:
Objective: Pharmaceutical errors are avoidable occurrences that can result in inappropriate pharmaceutical use, patient harm, treatment failure, increased hospital costs and length of stay, and other outcomes that affect both the individual receiving treatment and the healthcare provider. This study aimed to perform medication reconciliation in the cardiovascular ward of Imam Reza Hospital in Mashhad, Iran, and to evaluate the prevalence of medication discrepancies between the best possible medication list created for the patient by the pharmacist and the medication order of the treating physician. Materials & Methods: Ninety-seven patients in the cardiovascular ward of Imam Reza Hospital in Mashhad were the subjects of a cross-sectional study from June to September 2021. After giving their informed consent and being admitted to the ward, all patients with at least one underlying condition and at least two medications being taken at home were included in the study. A medication reconciliation form was used to record patient demographics and medication histories during the first 24 hours of admission, and the information was compared with the physicians' orders. The physician then identified medication discrepancies between the two lists and double-checked them to separate intentional from unintentional discrepancies. Finally, using SPSS software version 22, the prevalence of medication discrepancies and the relationship between different types of discrepancies and various variables were determined. Results: The average age of the participants in this study was 57.69 ± 15.84 years, with 57.7% men and 42.3% women. Among these patients, 95.9% encountered at least one medication discrepancy, and 58.9% experienced at least one unintentional drug discontinuation. Out of the 659 medications registered in the study, 399 cases (60.54%) had discrepancies, of which 161 cases (40.35%) involved the intentional stopping of a medication, 123 cases (30.82%) involved the unintentional stopping of a medication, and 115 cases (28.82%) involved the continued use of a medication with a dose adjustment. Additionally, the categories of cardiovascular and gastrointestinal medications were found to have the highest numbers of discrepancies in the current study. Furthermore, there was no correlation between the frequency of medication discrepancies and the following variables: age, ward, date of visit, and type and number of underlying diseases (P=0.13, P=0.61, P=0.72, P=0.82, and P=0.44, respectively). On the other hand, there was a statistically significant correlation between the prevalence of medication discrepancies and both the number of medications taken at home (P=0.037) and gender (P=0.029). The results of this study revealed that 96% of patients admitted to the cardiovascular unit at Imam Reza Hospital had at least one medication discrepancy, typically an intentional drug discontinuation. According to the study's findings, the medication reconciliation method offers great potential for identifying and correcting various medication discrepancies and avoiding prescription errors among patients admitted to Imam Reza Hospital's cardiovascular ward. As a result, it is essential to carry out a precise assessment to achieve the best treatment outcomes and avoid unintended medication discontinuation, unwanted drug-related events, and drug interactions between the patient's home medications and those prescribed in the hospital.
Keywords: drug combination, drug side effects, drug incompatibility, cardiovascular department
Procedia PDF Downloads 93
658 Evaluating an Educational Intervention to Reduce Pesticide Exposure Among Farmers in Nigeria
Authors: Gift Udoh, Diane S. Rohlman, Benjamin Sindt
Abstract:
BACKGROUND: There is concern regarding the widespread use of pesticides and its impacts on public health. Farmers in Nigeria frequently apply pesticides, including organophosphate pesticides, which are known neurotoxicants. They receive little guidance on how much to apply or information about safe handling practices. Pesticide poisoning is one of the major hazards that farmers face in Nigeria. Farmers continue to use highly neurotoxic pesticides for agricultural activities. Because farmers receive little or no information on safe handling and how much to apply, they continue to develop severe and mild illnesses caused by high exposures to pesticides. The project aimed to reduce pesticide exposure among rural farmers in Nigeria by identifying hazards associated with pesticide use and by developing and pilot-testing training to reduce exposures to pesticides, utilizing the hierarchy of controls. METHODS: Information on pesticide knowledge, behaviors, barriers to safety, and prevention methods was collected from farmers in Nigeria through workplace observations, questionnaires, and interviews. Pre- and post-surveys were used to measure farmers' knowledge before and after the delivery of pesticide safety training. Training topics included the benefits and risks of using pesticides, routes of exposure and health effects, a pesticide label activity, use and selection of PPE, ways to prevent exposure, and information on local resources. The training was evaluated among farmers, and changes in knowledge, attitudes, and behaviors were collected prior to and following the training. RESULTS: The training was administered to 60 farmers with a mean age of 35 and a range of farming experience (<1 year to >50 years). There was an overall increase in knowledge after the training. In addition, farmers perceived a greater immediate risk from exposure to pesticides, and their perception of their personal risk increased. For example, farmers believed that pesticide risk is greater to children than to adults, recognized that just because a pesticide is put on the market does not mean it is safe, and were more confident that they could get advice about handling pesticides. Also, there was greater awareness about behaviors that can increase their exposure (mixing pesticides with bare hands, eating food in the field, not washing hands before eating after applying pesticides, walking in recently sprayed fields, splashing pesticides on their clothes, and pesticide storage). CONCLUSION: These results build on existing evidence from a 2022 article highlighting the need for pesticide safety training in Nigeria, which suggested that pesticide safety educational programs should be community-based, grassroots-style, and family-oriented. Educating farmers on agricultural safety while letting them share their experiences with their peers is an effective way of creating awareness of the dangers associated with handling pesticides. Also, in rural communities, especially in Nigeria, pesticide safety training may not reach some locations, so intentionally scouting rural farming communities and delivering pesticide safety training there will improve knowledge of pesticide hazards. There is a need for pesticide information centers situated in rural farming communities or agro-supply stores to give rural farmers access to information.
Keywords: pesticide exposure, pesticide safety, Nigeria, rural farming, pesticide education
Procedia PDF Downloads 180
657 Empowering Indigenous Epistemologies in Geothermal Development
Authors: Te Kīpa Kēpa B. Morgan, Oliver W. Mcmillan, Dylan N. Taute, Tumanako N. Fa'aui
Abstract:
Epistemologies are ways of knowing. Indigenous Peoples are aware that they do not perceive and experience the world in the same way as others. So it is important, when empowering Indigenous epistemologies such as that of the New Zealand Māori, to also be able to represent a scientific understanding within the same analysis. A geothermal development assessment tool has been developed by adapting the Mauri Model Decision Making Framework. Mauri is a metric capable of representing the change in the life-supporting capacity of things and collections of things. The Mauri Model is a method of grouping mauri indicators as dimension averages in order to allow holistic assessment and also to conduct sensitivity analyses for the effect of worldview bias. R Shiny is the coding platform used for this Vision Mātauranga research, which has created an expert decision support tool (DST) that combines a stakeholder assessment of worldview bias with an impact assessment of mauri-based indicators to determine the sustainability of proposed geothermal development. The initial intention was to develop guidelines for quantifying mātauranga Māori impacts related to geothermal resources. To do this, three typical scenarios were considered: a resource owner wishing to assess the potential for new geothermal development; another party wishing to assess the environmental and cultural impacts of the proposed development; and an assessment that focuses on the holistic sustainability of the resource, including its surface features. Indicator sets and measurement thresholds considered necessary for each assessment context were developed, and these have been grouped to represent four mauri dimensions that mirror the four well-being criteria used for resource management in Aotearoa, New Zealand. Two case studies have been conducted to test the DST's suitability for quantifying mātauranga Māori and other biophysical factors related to a geothermal system. This involved estimating mauri0meter values for physical features such as temperature, flow rate, frequency, and colour, and developing indicators to also quantify qualitative observations about the geothermal system made by Māori. A retrospective analysis has then been conducted to verify different understandings of the geothermal system. The case studies found that the expert DST is useful for geothermal development assessment, especially where hapū (Indigenous sub-tribal groupings) are conflicted regarding the benefits and disadvantages of their own and others' geothermal developments. These results have been supplemented with evaluations of the cumulative impacts of geothermal developments experienced by different parties, using integration techniques applied to the time-history curve of the worldview-bias-weighted expert DST score plotted against the mauri0meter score. Cumulative impacts represent the change in resilience or potential of geothermal systems, which directly assists with the holistic interpretation of change from an Indigenous Peoples' perspective.
Keywords: decision support tool, holistic geothermal assessment, indigenous knowledge, mauri model decision-making framework
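The core aggregation in the Mauri Model, dimension averages combined with worldview-bias weights, can be sketched as follows. The dimension names, indicator scores, and stakeholder weights below are illustrative assumptions, not values from the case studies or the R Shiny tool.

```python
# Hedged sketch of the Mauri Model aggregation: indicators scored on the mauri0meter
# scale (-2 to +2), averaged per dimension, then combined with worldview-bias weights.
# All names and numbers are illustrative assumptions.

def dimension_average(scores):
    """Average the mauri0meter scores (-2..+2) of the indicators in one dimension."""
    return sum(scores) / len(scores)

def weighted_mauri(dimension_scores, worldview_weights):
    """Combine dimension averages using a stakeholder's worldview-bias weights (sum to 1)."""
    return sum(worldview_weights[d] * dimension_average(s) for d, s in dimension_scores.items())

dimension_scores = {   # hypothetical assessment of a proposed geothermal development
    "environmental": [-1, 0, -2],
    "cultural":      [1, -1],
    "social":        [0, 1],
    "economic":      [2, 1, 1],
}
stakeholder_a = {"environmental": 0.4, "cultural": 0.3, "social": 0.2, "economic": 0.1}
stakeholder_b = {"environmental": 0.1, "cultural": 0.1, "social": 0.2, "economic": 0.6}

# Sensitivity to worldview bias: the same indicator scores can flip the overall verdict.
print(round(weighted_mauri(dimension_scores, stakeholder_a), 2))   # negative overall
print(round(weighted_mauri(dimension_scores, stakeholder_b), 2))   # positive overall
```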
Procedia PDF Downloads 187
656 Working Memory and Audio-Motor Synchronization in Children with Different Degrees of Central Nervous System's Lesions
Authors: Anastasia V. Kovaleva, Alena A. Ryabova, Vladimir N. Kasatkin
Abstract:
Background: The simplest form of entrainment to a sensory (typically auditory) rhythmic stimulus involves perceiving and synchronizing movements with an isochronous beat with one level of periodicity, such as that produced by a metronome. Children with pediatric cancer are usually treated with chemo- and radiotherapy. Because of such treatment, psychologists and health professionals report declines in cognitive and motor abilities in cancer patients. The purpose of our study was to measure working memory characteristics in association with audio-motor synchronization tasks, which also involve some memory resources, in children with different degrees of central nervous system lesions: posterior fossa tumors, acute lymphoblastic leukemia, and healthy controls. Methods: Our sample consisted of three groups of children: children treated for posterior fossa tumors (PFT group, n=42, mean age 12.23), children treated for acute lymphoblastic leukemia (ALL group, n=11, mean age 11.57), and neurologically healthy children (control group, n=36, mean age 11.67). Participants were tested for working memory characteristics with the Cambridge Neuropsychological Test Automated Battery (CANTAB). Pattern recognition memory (PRM) and spatial working memory (SWM) tests were applied. Outcome measures of the PRM test include the number and percentage of correct trials and latency (speed of the participant's response), and measures of SWM include errors, strategy, and latency. In the synchronization tests, the instruction was to tap out a regular beat (40, 60, 90, and 120 beats per minute) in synchrony with the rhythmic sequences that were played. This meant that, for the sequences with an isochronous beat, participants were required to tap along with every auditory event. Variability of inter-tap intervals and deviations of children's taps from the metronome were assessed. Results: Analysis of variance revealed a significant effect of group (ALL, PFT, and control) on such parameters as short-term PRM, SWM strategy, and errors. Healthy controls demonstrated more correctly retained elements and a better working memory strategy compared to the cancer patients. Interestingly, ALL patients chose a poorer strategy but committed significantly fewer errors in the SWM test than the PFT and control groups did. As to rhythmic ability, significant associations with working memory were found only for the 40 bpm rhythm: the less variable a child's inter-tap intervals were, the more elements in memory he or she could retain. The ability for audio-motor synchronization may be related to working memory processes mediated by the prefrontal cortex, whereby each sensory event is actively retrieved and monitored during rhythmic sequencing. Conclusion: Our results suggest that working memory, tested with appropriate cognitive methods, is associated with the ability to synchronize movements with rhythmic sounds, especially at the slowest tempo tested (40 beats per minute, i.e., 1.5-second inter-beat intervals).
Keywords: acute lymphoblastic leukemia (ALL), audio-motor synchronization, posterior fossa tumor, working memory
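The tapping measures mentioned above, variability of inter-tap intervals and deviation from the metronome, can be computed from tap timestamps as sketched below. The tap times are invented for illustration; the study's own processing pipeline is not described in the abstract.

```python
# Hedged sketch: inter-tap-interval (ITI) variability and mean tap-beat asynchrony
# from a list of tap timestamps. Example data are hypothetical.

import statistics

def tapping_measures(tap_times_s, tempo_bpm):
    """Return (mean ITI, SD of ITI, coefficient of variation, mean tap-beat asynchrony)."""
    beat_interval = 60.0 / tempo_bpm
    itis = [b - a for a, b in zip(tap_times_s, tap_times_s[1:])]
    mean_iti = statistics.mean(itis)
    sd_iti = statistics.stdev(itis)
    beats = [i * beat_interval for i in range(len(tap_times_s))]
    asynchrony = statistics.mean(t - b for t, b in zip(tap_times_s, beats))
    return mean_iti, sd_iti, sd_iti / mean_iti, asynchrony

taps = [0.02, 1.55, 3.01, 4.49, 6.05, 7.52]   # hypothetical taps at 40 bpm (1.5 s beats)
print(tapping_measures(taps, tempo_bpm=40))
```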
Procedia PDF Downloads 300
655 Diversity and Inclusion in Focus: Cultivating a Sense of Belonging in Higher Education
Authors: Naziema Jappie
Abstract:
South Africa is a diverse nation, but one with many challenges. The fundamental changes in the political, economic, and educational domains in South Africa in the late 1990s affected the South African community profoundly. In higher education, experiences of discrimination and bias are detrimental to the sense of belonging of staff and students. It is therefore important to cultivate an appreciation of diversity and inclusion. To bridge common understandings with the reality of racial inequality, we must understand the ways in which senior and executive leadership at universities think about social justice issues relating to diversity and inclusion and contextualize these within the current post-democracy landscape. The advancement of social justice issues and initiatives in South African higher education is a slow process. The focus is to highlight how, and to what extent, initiatives or practices around campus diversity and inclusion have been considered and made part of mainstream intellectual and academic conversations in South Africa. This involves an examination of the social and epistemological conditions of possibility for meaningful research and curriculum practices, staff and student recruitment, and student access and success in addressing the challenges posed by social diversity on campuses. Methodology: In this study, university senior and executive leadership were interviewed about their perceptions and advancement of social justice, and the buffering effects of diverse and inclusive peer interactions and institutional commitment on the relationship between discrimination-bias and the sense of belonging for staff and students at the institutions were examined. The paper further explores diversity and inclusion initiatives at the three institutions using a Critical Race Theory approach in conjunction with a literature review on social justice, with a special focus on diversity and inclusion. Findings: This paper draws on research findings that demonstrate the need to address the social justice issues of diversity and inclusion in the South African higher education context. The reason for this is so that university leaders can live out their experiences and values as they work to transform students into accountable and responsible individuals. Documents were selected for review with the intent of illustrating how diversity and inclusion work being done across an institution can shape the experiences of previously disadvantaged persons at these institutions. The research has highlighted the need for institutional leaders to embody their own mission and vision as they frame social justice issues for the campus community. Finally, the paper provides recommendations to institutions for strengthening high-level diversity and inclusion programs and initiatives among staff, students, and administrators. The conclusion stresses the importance of addressing the historical and current policies and practices that either facilitate or negate the goals of social justice, encouraging these privileged institutions to create internal committees or task forces that focus on racial and ethnic disparities in the institution.
Keywords: diversity, higher education, inclusion, social justice
Procedia PDF Downloads 122
654 Co-Evolution of Urban Lake System and Rapid Urbanization: Case of Raipur, Chhattisgarh
Authors: Kamal Agrawal, Ved Prakash Nayak, Akshay Patil
Abstract:
Raipur is known as a city of water bodies. The city once had around 200 man-made and natural lakes of varying sizes. These structures were constructed to collect rainwater and control flooding in the city. Due to the transition from community participation to state government management, as well as rapid urbanization, Raipur now has only about 80 lakes left. Rapid and unplanned growth has resulted in pollution, encroachment, and eutrophication of the city's lakes. The state government keeps these lakes in good condition by cleaning them and proposing lakefront developments. However, maintaining individual lakes is insufficient because urban lakes are not distinct entities. Each lake is a system comprising the lake, shore, catchment, and other components, while the urban lake system (ULS) is a combination of multiple such lake systems interacting in a complex urban setting. Thus, the project aims to propose a co-evolution model for the urban lake system (ULS) and rapid urbanization in Raipur. The goals are to comprehend the ULS; to identify elements and dimensions of urbanization that influence the ULS; to evaluate the impact of rapid urbanization on the ULS, and vice versa, in the study area; to determine how to maximize the positive impacts while minimizing the negative impacts identified in the study area; and to propose short-, medium-, and long-term planning interventions to support the ULS's co-evolution with rapid urbanization. A complexity approach is used to investigate the ULS. It is a technique for understanding large, complex systems; a complex system is one with many interconnected and interdependent elements and dimensions. Thus, elements of the ULS and of rapid urbanization are identified through a literature study to evaluate statements of their impacts (beneficial or adverse) on one another. Rapid urbanization has been identified as having elements such as demography, urban legislation, informal settlement, urban infrastructure, and tourism. Similarly, the catchment area of the lake, the lake's water quality, the water spread area, and lakefront developments are all being impacted by rapid urbanization. These nine elements serve as parameters for the subsequent analysis; elements are limited to physical parameters only. A study area within the city has been designated based on the definition provided by the National Plan for the Conservation of Aquatic Ecosystems. Three lakes fall within a one-kilometer radius, forming a small urban lake system. Because the condition of a lake is directly related to the condition of its catchment area, the catchment area of these three lakes is delineated as the study area. Data are collected to identify impact statements, and the interdependence diagram generated between the parameters yields results in terms of the interlinking of each parameter and their impact on the system as a whole. The planning interventions proposed for the ULS and rapid urbanization co-evolution model include spatial proposals as well as policy recommendations for the short, medium, and long term. The next step of this study will be to determine how to implement the proposed interventions based on the availability of resources, funds, and governance patterns.
Keywords: urban lake system, co-evolution, rapid urbanization, complex system
Procedia PDF Downloads 73
653 Optimizing Usability Testing with Collaborative Method in an E-Commerce Ecosystem
Authors: Markandeya Kunchi
Abstract:
Usability testing (UT) is one of the vital steps in the user-centred design (UCD) process when designing a product. In an e-commerce ecosystem, UT becomes primary, as new products, features, and services are launched very frequently, and there are losses for the company if an unusable and inefficient product is put on the market and rejected by customers. This paper tries to answer why UT is important in the product life-cycle of an e-commerce ecosystem. Secondary user research was conducted to find out the work patterns, development methods, types of stakeholders, technology constraints, etc., of a typical e-commerce company. Qualitative user interviews were conducted with product managers and designers to find out the structure, project planning, product management method, and role of the design team in a mid-level company. The paper tries to address the usual apprehensions of the company about inculcating UT within the team. It also identifies factors like monetary resources, the lack of a usability expert, narrow timelines, and the lack of understanding of higher management as some primary reasons. Outsourcing UT to vendors is also very prevalent among mid-level e-commerce companies, but it has its own severe repercussions, such as very little team involvement, huge cost, misinterpretation of the findings, elongated timelines, and a lack of empathy towards the customer. The shortfalls of not having a UT process in place within the team, and of conducting UT through vendors, are bad user experiences for customers while interacting with the product and badly designed products which are neither useful nor utilitarian. As a result, companies see dipping conversion rates in apps and websites, huge bounce rates, and increased uninstall rates. Thus, there was a need for a leaner UT system in place which could solve all these issues for the company. This paper highlights optimizing the UT process with a collaborative method. The degree of optimization and the structure of the collaborative method are the highlights of this paper. The collaborative method of UT is one in which the centralised design team of the company takes charge of conducting and analysing the UT. The UT is usually of a formative kind, where designers take the findings into account and use them in the ideation process. The success of the collaborative method of UT is due to its ability to sync with the product management method employed by the company or team. The collaborative method focuses on engaging various teams (design, marketing, product, administration, IT, etc.), each with its own defined roles and responsibilities, in conducting a smooth UT with users in-house. The paper finally highlights the positive results of the collaborative UT method after conducting more than 100 in-lab interviews with users across the different lines of business. Some of these are improved interaction between stakeholders and the design team, empathy towards users, improved design iteration, better sanity checks of design solutions, optimization of time and money, and effective and efficient design solutions. The future scope of collaborative UT is to make this method leaner by reducing the number of days to complete the entire project, starting from planning between teams to publishing the UT report.
Keywords: collaborative method, e-commerce, product management method, usability testing
Procedia PDF Downloads 119
652 An Investigation of Tetraspanin Proteins' Role in UPEC Infection
Authors: Fawzyah Albaldi
Abstract:
Urinary tract infections (UTIs) are among the most prevalent infectious diseases, and >80% are caused by uropathogenic E. coli (UPEC). Infection occurs following adhesion to urothelial plaques on bladder epithelial cells, whose major protein constituents are the uroplakins (UPs). Two of the four uroplakins (UPIa and UPIb) are members of the tetraspanin superfamily. The UPEC adhesin FimH is known to interact directly with UPIa. Tetraspanins are a diverse family of transmembrane proteins that generally act as 'molecular organizers' by binding different proteins and lipids to form tetraspanin-enriched microdomains (TEMs). Previous work by our group has shown that TEMs are involved in the adhesion of many pathogenic bacteria to human cells. Adhesion can be blocked by tetraspanin-derived synthetic peptides, suggesting that tetraspanins may be valuable drug targets. In this study, we investigate the role of tetraspanins in UPEC adherence to bladder epithelial cells. Human bladder cancer cell lines (T24, 5637, RT4), commonly used as in vitro models to investigate UPEC infection, along with primary human bladder cells, were used in this project. The aim was to establish a model for UPEC adhesion/infection with the objective of evaluating the impact of tetraspanin-derived reagents on this process. Such reagents could reduce the progression of UTI, particularly in patients with indwelling catheters. Tetraspanin expression on the bladder cells was investigated by qPCR and flow cytometry, with CD9 and CD81 generally highly expressed. Interestingly, despite these cell lines being used by other groups to investigate FimH antagonists, the uroplakin proteins (UPIa, UPIb, and UPIII) were poorly expressed at the cell surface, although some were present intracellularly. Attempts were made to differentiate the cell lines to induce cell surface expression of these UPs, but these were largely unsuccessful. Pre-treatment of bladder epithelial cells with an anti-CD9 monoclonal antibody significantly decreased UPEC infection, whilst anti-CD81 had no effect. A short (15 aa) synthetic peptide corresponding to the large extracellular region (EC2) of CD9 also significantly reduced UPEC adherence. Furthermore, we demonstrated specific binding of that fluorescently tagged peptide to the cells. CD9 is known to associate with a number of heparan sulphate proteoglycans (HSPGs) that have also been implicated in bacterial adhesion. Here, we demonstrated that unfractionated heparin (UFH) and heparin analogues significantly inhibited UPEC adhesion to RT4 cells, as did pre-treatment of the cells with heparinases. Pre-treatment with chondroitin sulphate (CS) and chondroitinase also significantly decreased UPEC adherence to RT4 cells. This study may shed light on a common pathogenicity mechanism involving the organisation of HSPGs by tetraspanins. In summary, although we determined that the bladder cell lines were not suitable for investigating the role of uroplakins in UPEC adhesion, we demonstrated roles for CD9 and cell surface proteoglycans in this interaction. Agents that target these may be useful in treating or preventing UTIs.
Keywords: UTIs, tspan, uroplakins, CD9
Procedia PDF Downloads 104
651 Current Applications of Artificial Intelligence (AI) in Chest Radiology
Authors: Angelis P. Barlampas
Abstract:
Learning Objectives: The purpose of this study is to briefly inform the reader about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging Findings or Procedure Details: Aids of AI in chest radiology include the following. Detects and segments pulmonary nodules. Subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, nodule two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. Reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. Detects pleural effusion. Differentiates tension pneumothorax from nontension pneumothorax. Detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis, and pneumoperitoneum. Automatically localises vertebral segments, labels ribs, and detects rib fractures. Measures the distance from the tube tip to the carina and localises both endotracheal tubes and central vascular lines. Detects consolidation and the progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD). Can evaluate lobar volumes. Identifies and labels pulmonary bronchi and vasculature and quantifies air-trapping. Offers emphysema evaluation. Provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region, and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. Assesses the lung parenchyma by way of density evaluation. Provides percentages of tissue within defined attenuation (HU) ranges, besides furnishing automated lung segmentation and lung volume information. Improves image quality for noisy images with a built-in denoising function. Detects emphysema, a common condition seen in patients with a history of smoking, as well as hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. Aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is assumed that the continuing progression of computerized systems and the improvement of software algorithms will render AI a second pair of hands for the radiologist.
Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses
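One of the quantitative tools mentioned above, the percentage of lung tissue within defined HU ranges, can be sketched as below using the widely reported emphysema index (percentage of lung voxels below -950 HU). The synthetic volume and trivial mask are placeholders; a real pipeline would load CT data (e.g., from DICOM) and a proper lung segmentation instead.

```python
# Hedged sketch: percentage of lung-mask voxels within a given HU range.
# The -950 HU cut-off is a commonly reported emphysema threshold; data are synthetic.

import numpy as np

def percent_in_hu_range(ct_hu, lung_mask, low, high):
    """Percentage of lung-mask voxels whose attenuation lies in [low, high] HU."""
    voxels = ct_hu[lung_mask]
    in_range = (voxels >= low) & (voxels <= high)
    return 100.0 * in_range.sum() / voxels.size

rng = np.random.default_rng(0)
ct_hu = rng.normal(-820, 120, size=(64, 64, 64))       # synthetic "lung-like" HU values
lung_mask = np.ones(ct_hu.shape, dtype=bool)           # trivial mask for the example

emphysema_index = percent_in_hu_range(ct_hu, lung_mask, low=-1024, high=-950)
print(f"Emphysema index (%LAA-950): {emphysema_index:.1f}%")
```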
Procedia PDF Downloads 72650 Managing Type 1 Diabetes in College: A Thematic Analysis of Online Narratives Posted on YouTube
Authors: Ekaterina Malova
Abstract:
Type 1 diabetes (T1D) is a chronic illness requiring immense lifestyle changes to reduce the chance of life-threatening complications. Moving to college may be the first time a young adult with T1D takes responsibility for all aspects of their diabetes care. In addition, people with T1D constantly face stigmatization and discrimination as a result of their health condition, which puts additional pressure on young adults with T1D. Hence, omissions in diabetes self-care often occur during the transition to college, when both the social and physical environments of young adults change drastically; these omissions contribute to the fact that emerging adults remain one of the age groups with the highest hemoglobin levels and poorest diabetes control. However, despite potentially severe health risks caused by a lack of proper diabetes self-care, little is known about the experiences of this population of emerging adults as they embark on a higher education journey. Thus, young adults with type 1 diabetes are a 'forgotten group,' meaning that their experiences are rarely addressed by researchers. Given that self-disclosure and information-seeking can be challenging for individuals with stigmatized illnesses, online platforms like YouTube have become a popular medium of self-disclosure and information-seeking for people living with T1D. Thus, this study aims to provide an analysis of the experiences that college students with T1D choose to share with the general public online and to explore the nature of the information being communicated by college students with T1D to the online community in personal narratives posted on YouTube. A systematic approach was used to retrieve a video sample by searching YouTube with the keywords 'type 1 diabetes' and 'college,' with results ordered by relevance. A total of 18 videos were saved. Video lengths ranged from 2 to 28 minutes. The data were coded using NVivo. Video transcripts were coded and analyzed utilizing the thematic analysis method. Three key themes emerged from the thematic analysis: 1) Advice, 2) Personal experience, and 3) Things I wish everyone knew about T1D. In addition, Theme 1 was divided into subtopics to differentiate between the most common types of advice: a) Overcoming stigma and b) Seeking social support. The identified themes indicate that two groups can potentially benefit from watching students’ video testimonies: 1) the lay public and 2) other students with T1D. Given that students in the videos reported a lack of T1D education among the lay public, such video narratives can serve important educational purposes and reduce health stigma, while perceived similarity and identification with the students in the videos may facilitate the transfer of health information to other individuals with T1D and positively affect their diabetes routine. Thus, online video narratives can potentially serve both educational and persuasive purposes, empowering students with T1D to stay in control of T1D while succeeding academically. Keywords: type 1 diabetes, college students, health communication, transition period
Procedia PDF Downloads 156649 Antibacterial Activity of Rosmarinus officinalis (Rosemary) and Murraya koenigii (Curry Leaves) against Multidrug Resistant S. aureus and Coagulase Negative Staphylococcus Species
Authors: Asma Naim, Warda Mushtaq
Abstract:
Staphylococcus species are among the most versatile and adaptive organisms. They are widespread and naturally found on the skin, mucosa, and nose in humans. Among these, Staphylococcus aureus is the most important species. These organisms act as opportunistic pathogens and can infect various organs of the host, causing anything from minor skin infections to severe toxin-mediated diseases and life-threatening nosocomial infections. Staphylococcus aureus has acquired resistance against β-lactam antibiotics by the production of β-lactamase, and Methicillin-Resistant Staphylococcus aureus (MRSA) strains have also been reported with increasing frequency. MRSA strains have been associated with nosocomial as well as community-acquired infections. Medicinal plants have enormous potential as antimicrobial substances and have been used in traditional medicine. The search for medicinally valuable plants with antimicrobial activity is being emphasized due to increasing antibiotic resistance in bacteria. In the present study, the antibacterial potential of Rosmarinus officinalis (Rosemary) and Murraya koenigii (curry leaves) was evaluated. These are common household herbs used in food as enhancers of flavor and aroma. The crude aqueous infusion, decoction, and ethanolic extracts of curry leaves and rosemary, and the essential oil of rosemary, were investigated for antibacterial activity against multidrug-resistant Staphylococcus strains using the well diffusion method. In the present study, 60 multidrug-resistant clinical isolates of S. aureus (43) and coagulase-negative staphylococci (CoNS) (17) were screened against different concentrations of crude extracts of Rosmarinus officinalis and Murraya koenigii. Out of these 60 isolates, 43 were sensitive to the aqueous infusion of rosemary, 23 to the aqueous decoction, and 58 to the ethanolic extract, whereas 24 isolates were sensitive to the essential oil. In the case of the curry leaves, no antibacterial activity was observed in the aqueous infusion and decoction, while only 14 isolates were sensitive to the ethanolic extract. The aqueous infusion of rosemary (50% concentration) exhibited a zone of inhibition of 21 (±5.69) mm against CoNS and 17 (±4.77) mm against S. aureus; the zone of inhibition of the 50% aqueous decoction of rosemary was also larger against CoNS, 17 (±5.78) mm, than against S. aureus, 13 (±6.91) mm; the 50% ethanolic extract showed almost similar zones of inhibition against S. aureus, 22 (±3.61) mm, and CoNS, 21 (±7.64) mm; whereas the essential oil of rosemary showed a larger zone of inhibition against S. aureus, i.e., 16 (±4.67) mm, than against CoNS, 15 (±6.94) mm. These results show that the ethanolic extract of rosemary has significant antibacterial activity. The aqueous infusion and decoction of curry leaves revealed no significant antibacterial potential against any of the staphylococcal species, and the ethanolic extract also showed only a weak response. Staphylococcus strains were susceptible to the crude extracts and essential oil of rosemary in a dose-dependent manner, with the aqueous infusion showing the highest zone of inhibition, while the ethanolic extract also demonstrated antistaphylococcal activity. These results demonstrate that rosemary possesses antistaphylococcal activity. Keywords: antibacterial activity, curry leaves, multidrug resistant, rosemary, S. aureus
Procedia PDF Downloads 249648 Molecular Dynamics Simulation Study of Sulfonated Polybenzimidazole Polymers as Promising Forward Osmosis Membranes
Authors: Seyedeh Pardis Hosseini
Abstract:
With the worsening scarcity of clean and affordable water in many countries, wastewater treatment has been chosen as a viable method to produce freshwater for various uses. Even though reverse osmosis dominates the wastewater treatment market, forward osmosis (FO) processes have significant advantages, such as potentially using a renewable and low-grade energy source and improving water quality. FO is an osmotically driven membrane process that uses a highly concentrated draw solution and a relatively low-concentration feed solution across a semi-permeable membrane. Among the many novel FO membranes that have been introduced over the past decades, polybenzimidazole (PBI) membranes, a class of aromatic heterocyclic-based polymers, have shown high thermal and chemical stability because of their unique chemical structure. However, the studies reviewed indicate that the hydrophilicity of PBI membranes is comparatively low. Hence, there is an urgent need to develop novel FO membranes with modified PBI polymers to promote hydrophilicity. A few studies have been undertaken to improve PBI hydrophilicity by fabricating mixed matrix polymeric membranes and by surface modification. Therefore, in this study, two different sulfonated polybenzimidazole (SPBI) polymers with the same backbone but different functional groups, namely arylsulfonate PBI (PBI-AS) and propylsulfonate PBI (PBI-PS), are introduced as FO membranes and studied via the molecular dynamics (MD) simulation method. The FO simulation box consists of three distinct regions: a saltwater region, a membrane region, and a pure-water region. The pure-water region is situated at the upper part of the simulation box, while the saltwater region, which contains an aqueous salt solution of Na+ and Cl− ions along with water molecules, occupies the lower part of the simulation box. Specifically, the saltwater region includes 710 water molecules and 24 Na+ and 24 Cl− ions, resulting in a combined concentration of 10 weight percent (wt%). The pure-water region comprises 788 water molecules. Both the saltwater and pure-water regions have a density of 1.0 g/cm³. The membrane region, positioned between the saltwater and pure-water regions, is constructed from three types of polymers: PBI, PBI-AS, and PBI-PS, each consisting of three polymer chains with 30 monomers per chain. The structural and thermophysical properties of the polymers, water molecules, and Na+ and Cl− ions were analyzed using the COMPASS forcefield. All simulations were conducted using the BIOVIA Materials Studio 2020 software. By monitoring the variation in the number of water molecules over the simulation time within the saltwater region, the water permeability of the polymer membranes was calculated and subsequently compared. The results indicated that SPBI polymers exhibited higher water permeability compared to PBI polymers. This enhanced permeability can be attributed to the structural and compositional differences between SPBI and PBI polymers, which likely facilitate more efficient water transport through the membrane. Consequently, the adoption of SPBI polymers in the FO process is anticipated to result in significantly improved performance. This improvement could lead to higher water flux rates, better salt rejection, and overall more efficient use of resources in desalination and water purification applications. Keywords: forward osmosis, molecular dynamics simulation, sulfonated polybenzimidazole, water permeability
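The permeability comparison described above rests on counting water molecules in the saltwater region as the simulation progresses. The Python sketch below shows one minimal way such trajectory counts could be post-processed into a net transport rate and a per-area flux; the function name, the sampling interval, the membrane area, and the synthetic counts are illustrative assumptions, not the authors' Materials Studio workflow.

```python
import numpy as np

def water_flux_from_counts(times_ns, n_water_saltwater_region, membrane_area_nm2):
    """Estimate net water transport into the saltwater region of an FO simulation box.

    times_ns                 -- simulation times (ns)
    n_water_saltwater_region -- water molecules counted in that region at each time
    membrane_area_nm2        -- cross-sectional area of the membrane (nm^2)

    Returns the fitted slope (molecules/ns) and a simple per-area flux (molecules/ns/nm^2).
    """
    slope, _ = np.polyfit(times_ns, n_water_saltwater_region, 1)  # linear fit: count vs. time
    return slope, slope / membrane_area_nm2

# Hypothetical post-processing: counts sampled every 0.5 ns over a 10 ns trajectory,
# starting from the 710 water molecules initially placed in the saltwater region.
times = np.arange(0, 10.5, 0.5)
counts = 710 + 3.2 * times + np.random.normal(0, 2, times.size)   # synthetic drift of ~3 molecules/ns
slope, flux = water_flux_from_counts(times, counts, membrane_area_nm2=16.0)
print(f"net transport: {slope:.2f} molecules/ns, flux: {flux:.3f} molecules/ns/nm^2")
```

Comparing such slopes between the PBI, PBI-AS, and PBI-PS membranes is one way the relative water permeabilities reported above could be ranked.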
Procedia PDF Downloads 29647 Aligning Informatics Study Programs with Occupational and Qualifications Standards
Authors: Patrizia Poscic, Sanja Candrlic, Danijela Jaksic
Abstract:
The University of Rijeka, Department of Informatics participated in the Stand4Info project, co-financed by the European Union, with the main idea of aligning study programs with occupational and qualifications standards in the field of Informatics. A brief overview of our research methodology, goals and deliverables is shown. Our main research and project objectives were: a) development of occupational standards, qualification standards and study programs based on the Croatian Qualifications Framework (CROQF), b) higher education quality improvement in the field of information and communication sciences, c) increasing the employability of students of information and communication technology (ICT) and science, and d) continuously improving competencies of teachers in accordance with the principles of CROQF. CROQF is a reform instrument in the Republic of Croatia for regulating the system of qualifications at all levels through qualifications standards based on learning outcomes and following the needs of the labor market, individuals and society. The central elements of CROQF are learning outcomes - competences acquired by the individual through the learning process and proved afterward. The place of each acquired qualification is set by the level of the learning outcomes belonging to that qualification. The placement of qualifications at respective levels allows the comparison and linking of different qualifications, as well as linking of Croatian qualifications' levels to the levels of the European Qualifications Framework and the levels of the Qualifications framework of the European Higher Education Area. This research produced three proposals of occupational standards at the undergraduate study level (System Analyst, Developer, ICT Operations Manager) and two at the graduate (master) level (System Architect, Business Architect). For each occupational standard, employers provided a list of key tasks and the associated competencies necessary to perform them. A set of competencies required for each particular job in the workplace was defined, and each set was described in more detail by its individual competencies. Based on the sets of competencies from the occupational standards, sets of learning outcomes were defined, and competencies from the occupational standards were linked with learning outcomes. For each learning outcome, as well as for each set of learning outcomes, it was necessary to specify the verification method, material, and human resources. The task of the project was to suggest revision and improvement of the existing study programs. It was necessary to analyze the existing programs and determine how they meet and fulfill the defined learning outcomes. This way, one could see: a) which learning outcomes from the qualifications standards are covered by existing courses, b) which learning outcomes have yet to be covered, c) whether they are covered by mandatory or elective courses, and d) whether some courses are unnecessary or redundant.
Overall, the main research results are: a) completed proposals of qualification and occupational standards in the field of ICT, b) revised curricula of undergraduate and master study programs in ICT, c) a sustainable partnership and stakeholder association network, d) a knowledge network - informing the public and stakeholders (teachers, students, and employers) about the importance of CROQF establishment, and e) teachers educated in innovative methods of teaching. Keywords: study program, qualification standard, occupational standard, higher education, informatics and computer science
Procedia PDF Downloads 143646 Management of Non-Revenue Municipal Water
Authors: Habib Muhammetoglu, I. Ethem Karadirek, Selami Kara, Ayse Muhammetoglu
Abstract:
The problem of non-revenue water (NRW) from municipal water distribution networks is common in many countries such as Turkey, where the average yearly water losses are around 50%. Water losses can be divided into two major types, namely: 1) Real or physical water losses, and 2) Apparent or commercial water losses. Total water losses in Antalya city, Turkey are around 45%. Methods: A research study was conducted to develop appropriate methodologies to reduce NRW. A pilot study area of about 60 thousand inhabitants was chosen to apply the study. The pilot study area has a supervisory control and data acquisition (SCADA) system for the monitoring and control of many water quantity and quality parameters at the groundwater drinking wells, pumping stations, distribution reservoirs, and along the water mains. The pilot study area was divided into 18 District Metered Areas (DMAs) with different numbers of service connections, ranging from a few connections to less than 3000 connections. The flow rate and water pressure to each DMA were continuously measured on-line by an accurate flow meter and water pressure meter connected to the SCADA system. Customer water meters were installed for all billed and unbilled water users. The monthly water consumption as given by the water meters was recorded regularly. A water balance was carried out for each DMA using the well-known standard IWA approach. There were considerable variations in the water loss percentages and the components of the water losses among the DMAs of the pilot study area. Old Class B customer water meters at one DMA were replaced by more accurate new Class C water meters. Hydraulic modelling using the US-EPA EPANET model was carried out in the pilot study area for the prediction of water pressure variations at each DMA. The data sets required to calibrate and verify the hydraulic model were supplied by the SCADA system. It was noticed that a number of the DMAs exhibited high water pressure values. Therefore, pressure reducing valves (PRV) with constant head were installed to reduce the pressure to a suitable level determined by the hydraulic model. On the other hand, the hydraulic model revealed that the water pressure at the other DMAs could not be reduced while complying with the minimum pressure requirement (3 bars) stated in the related standards. Results: Physical water losses were reduced considerably as a result of just reducing water pressure. Further reduction of physical water losses was achieved by applying acoustic methods. The results of the water balances helped in identifying the DMAs that have considerable physical losses. Many bursts were detected, especially in the DMAs that have high physical water losses. The SCADA system was very useful for assessing the efficiency of this method and for checking the quality of repairs. Regarding apparent water losses reduction, changing the customer water meters resulted in increasing water revenue by more than 20%. Conclusions: DMA, SCADA, modelling, pressure management, leakage detection and accurate customer water meters are efficient tools for reducing NRW. Keywords: NRW, water losses, pressure management, SCADA, apparent water losses, urban water distribution networks
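To make the water balance step concrete, the sketch below implements a simplified top-down IWA-style balance for a single DMA: non-revenue water is the system input volume minus billed authorized consumption, and what remains after authorized consumption and apparent losses is attributed to real (physical) losses. The class name and the numbers are illustrative assumptions, not measurements from the Antalya pilot area.

```python
from dataclasses import dataclass

@dataclass
class DMABalance:
    """Simplified top-down IWA water balance for one District Metered Area (DMA).

    All volumes refer to the same period (e.g. m^3/month).
    """
    system_input: float        # metered inflow to the DMA (from the SCADA flow meter)
    billed_authorized: float   # revenue water (customer meter readings)
    unbilled_authorized: float # e.g. firefighting, network flushing
    apparent_losses: float     # meter under-registration, unauthorized consumption

    @property
    def non_revenue_water(self) -> float:
        return self.system_input - self.billed_authorized

    @property
    def real_losses(self) -> float:
        # what remains after authorized consumption and apparent losses is physical leakage
        return (self.system_input - self.billed_authorized
                - self.unbilled_authorized - self.apparent_losses)

    @property
    def nrw_percent(self) -> float:
        return 100.0 * self.non_revenue_water / self.system_input

# Hypothetical monthly volumes for one DMA
dma = DMABalance(system_input=52_000, billed_authorized=30_000,
                 unbilled_authorized=1_500, apparent_losses=4_000)
print(f"NRW: {dma.non_revenue_water:.0f} m3 ({dma.nrw_percent:.1f}%), "
      f"real losses: {dma.real_losses:.0f} m3")
```

Repeating this calculation per DMA is what reveals the variation in loss components mentioned above and points to the districts where acoustic leak detection or meter replacement pays off most.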
Procedia PDF Downloads 406645 Concentrations of Leptin, C-Peptide and Insulin in Cord Blood as Fetal Origins of Insulin Resistance and Their Effect on the Birth Weight of the Newborn
Authors: R. P. Hewawasam, M. H. A. D. de Silva, M. A. G. Iresha
Abstract:
Obesity is associated with an increased risk of developing insulin resistance. Insulin resistance often progresses to type-2 diabetes mellitus and is linked to a wide variety of other pathophysiological features, including hypertension, hyperlipidemia, atherosclerosis (metabolic syndrome) and polycystic ovarian syndrome. Macrosomia is common in infants born not only to women with gestational diabetes mellitus but also to non-diabetic obese women. During the past two decades, obesity in children and adolescents has risen significantly in Asian populations, including Sri Lanka. There is increasing evidence that infants who are born large for gestational age (LGA) are more likely to be obese in childhood. It is also established from previous studies that Asian populations have a higher percentage of body fat at a lower body mass index compared to Caucasians. High leptin levels in cord blood have been reported to correlate with fetal adiposity at birth. Previous studies have also shown that cord blood C-peptide and insulin levels are significantly and positively correlated with birth weight. Therefore, the objective of this preliminary study was to determine the relationship between parameters of fetal insulin resistance, such as leptin, C-peptide and insulin, and the birth weight of the newborn in a study population in Southern Sri Lanka. Umbilical cord blood was collected from 90 newborns, and the concentrations of insulin, leptin, and C-peptide were measured by the ELISA technique. Birth weight, length, and occipital-frontal, chest, hip and calf circumferences of newborns were measured, and characteristics of the mother such as age, height, weight before pregnancy and weight gain were collected. The relationships between insulin, leptin, C-peptide, and anthropometrics were assessed by Pearson’s correlation, while the Mann-Whitney U test was used to assess the differences in cord blood leptin, C-peptide, and insulin levels between groups. A significant difference (p < 0.001) was observed between the insulin levels of infants born LGA (18.73 ± 0.64 µIU/ml) and AGA (13.08 ± 0.43 µIU/ml). Consistently, a significant increase in concentration (p < 0.001) was observed in C-peptide levels of infants born LGA (9.32 ± 0.77 ng/ml) compared to AGA (5.44 ± 0.19 ng/ml). The cord blood leptin concentration of LGA infants (12.67 ng/mL ± 1.62) was significantly higher (p < 0.001) compared to the AGA infants (7.10 ng/mL ± 0.97). Significant positive correlations (p < 0.05) were observed between cord leptin levels and birth weight, pre-pregnancy maternal weight, and BMI in the AGA and LGA infants. Consistently, a significant positive correlation (p < 0.05) was observed between the birth weight and the C-peptide concentration. The significantly higher concentrations of leptin, C-peptide and insulin in the cord blood of LGA infants suggest that they may be involved in regulating fetal growth. Although previous studies suggest comparatively high levels of body fat in the Asian population, the values obtained in this study are not significantly different from values previously reported from Caucasian populations. According to this preliminary study, maternal pre-pregnancy BMI and weight may serve as significant indicators of cord blood parameters of insulin resistance and possibly of the birth weight of the newborn. Keywords: large for gestational age, leptin, C-peptide, insulin
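For readers who want to reproduce the style of analysis described above (group comparisons with the Mann-Whitney U test and associations via Pearson's correlation), the short Python sketch below runs both tests on synthetic cord-leptin and birth-weight values. The numbers only loosely echo the ranges reported and are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic cohorts: cord leptin (ng/mL) and birth weight (g) for AGA and LGA newborns
leptin_aga = rng.normal(7.1, 2.0, 60)
leptin_lga = rng.normal(12.7, 3.0, 30)
birth_weight = np.concatenate([rng.normal(3200, 300, 60), rng.normal(4200, 250, 30)])
leptin_all = np.concatenate([leptin_aga, leptin_lga])

# Group comparison (non-parametric), as in the study design
u_stat, p_mw = stats.mannwhitneyu(leptin_lga, leptin_aga, alternative="two-sided")

# Association between leptin and birth weight
r, p_corr = stats.pearsonr(leptin_all, birth_weight)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.3g}")
print(f"Pearson r = {r:.2f}, p = {p_corr:.3g}")
```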
Procedia PDF Downloads 158644 Social Factors That Contribute to Promoting and Supporting Resilience in Children and Youth following Environmental Disasters: A Mixed Methods Approach
Authors: Caroline McDonald-Harker, Julie Drolet
Abstract:
In the last six years, Canada has experienced two major and catastrophic environmental disasters: the 2013 Southern Alberta flood and the 2016 Fort McMurray, Alberta wildfire. These two disasters resulted in damages exceeding 12 billion dollars, the costliest disasters in Canadian history. In the aftermath of these disasters, many families faced the loss of homes, places of employment, schools, and recreational facilities, and also experienced social, emotional, and psychological difficulties. Children and youth are among the most vulnerable to the devastating effects of disasters due to the physical, cognitive, and social factors related to their developmental life stage. Yet children and youth also have the capacity to be resilient and act as powerful catalysts for change in their own lives and wider communities following disaster. Little is known, particularly from a sociological perspective, about the specific factors that contribute to resilience in children and youth, and effective ways to support their overall health and well-being. This paper focuses on the voices and experiences of children and youth residing in these two disaster-affected communities in Alberta, Canada and specifically examines: 1) How children and youth’s lives are impacted by the tragedy, devastation, and upheaval of disaster; 2) Ways that children and youth demonstrate resilience when directly faced with the adversarial circumstances of disaster; and 3) The cumulative internal and external factors that contribute to bolstering and supporting resilience among children and youth post-disaster. This paper discusses the characteristics associated with high levels of resilience in 183 children and youth ages 5 to 17, based on quantitative and qualitative data obtained through a mixed methods approach. Child and youth participants were administered the Children and Youth Resilience Measure (CYRM-28) in order to examine factors that influence resilience processes, including individual, caregiver, and context factors. The CYRM-28 was then supplemented with qualitative interviews with children and youth to contextualize the CYRM-28 resiliency factors and provide further insight into their overall disaster experience. Findings reveal that high levels of resilience among child and youth participants are associated with both individual factors and caregiver factors, specifically positive outlook, effective communication, peer support, and physical and psychological caregiving. Individual and caregiver factors helped mitigate the negative effects of disaster, thus bolstering resilience in children and youth. This paper discusses the implications that these findings have for understanding the specific mechanisms that support the resiliency processes and overall recovery of children and youth following disaster; the importance of bridging the gap between children and youth’s needs and the services and supports provided to them post-disaster; and the need to develop resiliency processes and practices that empower children and youth as active agents of change in their own lives following disaster. These findings contribute to furthering knowledge about pragmatic and representative changes to resources, programs, and policies surrounding disaster response, recovery, and mitigation. Keywords: children and youth, disaster, environment, resilience
Procedia PDF Downloads 125643 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is finetuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations between both the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory. Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
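The two proxies at the core of this analysis, per-image reconstruction error and nearest-neighbour distinctiveness in latent space, are straightforward to express in code. The PyTorch sketch below uses a small stand-in autoencoder and random images so that it runs on its own; the actual study used a VGG-based autoencoder pre-trained on ImageNet and the MemCat images, so the architecture, image size, and placeholder memorability scores here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Small stand-in for the VGG-based autoencoder described above."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z.flatten(1)

@torch.no_grad()
def memorability_proxies(model, images):
    """Per-image reconstruction error and latent distinctiveness.

    Distinctiveness is the Euclidean distance to the nearest other image in
    latent space, mirroring the measure used in the abstract.
    """
    recon, z = model(images)
    recon_error = ((recon - images) ** 2).flatten(1).mean(dim=1)   # MSE per image
    dists = torch.cdist(z, z)                                      # pairwise latent distances
    dists.fill_diagonal_(float("inf"))                             # ignore self-distance
    distinctiveness = dists.min(dim=1).values                      # nearest-neighbour distance
    return recon_error, distinctiveness

model = TinyAutoencoder().eval()
images = torch.rand(8, 3, 64, 64)        # stand-in batch; MemCat images in practice
err, dist = memorability_proxies(model, images)

# With real memorability scores one would correlate them with these proxies:
scores = torch.rand(8)                   # placeholder memorability scores
print("corr(error, score):", torch.corrcoef(torch.stack([err, scores]))[0, 1].item())
```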
Procedia PDF Downloads 92642 Innovations and Challenges: Multimodal Learning in Cybersecurity
Authors: Tarek Saadawi, Rosario Gennaro, Jonathan Akeley
Abstract:
There is rapidly growing demand for professionals to fill positions in Cybersecurity. This is recognized as a national priority both by government agencies and the private sector. Cybersecurity is a very wide technical area which encompasses all measures that can be taken in an electronic system to prevent criminal or unauthorized use of data and resources. This requires defending computers, servers, networks, and their users from any kind of malicious attack. The need to address this challenge has been recognized globally but is particularly acute in the New York metropolitan area, home to some of the largest financial institutions in the world, which are prime targets of cyberattacks. In New York State alone, there are currently around 57,000 jobs in the Cybersecurity industry, with more than 23,000 unfilled positions. The Cybersecurity Program at City College is a collaboration between the Departments of Computer Science and Electrical Engineering. In Fall 2020, The City College of New York matriculated its first students in the Cybersecurity Master of Science program. The program was designed to fill gaps in the previous offerings and evolved out of an established partnership with Facebook on Cybersecurity Education. City College has designed a program where courses, curricula, syllabi, materials, labs, etc., are developed in cooperation and coordination with industry whenever possible, ensuring that students graduating from the program will have the necessary background to seamlessly segue into industry jobs. The Cybersecurity Program has created multiple pathways for prospective students to obtain the necessary prerequisites to apply, in order to build a more diverse student population. The program can also be pursued on a part-time basis, which makes it available to working professionals. Since City College’s Cybersecurity M.S. program was established to equip students with the advanced technical skills needed to thrive in a high-demand, rapidly-evolving field, it incorporates a range of pedagogical formats. From its outset, the Cybersecurity program has sought to provide both the theoretical foundations necessary for meaningful work in the field and labs and applied learning projects aligned with the skillsets required by industry. The efforts have involved collaboration with outside organizations and with visiting professors designing new courses on topics such as Adversarial AI, Data Privacy, Secure Cloud Computing, and blockchain. Although the program was initially designed with a single asynchronous course in the curriculum, with the rest of the classes designed to be offered in-person, the advent of the COVID-19 pandemic necessitated a move to fully online learning. The shift to online learning has provided lessons for future development by providing examples of some inherent advantages of the medium in addition to its drawbacks. This talk will address the structure of the newly-implemented Cybersecurity Master’s Program and discuss the innovations, challenges, and possible future directions. Keywords: cybersecurity, new york, city college, graduate degree, master of science
Procedia PDF Downloads 148641 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried which cannot be fitted into predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the weights of the antenna parameters depend directly on geometric characteristics. The evolutionary algorithm can be explained simply in terms of a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and permutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too varied to fit into a single function. Therefore, the weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariant coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtaining the requirements for studying and methodizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted here and evaluated for all possible conditions to obtain the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that are incurred during simulations. HFSS is chosen for the simulations and results. MATLAB is used to generate the computations and combinations and for data logging. MATLAB is also used to apply machine learning algorithms and to plot the data used to design the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric properties; K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and machine learning approach for automated optimization of the Vivaldi antenna. Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
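To make the optimization loop concrete, the Python sketch below shows a minimal genetic algorithm of the kind described: a population of normalised design parameters is scored by a fitness function, the better half seeds the next generation through recombination, and small perturbations explore the space. The fitness function here is a synthetic placeholder standing in for an HFSS simulation call (the paper drives HFSS from MATLAB via the HFSS API); the parameter names, population size, and generation count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each candidate is a vector of normalised Vivaldi design parameters in [0, 1],
# e.g. [slot-line width, flare aperture, taper rate, substrate thickness].
N_PARAMS, POP_SIZE, N_GEN = 4, 30, 40

def fitness(candidate):
    """Placeholder quality function.

    In the workflow described above, this would invoke an HFSS simulation for the
    candidate geometry and return e.g. gain or bandwidth; a smooth synthetic
    function stands in here so the loop runs on its own.
    """
    target = np.array([0.3, 0.7, 0.5, 0.2])
    return -np.sum((candidate - target) ** 2)

population = rng.random((POP_SIZE, N_PARAMS))
for gen in range(N_GEN):
    scores = np.array([fitness(c) for c in population])
    # selection: keep the better half as parents
    parents = population[np.argsort(scores)[::-1][: POP_SIZE // 2]]
    # recombination: uniform crossover between random parent pairs
    idx = rng.integers(0, len(parents), size=(POP_SIZE, 2))
    mask = rng.random((POP_SIZE, N_PARAMS)) < 0.5
    children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
    # perturbation: small Gaussian noise, clipped to the allowed range
    children += rng.normal(0, 0.05, children.shape)
    population = np.clip(children, 0.0, 1.0)

best = population[np.argmax([fitness(c) for c in population])]
print("best normalised design parameters:", np.round(best, 3))
```

In an actual run, the fitness evaluations would be the expensive step, which is why the abstract relies on the HFSS API and the MATLAB parallel processing toolbox to simulate many candidates concurrently.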
Procedia PDF Downloads 111640 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Also, because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool to show the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers to provide the best method for detecting normal signals from abnormal ones. The data are from both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Due to the low positional accuracy and time limit of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate different types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, which are widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from the abnormal ones. To evaluate the efficiency of the proposed classifiers in this paper, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research is aimed at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the amount of these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that, due to limited time accuracy, some of the information in this signal is hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers. Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
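As an illustration of turning R-R intervals into HRV features of the kind discussed above, the Python sketch below computes a few standard linear descriptors plus SD1/SD2 from the Poincaré return map, one common family of nonlinear HRV measures. It uses a synthetic R-R series; the exact feature set, filtering, and classifiers of the paper are not reproduced here.

```python
import numpy as np

def hrv_features(rr_ms):
    """Linear and nonlinear features from a sequence of R-R intervals (milliseconds).

    SD1/SD2 are derived from the Poincare (return) map of successive R-R pairs,
    a widely used nonlinear descriptor; it stands in for the paper's own features.
    """
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                      # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))        # short-term variability
    sd1 = np.sqrt(0.5) * diff.std(ddof=1)      # dispersion perpendicular to the identity line
    sd2 = np.sqrt(max(2 * sdnn ** 2 - sd1 ** 2, 0.0))
    return {"mean_rr": rr.mean(), "sdnn": sdnn, "rmssd": rmssd,
            "sd1": sd1, "sd2": sd2, "sd1_sd2": sd1 / sd2 if sd2 else np.nan}

# Synthetic R-R series (ms): roughly 75 bpm with mild variability
rr_example = 800 + np.cumsum(np.random.normal(0, 5, 300)) * 0.1 + np.random.normal(0, 20, 300)
features = hrv_features(rr_example)
print({k: round(float(v), 2) for k, v in features.items()})

# A feature vector like this can then be fed to an MLP or SVM classifier
# (e.g. sklearn.svm.SVC) and evaluated with the area under the ROC curve.
```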
Procedia PDF Downloads 264639 Investigating the Nature of Transactions Behind Violations Along Bangalore’s Lakes
Authors: Sakshi Saxena
Abstract:
Bangalore is an IT industry-based metropolitan city in the state of Karnataka in India. It has experienced tremendous urbanization at the expense of the environment. Several instances of disappearing lakes have raised questions about the reasons behind development over and near ecologically sensitive areas. Lakes in Bangalore can be considered commons on both a local and a regional scale, and these water bodies are becoming less interconnected because of encroachment in the catchment area. Other sociocultural and environmental risks that have led to social issues are now a source of concern. They serve as an example of the transformations in commons, a dilemma that arises as commons shift from rural to urban settings, as well as of the complicated institutional issues associated with governance. According to some scholarly work and ecologists, a nexus of public and commercial institutions is primarily responsible for the depletion of water tanks and the inefficiency of the planning process. It is said that Bangalore's growth as an urban centre, together with the demands it created, particularly on land and water, resulted in the emergence of a middle and upper class that was demanding and self-assured. For the report in focus, it is essential to understand the issues and problems which led to these encroachments and to capture any violations around these lakes and tanks that arose during these decades. To claim watersheds and lake edges as properties, institutional arrangements (organizations, laws, and policies) intersect with planning authorities. Because of unregulated or indiscriminate forms of urbanization, it is claimed that the engagement of actors and the negotiations of the process, including government ignorance, are allowing this problem to flourish. In general, the governance of natural resources in India is largely state-based. This is due to the constitutional scheme, which since the Government of India Act, 1935 has in principle given the power to the states to legislate in this area. Thus, states have the exclusive power to regulate water supplies, irrigation and canals, drainage and embankments, water storage, hydropower, and fisheries. The main aim, therefore, is to understand institutional arrangements and the master planning processes behind these arrangements. To illustrate the ambiguity with an example, custodianship alone is a role divided between two state and two city-level bodies. This creates regulatory ambiguity, and the effects on the environment include changes in city temperature, urban flooding, etc. As established, the main kinds of issues around lakes/tanks in Bangalore are encroachment and depletion. This study will be further enhanced by a physical survey of three of these lakes, focusing on the Bellandur site and the stakeholders involved. According to the study's findings thus far, corrupt politicians and dubious land transaction tools are involved in the real estate industry. It appears that some destruction could have been stopped, or at least mitigated, if there had been a robust system of urban planning processes in place along with strong institutional arrangements to protect lakes. Keywords: wetlands, lakes, urbanization, bangalore, politics, reservoirs, municipal jurisdiction, lake connections, institutions
Procedia PDF Downloads 78