Search results for: repository
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 148

28 Understanding the Cultural Landscape of Kuttanad: Life within the Constraints of Nature

Authors: K. Nikilsha, Lakshmi Manohar, Debayan Chatterjee

Abstract:

Landscape is a setting that informs the way of life of a set of people, and the repository of intangible values and human meanings that nurture our very existence. Along with the linkage that it forms with our lives, it can be argued that landscape and memory cannot be separated, as landscape is the nucleus of our memories. In this context, this paper studies the landscape evolution of a region with a unique geographic setting, where the dependency of the inhabitants on its resources led to the formation of peculiar beliefs and taboos that formed the basis of a set of unwritten rules and guidelines which they still follow as part of their lifestyle. One such example is Kuttanad, a low-lying region in Kerala that is a complex mosaic of fragmented agricultural landscape incorporating coastal backwaters, rivers, marshes, paddy fields and water channels. The greater the inhabitants' physical involvement with these resources, the stronger their attachment to them. This attachment to the place is especially strong because the land itself was created through the toil of low-caste labourers who worked day and night to reclaim Kuttanad from the water, financed by their landlords. However, the greatest challenge they face is posed by the forces of water in the form of floods. As this land is fed by five rivers, even a slight variation in rainfall in its watershed area can cause a large imbalance in the water level, inundating the reclaimed land. The effects of climate change, including increased rainfall, sea-level rise and shifting seasons, can act as a catalyst for this damage. Hasty urbanization has led to the conversion of paddy fields to housing plots and coconut/plantain fields, with no regard for the traditional systems that once respected nature and combated floods and droughts through the various cultural practices and taboos of the people. Thus, it is essential to look back at the landscape evolution of Kuttanad, to recognise the methods traditionally used in the region to establish a cultural landscape, and to understand how climate change and urbanisation will pose a challenge to the existing landscape and lifestyle. This research also explores the possibilities of alternative and sustainable approaches for resilient urban development, with Kuttanad as a case study.

Keywords: ecological conservation, landscape and ecological engineering, landscape evolution, man-made landscapes

Procedia PDF Downloads 235
27 Implications of Measuring the Progress towards Financial Risk Protection Using Varied Survey Instruments: A Case Study of Ghana

Authors: Jemima C. A. Sumboh

Abstract:

Given the urgency of and the consensus on moving towards Universal Health Coverage (UHC), health financing systems need to be accurately and consistently monitored to provide valuable data to inform policy and practice. Most of the indicators for monitoring UHC, particularly catastrophe and impoverishment, are established based on the impact of out-of-pocket health payments (OOPHP) on households' living standards, collected through varied household surveys. These surveys, however, vary substantially in survey methods such as the length of the recall period, the number of items included in the questionnaire, or the framing of questions, potentially influencing the estimated level of OOPHP. Using different survey instruments can therefore produce inaccurate, inconsistent, erroneous and misleading estimates of UHC, which in turn can lead to wrong policy decisions. Using data from a household budget survey conducted by the Navrongo Health Research Center in Ghana from May 2017 to December 2018, this study explores the potential implications of using surveys with varied levels of disaggregation of OOPHP data on estimates of financial risk protection. The household budget survey, structured around food and non-food expenditure, compared three OOPHP measuring instruments: Version I (existing questions used to measure OOPHP in household budget surveys), Version II (new questions developed by benchmarking the existing Classification of Individual Consumption According to Purpose (COICOP) OOPHP questions in household surveys) and Version III (existing questions used to measure OOPHP in health surveys, integrated into household budget surveys; for this, the Demographic and Health Survey (DHS) instrument was used). Versions I, II and III contained 11, 44, and 56 health items, respectively; the choice of recall periods was held constant across versions. The sample sizes for Versions I, II and III were 930, 1032 and 1068 households, respectively. Financial risk protection will be measured based on the catastrophic and impoverishment methodologies using STATA 15 and ADePT software for each version. It is expected that findings from this study will make a valuable contribution to the repository of knowledge on standardizing survey instruments to obtain estimates of financial risk protection that are valid and consistent.
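
As a simple illustration of the catastrophic-payment methodology mentioned above, the sketch below computes a headcount indicator from hypothetical household data. The 10% threshold, column names and figures are assumptions for illustration only; the study itself uses STATA 15 and ADePT.

```python
# Hypothetical sketch: computing a catastrophic-payment indicator from
# household survey data. Column names and values are illustrative
# assumptions, not the study's actual variables.
import pandas as pd

THRESHOLD = 0.10  # OOP share of total expenditure flagged as catastrophic (assumed)

households = pd.DataFrame({
    "household_id": [1, 2, 3],
    "total_expenditure": [1200.0, 800.0, 950.0],   # per period, same currency
    "oop_health_payments": [60.0, 150.0, 40.0],
})

households["oop_share"] = (
    households["oop_health_payments"] / households["total_expenditure"]
)
households["catastrophic"] = households["oop_share"] > THRESHOLD

# Headcount: share of households incurring catastrophic payments
headcount = households["catastrophic"].mean()
print(f"Catastrophic payment headcount: {headcount:.1%}")
```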

Keywords: Ghana, household budget surveys, measuring financial risk protection, out-of-pocket health payments, survey instruments, universal health coverage

Procedia PDF Downloads 107
26 Validation of an Impedance-Based Flow Cytometry Technique for High-Throughput Nanotoxicity Screening

Authors: Melanie Ostermann, Eivind Birkeland, Ying Xue, Alexander Sauter, Mihaela R. Cimpan

Abstract:

Background: New reliable and robust techniques to assess the biological effects of nanomaterials (NMs) in vitro are needed to speed up safety analysis and to identify the key physicochemical parameters of NMs that are responsible for their acute cytotoxicity. The central aim of this study was to validate and evaluate the applicability and reliability of an impedance-based flow cytometry (IFC) technique for the high-throughput screening of NMs. Methods: Eight inorganic NMs from the European Commission Joint Research Centre Repository were used: NM-302 and NM-300k (Ag: 200 nm rods and 16.7 nm spheres, respectively), NM-200 and NM-203 (SiO₂: 18.3 nm and 24.7 nm amorphous, respectively), NM-100 and NM-101 (TiO₂: 100 nm and 6 nm anatase, respectively), and NM-110 and NM-111 (ZnO: 147 nm and 141 nm, respectively). The aim was to assess the biological effects of these materials on human monoblastoid (U937) cells. Dispersions of NMs were prepared as described in the NANOGENOTOX dispersion protocol, and cells were exposed to NMs at relevant concentrations (2, 10, 20, 50, and 100 µg/mL) for 24 hours. The change in electrical impedance was measured at 0.5, 2, 6, and 12 MHz using the IFC AmphaZ30 (Amphasys AG, Switzerland). A traditional toxicity assay, the Trypan Blue Dye Exclusion assay, and dark-field microscopy were used to validate the IFC method. Results: Spherical Ag particles (NM-300K) showed the highest toxic effect on U937 cells, followed by ZnO (NM-111 ≥ NM-110) particles. Silica particles were moderately toxic to non-toxic at all used concentrations under these conditions. A higher toxic effect was seen with the smaller TiO₂ particles (NM-101) compared to their larger analogues (NM-100). No interferences between the IFC and the used NMs were seen. Uptake and internalization of NMs were observed after 24 hours of exposure, confirming actual NM-cell interactions. Conclusion: Results collected with the IFC demonstrate the applicability of this method for rapid nanotoxicity assessment, and it proved to be less prone to nano-related interference issues than some traditional toxicity assays. Furthermore, this label-free and novel technique shows good potential for up-scaling towards automated high-throughput screening and for future NM toxicity assessment. This work was supported by the EC FP7 NANoREG (Grant Agreement NMP4-LA-2013-310584), the Research Council of Norway, project NorNANoREG (239199/O70), the EuroNanoMed II 'GEMN' project (246672), and the UH-Nett Vest project.

Keywords: cytotoxicity, high-throughput, impedance, nanomaterials

Procedia PDF Downloads 326
25 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB arises when bacteria become resistant to the drugs that are used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and identifying the drug-resistant bacteria and treating the patient accordingly takes a long time. Machine learning (ML) and data science, however, can offer new approaches to the problem. In this study, we propose to develop an ML-based model to predict the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study uses whole-genome sequences (WGS) of TB isolates, extracted from the NCBI repository and comprising samples from different countries, as training data for the ML models. Samples from different countries were included in order to generalize over the large group of TB isolates from different regions of the world; this exposes the model to the varied behaviors of the TB bacteria and makes it robust. Model training considers three types of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug. Two major datasets were constructed using these three types of information: F1 and F2 were treated as two independent datasets, and the third type of information was used as the class label for both. Five machine learning algorithms were considered for training the model: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. The models were trained on the datasets F1, F2, and F1F2, i.e., the F1 and F2 datasets merged. Additionally, an ensemble approach was used to train the model: the F1 and F2 datasets were each run through the Gradient Boosting algorithm, the outputs were combined into a single dataset (the F1F2 ensemble dataset), and models were then trained on this dataset using the five algorithms. As the experiments show, the ensemble-approach model trained with the Gradient Boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
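
A minimal sketch of the two-stage ensemble idea described above, using scikit-learn on synthetic data: one gradient-boosting model per feature set, whose outputs form the "F1F2 ensemble dataset" for a final classifier. The data, feature sizes and the plain train/test split are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative stacked-ensemble sketch on synthetic data; F1/F2 naming
# follows the abstract, everything else is an assumption.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X_f1 = rng.integers(0, 2, size=(n, 40))   # stand-in for candidate-gene variants (F1)
X_f2 = rng.integers(0, 2, size=(n, 15))   # stand-in for known resistance variants (F2)
y = rng.integers(0, 2, size=n)            # resistant vs. susceptible label

X1_tr, X1_te, X2_tr, X2_te, y_tr, y_te = train_test_split(
    X_f1, X_f2, y, test_size=0.2, random_state=0
)

# Stage 1: one gradient-boosting model per feature set
gb1 = GradientBoostingClassifier(random_state=0).fit(X1_tr, y_tr)
gb2 = GradientBoostingClassifier(random_state=0).fit(X2_tr, y_tr)

# Stage 2: combine the two models' probability outputs into the
# "F1F2 ensemble dataset" and train a final classifier on it
Z_tr = np.column_stack([gb1.predict_proba(X1_tr)[:, 1], gb2.predict_proba(X2_tr)[:, 1]])
Z_te = np.column_stack([gb1.predict_proba(X1_te)[:, 1], gb2.predict_proba(X2_te)[:, 1]])
final = RandomForestClassifier(random_state=0).fit(Z_tr, y_tr)
print("held-out accuracy:", final.score(Z_te, y_te))
```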

Keywords: machine learning, MTB, WGS, drug resistant TB

Procedia PDF Downloads 23
24 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime: algorithms based on AI have proven highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that will assist in developing a plausible theory that can be presented as evidence in court; researchers and other authorities have used such data as evidence in court to convict a person. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented using the Java Agent Development Framework within Eclipse, with a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510 items while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
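
To make the agent idea concrete, here is a minimal sketch of what a "Hash Set Agent" might do: hash files under a target directory and flag matches against a known-hash set. This is illustrative only; MADIK itself is Java/JADE-based per the abstract, and the digest below is a hypothetical placeholder.

```python
# Toy hash-set scan: flag files whose SHA-256 digest appears in a known set.
import hashlib
from pathlib import Path

KNOWN_HASHES = {
    # hypothetical SHA-256 digest of a file of interest
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str):
    """Yield (path, digest) for every file matching the known-hash set."""
    for p in Path(root).rglob("*"):
        if p.is_file():
            digest = sha256_of(p)
            if digest in KNOWN_HASHES:
                yield p, digest

for hit, digest in scan("."):
    print(f"match: {hit} ({digest})")
```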

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 178
23 Enhancing African Students’ Learning Experience by Creating Multilingual Resources at a South African University of Technology

Authors: Lisa Graham, Kathleen Grant

Abstract:

South Africa is a multicultural country with eleven official languages, yet most formal education at institutions of higher education in the country is in English. It is well known that many students, irrespective of their home language, struggle to grasp difficult scientific concepts, and the same is true for students enrolled in the Extended Curriculum Programme at the Cape Peninsula University of Technology (CPUT), studying biomedical sciences. Today we are fortunate in that a plethora of resources is available to students to research and better understand subject matter online. For example, students often use YouTube videos to supplement the formal education provided in our course. Unfortunately, most of this material is presented in English. The rationale behind this project is the well-documented finding that students think and grasp concepts more easily in their home language, and the project addresses the fact that the lingua franca of instruction in the field of biomedical science is English. A project is planned to address the lack of available resources in most of the South African languages, in which students studying Bachelor of Health Science in Medical Laboratory Science will collaborate with those studying Film and Video Technology to create educational videos explaining scientific concepts in their home languages. These videos will then be published on our own YouTube channel, making them accessible to fellow students, future students and anybody with an interest in the subject. Research will be conducted to determine the benefit of the project, as well as of the published videos, to the student community. It is expected that the students engaged in making the videos will benefit by gaining a deeper understanding of their course content, a broader appreciation of the discipline, an enhanced sense of civic responsibility, and greater respect for the different languages and cultures in our classes. Indeed, an increase in student engagement has been shown to play a central role in student success, and it is well noted that deeper learning and more innovative solutions emerge in collaborative groups. We aim to make a meaningful contribution towards the production and repository of knowledge in multilingual teaching and learning for the benefit of the diverse student population and staff. This would strengthen language development, multilingualism, and multiculturalism at CPUT and empower and promote African languages as languages of science and education at CPUT, in other institutions of higher learning, and in South Africa as a whole.

Keywords: educational videos, multiculturalism, multilingualism, student engagement

Procedia PDF Downloads 129
22 The Development of an Anaesthetic Crisis Manual for Acute Critical Events: A Pilot Study

Authors: Jacklyn Yek, Clara Tong, Shin Yuet Chong, Yee Yian Ong

Abstract:

Background: While emergency manuals and cognitive aids (CAs) have been used in high-hazard industries for decades, this is a nascent field in healthcare. CAs can potentially offset the large cognitive load involved in crisis resource management and facilitate the efficient performance of key steps in treatment. A crisis manual was developed based on local guidelines and the latest evidence-based information and introduced to a tertiary hospital setting in Singapore. The objective of this study is to evaluate the effectiveness of the crisis manual in guiding the response to and management of critical events. Methods: 7 surgical teams were recruited to participate in a series of simulated emergencies in a high-fidelity operating room simulator over the period of April to June 2018. Each team consisted of a surgical consultant and medical officer/registrar, an anesthesia consultant and medical officer/registrar, as well as a circulating, scrub and anesthetic nurse. Each team performed a simulated operation in which one or more crisis events occurred. The teams were randomly assigned to a scenario from the crisis manual, and all teams were deemed to be equal in experience and knowledge. Before the simulation, teams were instructed on proper checklist use, but use of the checklist was optional. Results: 7 simulation sessions were performed, covering the following scenarios: Airway Fire, Massive Transfusion Protocol, Malignant Hyperthermia, Eclampsia, and Difficult Airway. Of the 7 surgical teams, 2 made use of the crisis manual, both of which had encountered a Malignant Hyperthermia scenario. These team members reflected that the crisis manual allowed them to work as a team, in particular by enabling them to involve the surgical doctors, who were unfamiliar with the condition and its management. A run chart showed a possible upward trend, suggesting that with increasing awareness and training, staff would become more likely to initiate use of the crisis manual. Conclusion: Despite the high volume load in this tertiary hospital, certain crises remain rare and clinicians are often caught unprepared. A crisis manual is an effective tool and an easy-to-use repository that can improve patient outcomes and encourage teamwork. With training, familiarity would allow clinicians to become increasingly comfortable with reaching for the crisis manual. More simulation training needs to be conducted to determine its effectiveness.

Keywords: crisis resource management, high fidelity simulation training, medical errors, visual aids

Procedia PDF Downloads 96
21 An As-Is Analysis and Approach for Updating Building Information Models and Laser Scans

Authors: Rene Hellmuth

Abstract:

Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring of the factory building is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A building information model (BIM) is the planning basis for rebuilding measures and becomes an indispensable data repository for reacting quickly to changes. Its use as a planning basis for restructuring measures in factories only succeeds if the BIM model has adequate data quality. Under this aspect and the industrial requirement, three data quality factors are particularly important for this paper regarding the BIM model: up-to-dateness, completeness, and correctness. The research question is: how can a BIM model be kept up to date with the required data quality, and which visualization techniques can be applied in a short period of time on the construction site during conversion measures? An as-is analysis is made of how BIM models and digital factory models (including laser scans) are currently kept up to date. Industrial companies are interviewed, and expert interviews are conducted. Subsequently, the results are evaluated, and a procedure is conceived for carrying out cost-effective and time-saving updating processes. The availability of low-cost hardware and the simplicity of the process are important to enable service personnel from facility management to keep digital factory models (BIM models and laser scans) up to date. The approach includes the detection of changes to the building, the recording of the changed area, and the insertion into the overall digital twin. Finally, an overview of the possibilities for visualizations suitable for construction sites is compiled. An augmented reality application is created based on an updated BIM model of a factory and installed on a tablet. Conversion scenarios with costs and time expenditure are displayed. A user interface is designed in such a way that all relevant conversion information is available at a glance for the respective conversion scenario. A total of three essential research results are achieved: an as-is analysis of current update processes for BIM models and laser scans, the development of a time-saving and cost-effective update process, and the conception and implementation of an augmented reality solution for BIM models suitable for construction sites.

Keywords: building information modeling, digital factory model, factory planning, restructuring

Procedia PDF Downloads 84
20 MigrationR: An R Package for Analyzing Bird Migration Data Based on Satellite Tracking

Authors: Xinhai Li, Huidong Tian, Yumin Guo

Abstract:

Bird migration is a fantastic natural phenomenon. In recent years, the use of GPS transmitters has generated a vast amount of data, and the Movebank platform has made these data publicly accessible. What researchers need are data analysis tools. Although there are approximately 90 R packages dedicated to animal movement analysis, the capacity for comprehensive processing of bird migration data remains limited. Hence, we introduce a novel package called migrationR. This package enables the calculation of movement speed, direction, changes in direction, flight duration, and daily and annual movement distances. Furthermore, it can pinpoint the starting and ending dates of migration, estimate nest site locations and stopovers, and visualize movement trajectories at various time scales. migrationR distinguishes individuals through NMDS (non-metric multidimensional scaling) coordinates based on movement variables such as speed, flight duration, path tortuosity, and migration timing. A distinctive aspect of the package is the development of a hetero-occurrences species distribution model that takes into account the daily rhythm of individual birds across different landcover types. Habitat use for foraging and roosting differs significantly for many waterbirds. For example, White-naped Cranes at Poyang Lake in China typically forage in croplands and roost in shallow water areas; both of these occurrence types are of equal importance. Optimal habitats consist of a combination of croplands and shallow waters, whereas suboptimal habitats lack both, forcing birds to fly extensively. With migrationR, we conduct species distribution modeling for foraging and roosting separately and use the moving distance between croplands and shallow water areas as an index of overall habitat suitability. This approach offers a more nuanced understanding of the habitat requirements of migratory birds and enhances our ability to analyze and interpret their movement patterns effectively. The functions of migrationR are demonstrated using our own tracking data on 78 White-naped Crane individuals from 2014 to 2023, comprising over one million valid locations in total. migrationR can be installed from a GitHub repository by executing the following command: remotes::install_github("Xinhai-Li/migrationR").
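
To make the core computation concrete, the sketch below shows how daily movement distances reduce to great-circle distances between consecutive GPS fixes. It is a language-neutral illustration in Python with made-up coordinates, not code from the migrationR package (which is written in R).

```python
# Illustrative only: great-circle step distances from GPS fixes, the basic
# quantity behind daily/annual movement distances. Coordinates are invented.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# consecutive daily fixes for one bird (lat, lon)
track = [(47.10, 106.90), (46.55, 107.40), (45.80, 108.10)]
steps = [haversine_km(*a, *b) for a, b in zip(track, track[1:])]
print("daily distances (km):", [round(s, 1) for s in steps])
print(f"total: {sum(steps):.1f} km")
```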

Keywords: bird migration, hetero-occurrences species distribution model, migrationR, R package, satellite telemetry

Procedia PDF Downloads 33
19 Electric Vehicle Fleet Operators in the Energy Market - Feasibility and Effects on the Electricity Grid

Authors: Benjamin Blat Belmonte, Stephan Rinderknecht

Abstract:

The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research dives deep into the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers around a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of the bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations being processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage. Participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose: facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.
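
To make the optimization concrete, below is a deliberately simplified charging/discharging schedule for a single bus battery, formulated with PuLP. Prices, capacities and the model structure are illustrative assumptions, not the authors' model; their fleet-level MILP with balancing-market participation is far richer, and this stripped-down version is in fact a plain LP, since no integer variables remain.

```python
# Toy price-arbitrage schedule for one battery over four time slots.
import pulp

prices = [0.30, 0.12, 0.08, 0.25]  # EUR/kWh per slot (assumed)
cap_kwh, p_max = 300.0, 150.0      # battery capacity, max (dis)charge per slot

prob = pulp.LpProblem("fleet_arbitrage", pulp.LpMaximize)
charge = [pulp.LpVariable(f"c{t}", 0, p_max) for t in range(len(prices))]
discharge = [pulp.LpVariable(f"d{t}", 0, p_max) for t in range(len(prices))]

# objective: revenue from selling minus cost of buying
prob += pulp.lpSum(prices[t] * (discharge[t] - charge[t]) for t in range(len(prices)))

# state of charge stays within battery limits (starts half full)
soc = cap_kwh / 2
for t in range(len(prices)):
    soc = soc + charge[t] - discharge[t]
    prob += soc >= 0
    prob += soc <= cap_kwh

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in range(len(prices)):
    print(t, charge[t].value(), discharge[t].value())
```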

Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market

Procedia PDF Downloads 45
18 The Effectiveness of Psychosocial Interventions for Survivors of Natural Disasters: A Systematic Review

Authors: Santhani M. Selveindran

Abstract:

Background: Natural disasters are traumatic global events that are becoming increasingly common, with significant psychosocial impact on survivors. This impact results not only in psychosocial distress but, for many, can lead to psychosocial disorders and chronic psychopathology. While interventions are available that seek to prevent and treat these psychosocial sequelae, their effectiveness is uncertain. The evidence base is emerging, with more primary studies evaluating the effectiveness of various psychosocial interventions for survivors of natural disasters, and it remains to be synthesized. Aim of Review: To identify, critically appraise and synthesize the current evidence base on the effectiveness of psychosocial interventions in preventing or treating Post-Traumatic Stress Disorder (PTSD), Major Depressive Disorder (MDD) and/or Generalized Anxiety Disorder (GAD) in adults and children who are survivors of natural disasters. Methods: A protocol was developed as a guide for this review. A systematic search was conducted in eight international electronic databases, three grey literature databases, one dissertation and thesis repository, and the websites of six humanitarian and non-governmental organizations renowned for their work on natural disasters, supplemented by bibliographic and citation searching for eligible articles. Papers meeting the specific inclusion criteria underwent quality assessment using the Downs and Black checklist. Data were extracted from the included papers and analysed by way of narrative synthesis. Results: Database and website searching returned 3777 papers, of which 31 met the criteria for inclusion. An additional 2 papers were obtained through bibliographic and citation searching. The methodological quality of most papers was fair. Twenty-five studies evaluated psychological interventions, five evaluated social interventions, and three evaluated 'mixed' psychological and social interventions. All studies, irrespective of methodological quality, reported post-intervention reductions in symptom scores for PTSD, depression and/or anxiety and, where assessed, reduced diagnoses of PTSD and MDD, as well as improvements in self-efficacy and quality of life. Statistically significant results were seen in 27 studies. However, three studies indicated that the evaluated interventions may not have been very beneficial. Conclusions: The overall positive results suggest that psychosocial interventions are favourable and should be delivered to all natural disaster survivors, irrespective of age, country, and phase of disaster. Yet the heterogeneity and methodological shortcomings of the current evidence base make it difficult to draw the definite conclusions needed to formulate categorical guidance or frameworks. Further rigorously conducted research is needed in this area, although the feasibility of such research, given the context and nature of the problem, is also recognized.

Keywords: psychosocial interventions, natural disasters, survivors, effectiveness

Procedia PDF Downloads 125
17 Towards an Environmental Knowledge System in Water Management

Authors: Mareike Dornhoefer, Madjid Fathi

Abstract:

Water supply and water quality are key problems of mankind today and, due to increasing population, in the future. Management disciplines like water, environment and quality management therefore need to interact closely to establish a high level of water quality and to guarantee water supply in all parts of the world. Groundwater remediation is one aspect of this process. From a knowledge management perspective, it is only possible to solve complex ecological or environmental problems if the different factors, the expert knowledge of various stakeholders, and the formal regulations regarding water, waste or chemical management are interconnected in the form of a knowledge base. In general, knowledge management focuses on the processes of gathering and representing existing and new knowledge in a way that allows for inference or deduction of knowledge in, e.g., a situation where a problem solution or decision support is required. A knowledge base is not a mere data repository but a key element in a knowledge-based system, providing inference mechanisms to deduce further knowledge from existing facts. In consequence, this knowledge provides decision support. The given paper introduces an environmental knowledge system in water management. The proposed environmental knowledge system is part of a research concept called Green Knowledge Management. It applies semantic technologies and concepts such as ontologies and linked open data to interconnect different data and information sources about environmental aspects, in this case water quality, as well as background material enriching an established knowledge base. Examples of the aforementioned ecological or environmental factors threatening water quality include industrial pollution (e.g., leakage of chemicals), environmental changes (e.g., rise in temperature) and floods, where all kinds of waste are merged and transferred into natural water environments. Water quality is usually determined by measuring different indicators (e.g., chemical or biological), which are gathered through laboratory testing, continuous monitoring equipment or other measuring processes. During all of these processes, data are gathered and stored in different databases. The knowledge base is established by interconnecting the data of these different sources and enriching their semantics. Experts may add their knowledge or experiences of previous incidents or influencing factors. Querying and inference mechanisms are then applied to deduce connections between indicators, predictive developments or environmental threats. Relevant processes or steps of action may be modeled in the form of a rule-based approach. Overall, the environmental knowledge system supports the interconnection of information and the addition of semantics to create environmental knowledge about the water environment, supply chain and quality. The proposed concept itself is a holistic approach, which links to associated disciplines like environmental and quality management. Quality indicators and quality management steps need to be considered, e.g., for the process and inference layers of the environmental knowledge system, thus integrating the aforementioned management disciplines in one water management application.
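
A minimal sketch of what such a semantic interconnection could look like in practice, using the rdflib library: water-quality measurements are stored as triples, and a rule-like SPARQL query flags sites whose pH leaves an acceptable band. The namespace, properties, threshold and values are invented for illustration and are not part of the proposed system.

```python
# Toy semantic knowledge base: triples plus a rule-like SPARQL query.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/water#")
g = Graph()

measurements = [("siteA", 6.2), ("siteB", 9.1), ("siteC", 8.7)]
for i, (site, ph) in enumerate(measurements):
    m = EX[f"measurement{i}"]
    g.add((m, RDF.type, EX.Measurement))
    g.add((m, EX.site, EX[site]))
    g.add((m, EX.ph, Literal(ph)))

# rule-like query: which sites have a pH outside the 6.5-8.5 band?
q = """
PREFIX ex: <http://example.org/water#>
SELECT ?site ?ph WHERE {
  ?m a ex:Measurement ; ex:site ?site ; ex:ph ?ph .
  FILTER (?ph < 6.5 || ?ph > 8.5)
}
"""
for row in g.query(q):
    print(f"threshold violation at {row.site}: pH {row.ph}")
```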

Keywords: water quality, environmental knowledge system, green knowledge management, semantic technologies, quality management

Procedia PDF Downloads 196
16 The Role of Anti-corruption Clauses in the Fight Against Corruption in Petroleum Sector

Authors: Azar Mahmoudi

Abstract:

Despite the rise of global anti-corruption movements and the strong emergence of international and national anti-corruption laws, corrupt practices are still prevalent in most places, and countries still struggle to translate these laws into practice. Moreover, in most countries, political and economic elites oppose anti-corruption reforms. In such a situation, the role of external actors, such as other states, international organizations, and transnational actors, becomes essential. Among them, transnational corporations (TNCs) can develop their own regime-like framework to govern their internal activities and, through this, contribute to the regimes established by state actors to solve transnational issues. Among various regimes, TNCs may choose to comply with the transnational anti-corruption legal regime to avoid the cost of non-compliance with anti-corruption laws. As a result, they decide to strengthen their anti-corruption compliance as they expand into new overseas markets. Such a decision extends anti-corruption standards among their employees and third-party agents and within their projects across countries. To better address the challenges posed by corruption, TNCs have adopted a comprehensive anti-corruption toolkit. Among the various instruments, anti-corruption clauses have become one of the most widely used anti-corruption tools in international commercial agreements. Anti-corruption clauses, acting as a due diligence tool, can protect TNCs against the engagement of third-party agents in corrupt practices and further promote anti-corruption standards among businesses operating across countries. An anti-corruption clause allows parties to create a contractual commitment to exclude corrupt practices during the term of their agreement, including all levels of negotiation and implementation. Such a clause offers companies a mechanism to reduce the risk of potential corruption in their dealings with third parties while avoiding civil and administrative penalties. There have been few attempts to examine the role of anti-corruption clauses in the fight against corruption; therefore, this paper aims to fill this gap and examine anti-corruption clauses in a specific sector where corrupt practices are widespread and endemic, i.e., the petroleum industry. This paper argues that anti-corruption clauses are a positive step towards ensuring that the petroleum industry operates in an ethical and transparent manner, helping to reduce the risk of corruption and promote integrity in this sector. Contractual anti-corruption clauses vary in the types of commitment they contain, so parties have a wide range of options to choose from when incorporating their preferred clauses into their contracts. This paper proposes a categorization of anti-corruption clauses in the petroleum sector. In particular, it examines the anti-corruption clauses incorporated in transnational hydrocarbon contracts published by the Resource Contract Portal, an online repository of extractive contracts. The paper then offers a quantitative assessment of anti-corruption clauses according to type of contract, date of conclusion, and geographical distribution.

Keywords: anti-corruption, oil and gas, transnational corporations, due diligence, contractual clauses, hydrocarbon, petroleum sector

Procedia PDF Downloads 95
15 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive samples are an alternative to collecting genetic samples directly: they are collected without manipulation of the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, which leads to poorer extraction efficiency and genotyping errors. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms stand out as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity was observed between the genotypes matched by Colony and Cervus, which is not a surprise given the similarity between those methods' pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods. The different clustering system and error model of ETLM seem to lead to a more stringent selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset, with consensus between the different estimators for only one dataset. BayesN gave both higher and lower estimates than Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, i.e., different capture rates between individuals. In these examples, homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems the most appropriate for general use, considering the balance of time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
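
As a toy illustration of what error-tolerant genotype matching means in practice (a simplified sketch of the general idea, not the actual algorithm of Cervus, Colony or ETLM), two multilocus genotypes can be declared a match when they disagree at no more than a set number of loci, which absorbs dropout and genotyping errors:

```python
# Toy error-tolerant matching of multilocus genotypes.
def loci_mismatches(g1, g2):
    """Count loci where the unordered allele pairs differ."""
    return sum(frozenset(a) != frozenset(b) for a, b in zip(g1, g2))

def is_same_individual(g1, g2, max_mismatch=1):
    """Declare a match if at most max_mismatch loci disagree."""
    return loci_mismatches(g1, g2) <= max_mismatch

# genotypes as allele pairs per locus (hypothetical microsatellite data)
sample1 = [(120, 124), (98, 98), (150, 154), (200, 204)]
sample2 = [(120, 124), (98, 102), (150, 154), (200, 204)]  # one dropout-like error

print(loci_mismatches(sample1, sample2))     # -> 1
print(is_same_individual(sample1, sample2))  # -> True
```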

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 115
14 Predictors of Motor and Cognitive Domains of Functional Performance after Rehabilitation of Individuals with Acute Stroke

Authors: A. F. Jaber, E. Dean, M. Liu, J. He, D. Sabata, J. Radel

Abstract:

Background: Stroke is a serious health care concern and a major cause of disability in the United States. This condition impacts an individual's functional ability to perform daily activities. Predicting the functional performance of people with stroke assists health care professionals in optimizing the delivery of health services to the affected individuals. The purpose of this study was to identify significant predictors of Motor FIM and Cognitive FIM subscores among individuals with stroke after discharge from inpatient rehabilitation (typically 4-6 weeks after stroke onset). A second purpose was to explore the relationships among personal characteristics, health status, and functional performance of daily activities within 2 weeks of stroke onset. Methods: This study used a retrospective chart review to conduct a secondary analysis of data obtained from the Healthcare Enterprise Repository for Ontological Narration (HERON) database. The HERON database integrates de-identified clinical data from seven regional sources, including the hospital electronic medical record systems of the University of Kansas Health System. The initial HERON data extract encompassed 1192 records, and the final sample consisted of 207 participants who were mostly white (74%) males (55%) with a diagnosis of ischemic stroke (77%). The outcome measures collected from HERON included performance scores on the National Institutes of Health Stroke Scale (NIHSS), the Glasgow Coma Scale (GCS), and the Functional Independence Measure (FIM). The data analysis plan included descriptive statistics, Pearson correlation analysis, and stepwise regression analysis. Results: Significant predictors of discharge Motor FIM subscores included age, baseline Motor FIM subscores, discharge NIHSS scores, and comorbid electrolyte disorder (R² = 0.57, p < 0.026). Significant predictors of discharge Cognitive FIM subscores were age, baseline Cognitive FIM subscores, client cooperative behavior, comorbid obesity, and the total number of comorbidities (R² = 0.67, p < 0.020). Functional performance on admission was significantly associated with age (p < 0.01), stroke severity (p < 0.01), and length of hospital stay (p < 0.05). Conclusions: Our findings show that younger age, good motor and cognitive abilities on admission, mild stroke severity, fewer comorbidities, and a positive client attitude all predict favorable functional outcomes after inpatient stroke rehabilitation. This study provides health care professionals with evidence to evaluate predictors of favorable functional outcomes early in stroke rehabilitation, to tailor individualized interventions based on their client's anticipated prognosis, and to educate clients about the benefits of making lifestyle changes to improve their anticipated rate of functional recovery.
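
For readers unfamiliar with stepwise regression, the sketch below shows a simple forward-selection loop on synthetic data. The variable names echo the abstract, but the data, coefficients and cross-validated selection criterion are illustrative assumptions, not the study's actual procedure.

```python
# Forward-stepwise predictor selection on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 207
X = {
    "age": rng.normal(65, 12, n),
    "baseline_motor_fim": rng.normal(40, 10, n),
    "discharge_nihss": rng.normal(6, 3, n),
    "n_comorbidities": rng.poisson(2, n).astype(float),
}
y = (80 - 0.3 * X["age"] + 0.8 * X["baseline_motor_fim"]
     - 1.5 * X["discharge_nihss"] + rng.normal(0, 5, n))

selected, remaining = [], list(X)
best_score = -np.inf
while remaining:
    # score each candidate predictor added to the current set
    scores = {}
    for var in remaining:
        cols = np.column_stack([X[v] for v in selected + [var]])
        scores[var] = cross_val_score(LinearRegression(), cols, y, cv=5).mean()
    var, score = max(scores.items(), key=lambda kv: kv[1])
    if score <= best_score + 1e-4:   # stop when no meaningful improvement
        break
    selected.append(var)
    remaining.remove(var)
    best_score = score

print("selected predictors:", selected)
```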

Keywords: functional performance, predictors, stroke, recovery

Procedia PDF Downloads 119
13 Web-Based Instructional Program to Improve Professional Development: Recommendations and Standards for Radioactive Facilities in Brazil

Authors: Denise Levy, Gian M. A. A. Sordi

Abstract:

This web-based project focuses on continuing corporate education and improving workers' skills in Brazilian radioactive facilities throughout the country. The potential of Information and Communication Technologies (ICTs) can contribute to improving global communication in this very large country, where ensuring high-quality professional information for as many people as possible is a strong challenge. The main objective of this system is to provide Brazilian radioactive facilities with a complete web-based repository - in Portuguese - for research, consultation and information, offering conditions for learning and improving professional and personal skills. UNIPRORAD is a web-based system offering unified programs and interrelated information about radiological protection programs. The content includes the best practices for radioactive facilities to meet both national standards and international recommendations published by different organizations over the past decades: the International Commission on Radiological Protection (ICRP), the International Atomic Energy Agency (IAEA) and the National Nuclear Energy Commission (CNEN). The website includes concepts, definitions and theory on optimization and ionizing radiation monitoring procedures. Moreover, the content presents further discussions of some national and international recommendations, such as potential exposure, which is currently one of the most important research fields in radiological protection. Only two ICRP publications develop the issue in depth, and there is still a lack of knowledge of failure probabilities, as uncertainties remain in finding effective ways to quantify probabilistically the occurrence of potential exposures and the probability of reaching a certain dose level. To respond to this challenge, this project discusses and introduces potential exposures in a more quantitative way than national and international recommendations do. Articulating valid ICRP and IAEA recommendations and official reports, in addition to scientific papers published at major international congresses, the website discusses and suggests a number of effective actions towards safety which can be incorporated into labor practice. The web platform was created according to corporate public needs, taking into account the development of a robust but flexible system that can be easily adapted to future demands. ICTs provide a vast array of new communication capabilities, allowing information to be spread to as many people as possible at low cost and with high communication quality. This initiative shall provide opportunities for employees to increase professional skills, stimulating development in this large country, where it is an enormous challenge to ensure effective and updated information to geographically distant facilities while minimizing costs and optimizing results.

Keywords: distance learning, information and communication technology, nuclear science, radioactive facilities

Procedia PDF Downloads 171
12 A Geoprocessing Tool for Early Civil Work Notification to Optimize Fiber Optic Cable Installation Cost

Authors: Hussain Adnan Alsalman, Khalid Alhajri, Humoud Alrashidi, Abdulkareem Almakrami, Badie Alguwaisem, Said Alshahrani, Abdullah Alrowaished

Abstract:

Most of the cost of installing a new fiber optic cable is attributed to the civil work (trenching) cost. In many cases, information technology departments receive project proposals in their eReview system, but not all projects are visible to everyone. Additionally, if there is no IT scope in a proposed project, the project is not likely to be visible to IT, and sometimes it is too late to add IT scope after project budgets have been finalized. Finally, the eReview system is a repository of PDF files for each project, which commits the reviewer to manual work and limits automation potential. This paper details a solution that addresses the late notification in the eReview system by integrating IT Sites GIS data (site locations) with land use permit (LUP) data (civil work activity); securing an LUP is the first step before obtaining the required land usage authorizations, which means there are no detailed designs for any relevant project before an approved LUP request. To address the manual nature of the eReview system, both the LUP system and the IT data are handled in ArcGIS Desktop, which enables the creation of a geoprocessing tool, with either Python or Model Builder, to automate finding and evaluating potentially usable LUP requests to reduce trenching between two sites in need of a new FOC. To achieve this, a weekly dump was taken from LUP system production data and loaded manually into ArcMap Desktop. A custom tool was then developed in Model Builder, driven by a two-column table containing all the pairs of sites in need of new fiber connectivity. The tool iterates over all rows of this table, taking one site pair at a time and finding potential LUPs between them that satisfy the provided search radius. If a group of LUPs is found, an iterator goes through each LUP to find the required civil work between the two sites, the LUP polyline feature, and the distance along the line, which is counted as cost avoidance if an IT scope is added. Finally, the tool exports an Excel file named after the site pair, containing as many rows as there are LUPs that met the search radius, with trenching and pulling information and cost. As a result, multiple projects have been identified: historical, missed-opportunity, and proposed projects. For the proposed project, the savings were about 75% ($750,000) for installing a new fiber along the Euclidean distance between the Abqaiq GOSP2 and GOSP3 DCOs. In conclusion, the current tool setup identifies opportunities to bundle civil work on single projects at a time and between two sites. More work is needed to allow the bundling of multiple projects between two sites to achieve even more cost avoidance in both capital cost and carbon footprint.
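
The core spatial query can be sketched with open-source tools as follows. This is an illustrative re-implementation of the logic, not the actual ArcGIS Model Builder tool; the file name, coordinates, CRS and search radius are assumptions.

```python
# Sketch: find LUP polylines within a search radius of the straight line
# between two sites, and measure the usable trench length.
import geopandas as gpd
from shapely.geometry import LineString, Point

SEARCH_RADIUS_M = 500  # assumed search radius

site_a = Point(300000, 2900000)   # hypothetical projected coordinates (m)
site_b = Point(312000, 2905000)
corridor = LineString([site_a, site_b]).buffer(SEARCH_RADIUS_M)

# LUP requests as a polyline layer (hypothetical file in the same CRS)
lups = gpd.read_file("lup_requests.shp")

# candidate LUPs whose trenching route crosses the corridor
candidates = lups[lups.intersects(corridor)].copy()
candidates["usable_length_m"] = candidates.geometry.intersection(corridor).length

# usable trench length translates to avoided civil-work cost if IT scope is added
print(candidates[["usable_length_m"]].sort_values("usable_length_m", ascending=False))
```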

Keywords: GIS, fiber optic cable installation optimization, eliminate redundant civil work, reduce carbon footprint for fiber optic cable installation

Procedia PDF Downloads 196
11 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis

Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy

Abstract:

Cervical cancer (CC) is the second most common cancer among women living in low- and middle-income countries, with no associated symptoms during its formative stages. With advancing and innovative medical research, numerous preventive measures are being utilized, but the incidence of cervical cancer cannot be curbed by the application of screening tests alone. The mortality associated with invasive cervical cancer can be nipped in the bud through early-stage detection. This study selected an array of top feature-selection techniques with the aim of developing a model that could validly identify the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients at Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of HIV-associated cervical cancer cases, while the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was implemented with the R and Python programming languages, utilizing classification algorithms for the detection and diagnosis of cervical cancer. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The data were split into training and testing sets in an 80:20 ratio; the numerical features were standardized, and hyperparameter tuning was carried out to train and test the machine learning models. Fitting features for the detection and diagnosis of cervical cancer were selected from the characteristics in the dataset, using the contributions of various selection methods, for the classification of cervical cancer into healthy or diseased status. The mean age of patients was 49.7±12.1 years, mean age at pregnancy 23.3±5.5 years, mean age at first sexual experience 19.4±3.2 years, and mean BMI 27.1±5.6 kg/m². A larger percentage of the patients were married (62.9%), and most of them had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001**), marital status (OR=0.375, p=0.011**), number of pregnancy live-births (OR=1.317, p=0.007**), and use of birth control pills (OR=0.291, p=0.015**) were found to be significantly associated with HIV-associated cervical cancer. On the top 10 features considered in the analysis, RF gave the best overall model performance, with an accuracy of 72.0%, precision of 84.6%, recall of 84.6% and F1-score of 74.0%, while LR achieved an accuracy of 74.0%, precision of 70.0%, recall of 70.0% and F1-score of 70.0%. The RF model identified 10 features predictive of developing cervical cancer: the age of patients was the most important risk factor, followed by the number of pregnancy live-births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries.
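
A compact sketch of the modeling pipeline the abstract describes (80:20 split, standardized numerical features, hyperparameter-tuned random forest), run here on synthetic stand-in data; the column semantics mirror the abstract's variables, but the values, labels and parameter grid are assumptions.

```python
# Synthetic stand-in for the described RF pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 178
X = np.column_stack([
    rng.normal(49.7, 12.1, n),   # age
    rng.normal(23.3, 5.5, n),    # age at pregnancy
    rng.normal(27.1, 5.6, n),    # BMI
    rng.integers(0, 2, n),       # birth control pills (yes/no)
])
y = rng.integers(0, 2, n)        # diseased vs. healthy (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

pipe = Pipeline([("scale", StandardScaler()),
                 ("rf", RandomForestClassifier(random_state=42))])
grid = GridSearchCV(pipe, {"rf__n_estimators": [100, 300],
                           "rf__max_depth": [3, 5, None]}, cv=5)
grid.fit(X_train, y_train)
print("test accuracy:", grid.score(X_test, y_test))
```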

Keywords: associated cervical cancer, data mining, random forest, logistic regression

Procedia PDF Downloads 54
10 Incorporating Spatial Transcriptome Data into Ligand-Receptor Analyses to Discover Regional Activation in Cells

Authors: Eric Bang

Abstract:

Interactions between receptors and ligands are crucial for many essential biological processes, including neurotransmission and metabolism. Ligand-receptor analyses that examine cell behavior and interactions often utilize cell type-specific RNA expression from single-cell RNA sequencing (scRNA-seq) data. Using CellPhoneDB, a public repository of ligands, receptors, and ligand-receptor interactions, cell-cell interactions were explored in a specific scRNA-seq dataset from kidney tissue, and the results were portrayed with dot plots and heat maps. Depending on the type of cell, each ligand-receptor pair was aligned with the interacting cell type, and the probabilities of these associations were calculated, with corresponding p-values reflecting average expression values between the triads and their significance. Using single-cell data (sample kidney cell references), genes in the dataset were cross-referenced with those in the existing CellPhoneDB dataset; for example, a gene such as Pleiotrophin (PTN) present in the single-cell data also needed to be present in the CellPhoneDB dataset. Using the single-cell transcriptomics data via Slide-seq together with reference data, the CellPhoneDB program defines cell types and plots them in different formats, the two main ones being dot plots and heat maps. The dot plot displays derived measures of the cell-cell interaction scores and p-values: each row shows a ligand-receptor pair, and each column shows the two interacting cell types. CellPhoneDB infers interactions and interaction levels from gene expression levels, and since the p-value is on a -log10 scale, larger dots represent more significant interactions. By performing an interaction analysis, a significant interaction was discovered for myeloid and T-cell ligand-receptor pairs, including those between Secreted Phosphoprotein 1 (SPP1) and Fibronectin 1 (FN1), which is consistent with previous findings. It was proposed that an effective protocol would involve a filtration step in which cell types are filtered out depending on which ligand-receptor pair is activated in that part of the tissue, as well as the incorporation of the CellPhoneDB data into a streamlined workflow pipeline. The filtration step would take the form of a Python script that expedites the manual process necessary for dataset filtration; being in Python allows it to be integrated with the CellPhoneDB dataset for future workflow analysis. The manual process involves filtering cell types based on which ligand/receptor pair is activated in kidney cells. One limitation is that some pairings are activated in multiple cells at a time, so manual manipulation of the data is required prior to analysis. With the filtration script, accurate sorting is incorporated into the CellPhoneDB workflow rather than waiting until the output is produced and then applying spatial data afterwards. It is envisioned that this will reveal where in the tissue various ligands and receptors interact with different cell types, allowing easier identification of which cells are being impacted and why, for the purpose of disease treatment. The hope is that this new computational method, utilizing spatially explicit ligand-receptor association data, can be used to uncover previously unknown specific interactions within kidney tissue.
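
A minimal sketch of what the proposed filtration script could look like: it keeps only rows of a CellPhoneDB-style results table whose interacting pair involves a cell type of interest and whose interaction is significant. The table, column names and cutoff are simplified assumptions, not CellPhoneDB's exact output format.

```python
# Toy filtration of ligand-receptor results by cell type and significance.
import pandas as pd

CELL_TYPES_OF_INTEREST = {"myeloid", "T-cell"}
P_CUTOFF = 0.05  # assumed significance threshold

# hypothetical long-format table: one row per ligand-receptor pair and
# interacting cell-type pair
results = pd.DataFrame({
    "interacting_pair": ["SPP1_FN1", "SPP1_FN1", "PTN_PTPRZ1"],
    "cell_type_a": ["myeloid", "podocyte", "fibroblast"],
    "cell_type_b": ["T-cell", "endothelial", "T-cell"],
    "p_value": [0.001, 0.20, 0.03],
})

mask = (
    results["p_value"] < P_CUTOFF
) & (
    results["cell_type_a"].isin(CELL_TYPES_OF_INTEREST)
    | results["cell_type_b"].isin(CELL_TYPES_OF_INTEREST)
)
filtered = results[mask]
print(filtered)
```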

Keywords: bioinformatics, ligands, kidney tissue, receptors, spatial transcriptome

Procedia PDF Downloads 117
9 Neurodiversity in Postgraduate Medical Education: A Rapid Solution to Faculty Development

Authors: Sana Fatima, Paul Sadler, Jon Cooper, David Mendel, Ayesha Jameel

Abstract:

Background: Neurodiversity refers to intrinsic differences between human minds and encompasses dyspraxia, dyslexia, attention deficit hyperactivity disorder, dyscalculia, autism spectrum disorder, and Tourette syndrome. There is increasing recognition of neurodiversity in relation to disability/diversity in medical education and its impact on training, career progression, and personal and professional wellbeing. In addition, documented and anecdotal evidence suggests that medical educators and training providers in all four UK nations are increasingly concerned about understanding neurodiversity and about identifying and providing support for neurodivergent trainees. Summary of Work: A national Neurodiversity Task and Finish group was established to survey Health Education England local office Professional Support teams for insights into infrastructure, training for educators, triggers for assessment, resources, and intervention protocols. This group drew on educational leadership, professional and personal neurodiverse expertise, occupational medicine, employer human resources, and trainees. An online exploratory survey was conducted to gather insights from supervisors and trainers across England using the Professional Support Units' platform. Summary of Results: This survey highlighted marked heterogeneity in the identification, assessment, and approaches to support and management of neurodivergent trainees, revealed a 'deficit' approach to neurodiversity, and demonstrated a paucity of educational and protocol resources for educators and supervisors supporting neurodivergent trainees. Discussion and Conclusions: In phase one, we focused on faculty development. An educational repository for all those supervising trainees was formalised using a thematic approach. It was guided by our survey findings specific to neurodiversity and took a triple 'A' approach: awareness, assessment, and action. This is further supported by video material incorporating stories in training, as well as mobile workshops for trainers for more immersive learning. A subtle theme from both the survey and the Task and Finish group suggested a move away from deficit-focused methods toward a positive, holistic, interdisciplinary approach within a biopsychosocial framework. Contributions: 1. Faculty knowledge and a basic understanding of neurodiversity are key to supporting trainees with known or underlying neurodiverse conditions; this is further complicated by challenges around non-disclosure, varied presentations, stigma, and intersectionality. 2. There is national (and international) inconsistency in how trainees are managed once a neurodiverse condition is suspected or diagnosed. 3. A carefully constituted and focussed Task and Finish group can rapidly identify national inconsistencies in neurodiversity support and implement rapid educational interventions. 4. Nuanced findings from surveys and discussion can reframe the approach to neurodiversity from a medical model to a more comprehensive, asset-based, biopsychosocial model of support, fostering a cultural shift and accepting 'diversity' in all its manifestations, visible and hidden.

Keywords: neurodiversity, professional support, human considerations, workplace wellbeing

Procedia PDF Downloads 69
8 Comprehensive Analysis of RNA m5C Regulator ALYREF as a Suppressive Factor of Anti-tumor Immunity and a Potential Tumor Prognostic Marker in Pan-Cancer

Authors: Yujie Yuan, Yiyang Fan, Hong Fan

Abstract:

Objective: The RNA methylation recognition protein Aly/REF export factor (ALYREF) is considered a "reader" protein that recognises m5C and has been reported to be involved in several biological processes, including cancer initiation and progression. 5-methylcytosine (m5C) is a conserved and prevalent RNA modification in all species, and accumulating evidence suggests a role for it in promoting tumorigenesis. It has been claimed that ALYREF mediates the nuclear export of mRNA with m5C modification and regulates the biological effects of cancer cells. However, the systematic regulatory pathways of ALYREF in cancer tissues have not yet been clarified. Methods: The expression level of ALYREF in pan-cancer and corresponding normal tissues was compared using data acquired from The Cancer Genome Atlas (TCGA). The University of Alabama at Birmingham Cancer Data Analysis Portal (UALCAN) was used to analyze the relationship between ALYREF and clinicopathological features. The relationship between the expression level of ALYREF and the prognosis of pan-cancer, as well as the genes correlated with ALYREF, were determined using the Gene Expression Profiling Interactive Analysis database (GEPIA). Immune-related genes were obtained from TISIDB (an integrated repository portal for tumor-immune system interactions). Immune-related analyses were conducted using Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data (ESTIMATE) and TIMER. Results: Based on the data acquired from TCGA, ALYREF shows markedly higher expression in various types of cancer than in the corresponding normal tissues, excluding thyroid carcinoma and kidney chromophobe. Immunohistochemical images on The Human Protein Atlas show that ALYREF can be detected in the cytoplasm and membrane but is mainly located in the nucleus. In addition, a higher expression level of ALYREF in tumor tissue is associated with a poor prognosis in the majority of cancers. According to the above results, cancers with a higher expression level of ALYREF compared with normal tissues and a significant correlation between ALYREF and prognosis were selected for further analysis. Using TISIDB, we found that a portion of ALYREF co-expressed genes (such as BIRC5, H2AFZ, CCDC137, TK1, and PPM1G) with high Pearson correlation coefficients (PCC) are involved in anti-tumor immunity or affect resistance or sensitivity to T cell-mediated killing. Furthermore, the results acquired from GEPIA showed a significant correlation between ALYREF and PD-L1, and a negative correlation was found between the expression level of ALYREF and the ESTIMATE score. Conclusion: The present study indicates that ALYREF plays a vital and universal role in cancer initiation and progression across pan-cancer by regulating mitotic progression, DNA synthesis, metabolic processes, and RNA processing. The correlation between ALYREF and PD-L1 implies that ALYREF may affect the therapeutic effect of tumor immunotherapy. Further evidence suggests that ALYREF may play an important role in tumor immunomodulation, and the correlation between ALYREF and immune cell infiltration levels indicates that ALYREF could be a potential therapeutic target. Exploring the regulatory mechanism of ALYREF in tumor tissues may expose reasons for the poor efficacy of immunotherapy and offer further directions for tumor treatment.
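
The study performed the co-expression screen through the web portals named above; the sketch below only illustrates the underlying Pearson-correlation step on a locally downloaded, TCGA-style expression matrix. The file name and gene list are assumptions for illustration.

```python
# Sketch: Pearson correlation between ALYREF and candidate genes,
# assuming "expr.tsv" holds a TCGA-style matrix (rows = genes,
# columns = samples). CD274 is the gene encoding PD-L1.
import pandas as pd
from scipy.stats import pearsonr

expr = pd.read_csv("expr.tsv", sep="\t", index_col=0)
candidates = ["BIRC5", "H2AFZ", "CCDC137", "TK1", "PPM1G", "CD274"]

alyref = expr.loc["ALYREF"]
for gene in candidates:
    r, p = pearsonr(alyref, expr.loc[gene])
    print(f"ALYREF vs {gene}: PCC = {r:.2f}, p = {p:.2e}")
```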

Keywords: ALYREF, pan-cancer, immunotherapy, PD-L1

Procedia PDF Downloads 39
7 Exploring Empathy Through Patients’ Eyes: A Thematic Narrative Analysis of Patient Narratives in the UK

Authors: Qudsiya Baig

Abstract:

Empathy yields unparalleled therapeutic value within patient-physician interactions. Medical research is replete with evidence that a physician's ability to empathise with patients leads to a greater willingness to report symptoms, improved diagnostic accuracy and safety, and better adherence to and satisfaction with treatment plans. Furthermore, the Institute of Medicine states that empathy leads to more patient-centred care, which is one of the six main goals of a 21st-century health system. However, there is a paradox between the theoretical significance of empathy and its presence, or lack thereof, in clinical practice. Recent studies have reported that empathy declines amongst students and physicians over time. The three most impactful contributors to this decline are: (1) disagreements over the definition of empathy, making it difficult to implement in practice; (2) poor consideration or regulation of empathy, leading to burnout and thus abandonment altogether; and (3) the lack of diversity in the curriculum and the influence of medical culture, which prioritises science over patient experience, deterring some physicians from using 'too much' empathy for fear of losing clinical objectivity. These issues were investigated by conducting a fully inductive thematic narrative analysis of patient narratives in the UK to evaluate the behaviours and attitudes that patients associate with empathy. The principal enquiries underpinning this study included uncovering the factors that affect the experience of empathy within provider-patient interactions and analysing their effects on patient care. This research contributes uniquely to this discourse by examining the phenomenon of empathy directly from patients' experiences, which were systematically extracted from a repository of online patient narratives of care titled 'CareOpinion UK'. Narrative analysis was specifically chosen as the methodology to examine narratives through a phenomenological lens, focusing on the particularity and context of each story. By enquiring beyond the superficial who-what-where, the study of narratives ascribes meaning to illness by highlighting the everyday reality of patients who face the exigent life circumstances created by suffering, disability, and the threat to life. The following themes were found to be the most impactful in influencing the experience of empathy: dismissive behaviours, judgmental attitudes, undermining of patients' pain or concerns, holistic care, and failures and successes of communication or language. For each theme there were overarching themes relating to either a failure to understand the patient's perspective or a success in taking a person-centred approach. An in-depth analysis revealed that a lack of empathy was strongly associated with an emotive-cognitive imbalance, which disengaged physicians from their patients' emotions. This study concludes that competent providers require a combination of knowledge, skills, and, more importantly, empathic attitudes to help create a context for effective care. The crucial elements of that context involve (a) identifying empathy cues within interactions to engage with patients' situations, (b) attributing a perspective to the patient through perspective-taking, and (c) adapting behaviour and communication according to patients' individual needs. Empathy underpins that context, as does an appreciation of narrative, and the two are interrelated.

Keywords: empathy, narratives, person-centred, perspective, perspective-taking

Procedia PDF Downloads 95
6 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an efficient and accurate approach to voxelizing the surfaces of triangular meshes. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times: these repeated voxels incur costly memory operations that contribute no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability: written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate voxelizations free of 26-tunnels. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insight into the impact of triangle orientation on scan-line-based voxelization methods and aids in understanding how the Gap Detection technique improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods: the Gap Detection technique fills a critical gap in the voxelization process, and by addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
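
To make the scan-line idea concrete, here is a naive CPU sketch of sweeping equidistant scan-lines across one triangle. Unlike the paper's GLSL method, which spaces the lines so that each voxel is visited exactly once and closes holes with Gap Detection, this sketch oversamples and deduplicates with a set; step sizes and the example triangle are illustrative.

```python
# Sketch: emit voxel coordinates touched by equidistant scan-lines
# swept across a triangle's interior (simplified, not the paper's
# single-visit GPU algorithm).
import numpy as np

def voxelize_triangle(v0, v1, v2, voxel=1.0):
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    voxels = set()
    # Space scan-lines finely enough that consecutive lines are
    # less than one voxel apart.
    n_lines = int(max(np.linalg.norm(v1 - v0),
                      np.linalg.norm(v2 - v0)) / (0.5 * voxel)) + 1
    for i in range(n_lines + 1):
        t = i / n_lines
        a = v0 + t * (v1 - v0)   # start of this scan-line
        b = v0 + t * (v2 - v0)   # end of this scan-line
        n_steps = int(np.linalg.norm(b - a) / (0.5 * voxel)) + 1
        for j in range(n_steps + 1):
            p = a + (j / n_steps) * (b - a)
            voxels.add(tuple(np.floor(p / voxel).astype(int)))
    return voxels

print(len(voxelize_triangle((0, 0, 0), (8, 0, 0), (0, 8, 4))))
```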

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 41
5 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulty meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all of the above-mentioned issues and helps organizations improve efficiency and deliver faster without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while repetitive work and manual effort are reduced. Implementing scalable CI/CD for development using cloud services like ECS (Elastic Container Service), AWS Fargate, ECR (Elastic Container Registry, to store Docker images with all dependencies), serverless computing (serverless virtual machines), cloud logging (for monitoring errors and logs), security groups (to control inbound and outbound access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments, accommodate dynamic workloads, and increase efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as the pipeline scales on demand. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding the detection of race conditions and performance issues while reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources that are used and increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
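
A minimal sketch of the parallel-test fan-out on Fargate is shown below, using the boto3 ECS client; the cluster, task definition, subnet, and container names are placeholders, and result collection and error handling are omitted. This is one way to realize the pattern described above, not the authors' pipeline.

```python
# Sketch: split a test suite into batches and launch one Fargate
# task per batch, so batches execute in parallel.
import boto3

ecs = boto3.client("ecs")

def run_tests_in_parallel(test_files, batch_size=20):
    batches = [test_files[i:i + batch_size]
               for i in range(0, len(test_files), batch_size)]
    for batch in batches:
        ecs.run_task(
            cluster="ci-cluster",                    # placeholder
            taskDefinition="automation-tests:1",     # placeholder
            launchType="FARGATE",
            networkConfiguration={"awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder
                "assignPublicIp": "ENABLED",
            }},
            overrides={"containerOverrides": [{
                "name": "test-runner",               # placeholder
                # Each task runs only its own slice of the suite.
                "command": ["pytest", *batch],
            }]},
        )
    return len(batches)
```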

Keywords: parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 9
4 Librarian Liaisons: Facilitating Multi-Disciplinary Research for Academic Advancement

Authors: Tracey Woods

Abstract:

In the ever-evolving landscape of academia, the traditional role of the librarian has undergone a remarkable transformation. Once considered custodians of books and gatekeepers of information, librarians have the potential to take on the vital role of facilitators of cross- and inter-disciplinary projects. This shift is driven by the growing recognition of the value of interdisciplinary collaboration in addressing complex research questions in pursuit of novel solutions to real-world problems. This paper explores the potential of the academic librarian's role in facilitating innovative, multi-disciplinary projects, both recognising and validating the vital role that the librarian plays in a somewhat underplayed profession. Academic libraries support teaching, the strengthening of knowledge discourse, and, potentially, the development of innovative practices. As the role of the library gradually morphs from a quiet repository of books to a community-based information hub, a potential opportunity arises. The academic librarian's role is to build knowledge across a wide span of topics, from the advancement of AI to subject-specific information, and, whilst librarians are generally not offered the research opportunities and funding that the traditional academic disciplines enjoy, they are often invited to help build research in support of the academic. This suggests that one of the primary skills of any 21st-century librarian must be the ability to collaborate on and facilitate multi-disciplinary projects. In universities seeking to develop research diversity and academic performance, there is an increasing awareness of the need for collaboration between faculties to enable novel directions and advancements. This idea has been documented and discussed by several researchers; however, there is not a great deal of literature available from recent studies. Having a team based in the library that is adept at creating effective collaborative partnerships is valuable for any academic institution. This paper outlines the development of such a project, initiated within and around an identified library-specific need: the replication of fragile special collections for object-based learning. The research was developed as a multi-disciplinary project involving the faculties of engineering (digital twins lab), architecture, design, and education. Centred on methods for developing a fragile archive into a series of tactile objects, the project furthers knowledge and understanding of the library's role as a facilitator of projects, chairing and supporting, while also contributing to the research process and generating ideas through the bank of knowledge found amongst the staff and their liaising capabilities. This paper presents the method of project development from the initiation of ideas through the development of prototypes to the dissemination of the objects to teaching departments for analysis. The exact replication of artefacts is also balanced with the adaptations and evolutionary speculations introduced by the design team when the work is adapted as a teaching studio method. The dynamic response required from the library to generate and facilitate these multi-disciplinary projects highlights the information expertise and liaison skills that the librarian possesses. As academia embraces this evolution, the potential for groundbreaking discoveries and innovative solutions across disciplines becomes increasingly attainable.

Keywords: liaison librarian, multi-disciplinary collaborations, library innovations, librarian stakeholders

Procedia PDF Downloads 34
3 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative of World Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. As a result, humans aboard aircraft are exposed to elevated levels of galactic radiation because of the altitude. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates of the contribution induced by certain solar particle events differ by an order of magnitude. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on the Earth's surface. Analyses of the GLEs occurring since 1942 show that, for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was of the order of 10 mSv/h; the extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) that describes extensive air shower characteristics and allows the ambient dose equivalent to be assessed. In this approach, the GCR description is based on the force-field approximation model, while the physical description of the Solar Cosmic Ray (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rate of secondary particles induced by extensive showers, considering altitudes ranging from ground level to 45 km; the ambient dose equivalent can then be determined using fluence-to-ambient-dose-equivalent conversion coefficients. The objective of this paper is to analyze GCR and SCR impacts on ambient dose equivalent using flight-path statistics representative of world traffic. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
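
For readers unfamiliar with the force-field approximation named above, the sketch below evaluates the standard modulated proton spectrum J(T) = J_LIS(T + Φ) · T(T + 2E₀) / [(T + Φ)(T + Φ + 2E₀)], using one commonly cited local interstellar spectrum (LIS) parameterization (Burger et al. 2000 / Usoskin et al. 2005); ATMORAD's exact inputs may differ, and the modulation potential used here is illustrative.

```python
# Sketch: force-field approximation for the galactic proton spectrum.
import numpy as np

E0 = 0.938  # proton rest energy, GeV

def lis(T):
    """Local interstellar proton spectrum (m^-2 s^-1 sr^-1 GeV^-1)
    for kinetic energy T in GeV (Burger/Usoskin parameterization)."""
    P = np.sqrt(T * (T + 2 * E0))  # rigidity in GV for protons
    return 1.9e4 * P**-2.78 / (1 + 0.4866 * P**-2.51)

def modulated(T, phi):
    """Spectrum at 1 AU for modulation potential phi in GV
    (Z/A = 1 for protons, so Phi = phi numerically)."""
    return lis(T + phi) * T * (T + 2 * E0) / ((T + phi) * (T + phi + 2 * E0))

T = np.logspace(-1, 2, 5)       # 0.1 - 100 GeV
print(modulated(T, phi=0.65))   # phi ~ quiet-Sun conditions (illustrative)
```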

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 187
2 Encapsulated Bioflavonoids: Nanotechnology Driven Food Waste Utilization

Authors: Niharika Kaushal, Minni Singh

Abstract:

Citrus fruits are among the commercially grown fruits that constitute an excellent repository of phytochemicals with health-promoting properties. When fruits of the citrus family are processed by industry, tons of agricultural by-products are produced in the form of peels, pulp, and seeds, which normally have no further use and are commonly discarded. Such residues are nevertheless of paramount importance due to their richness in valuable compounds; therefore, agro-waste is considered a valuable bioresource for various purposes in the food sector. A range of biological properties, including anti-oxidative, anti-cancer, anti-inflammatory, anti-allergenic, and anti-aging activity, has been reported for these bioactive compounds. Taking advantage of these inexpensive residual sources requires special attention to the extraction of bioactive compounds. Mandarin (Citrus nobilis X Citrus deliciosa) is a potential source of bioflavonoids with antioxidant properties, and it is increasingly regarded as a functional food. Despite these benefits, flavonoids face the barrier of pre-systemic metabolism in gastric fluid, which impedes their effectiveness; colloidal delivery systems can overcome this barrier. This study involved the extraction and identification of key flavonoids from mandarin biomass. Using a green chemistry approach, supercritical fluid extraction was employed at 330 bar and 40 °C with 10% ethanol as co-solvent, and the flavonoids were identified by mass spectrometry. To address the limitation noted above, the obtained extract was encapsulated in a poly(lactic-co-glycolic acid) (PLGA) matrix using a solvent evaporation method. Additionally, the antioxidant potential was evaluated by the 2,2-diphenylpicrylhydrazyl (DPPH) assay, and the release pattern of the flavonoids was observed over time in simulated gastrointestinal fluids. From the results, the total flavonoids extracted from the mandarin biomass were estimated at 47.3 ± 1.06 mg/ml rutin equivalents. Notably, the polymethoxyflavones (PMFs) tangeretin and nobiletin were identified in the extract, followed by hesperetin and naringin. The designed flavonoid-PLGA nanoparticles exhibited a particle size of 200-250 nm. In addition, the bioengineered nanoparticles had a high entrapment efficiency of nearly 80.0% and maintained stability for more than a year. The flavonoid nanoparticles showed excellent antioxidant activity, with an IC50 of 0.55 μg/ml. Morphological studies revealed the smooth and spherical shape of the nanoparticles, as visualized by field emission scanning electron microscopy (FE-SEM). Simulated gastrointestinal studies of the free extract and the nanoencapsulated form revealed that, for the free extract, nearly half of the flavonoids degraded under harsh acidic conditions. After encapsulation, the flavonoids exhibited sustained release, suggesting that polymeric encapsulates are efficient carriers of flavonoids. Thus, such technology-driven, biomass-derived products form the basis for the development of functional foods with improved therapeutic potential and antioxidant properties. As a result, citrus processing waste can be considered a new, high-value resource whose utilization should be promoted.
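
For context, an IC50 from a DPPH assay is typically estimated by converting absorbances to percent inhibition and interpolating the concentration at 50% inhibition. The sketch below illustrates that arithmetic; the absorbance values and concentrations are invented for illustration and are not the study's data.

```python
# Sketch: estimate IC50 from DPPH absorbance readings by linear
# interpolation between the concentrations bracketing 50% inhibition.
import numpy as np

def inhibition(a_control, a_sample):
    """Percent DPPH radical scavenging."""
    return (a_control - a_sample) / a_control * 100

conc = np.array([0.1, 0.25, 0.5, 0.75, 1.0])      # ug/ml (illustrative)
a_ctrl = 0.90                                     # control absorbance
a_smp = np.array([0.80, 0.66, 0.47, 0.33, 0.22])  # sample absorbances
inh = inhibition(a_ctrl, a_smp)

# np.interp requires inhibition to increase with concentration,
# which holds for these values.
ic50 = np.interp(50.0, inh, conc)
print(f"IC50 ~ {ic50:.2f} ug/ml")
```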

Keywords: citrus, agrowaste, flavonoids, nanoparticles

Procedia PDF Downloads 71
1 Kanga Traditional Costume as a Tool for Community Empowerment in Tanzania in an Ubuntu Perspective - A Literature Review

Authors: Meinrad Haule Lembuka

Abstract:

Introduction: Ubuntu culture represents African humanism, with a collective and positive feeling of people living together in interdependence, equality, and peace. Over time, Ubuntu culture has developed a variety of communicative strategies to express experiences, feelings, and knowledge. The khanga or kanga (a garment) is among the Ubuntu cultural practices of the Bantu-speaking people along the East African coast, where interaction between Arabs and Bantu-speaking people formed Swahili culture. Kanga is a Swahili word for a traditional soft cotton cloth produced in a variety of colours, patterns, and styles, which has deep cultural, historical, and social significance not only in Tanzania but along the rest of the East African coast. Swahili culture is a subculture of Ubuntu African culture, rich in customs and rituals that serve to preserve goodness and life; Tanzania, like the rest of the East African societies along the Indian Ocean coast, engaged in the kanga dressing custom under Swahili culture to express feelings and share knowledge. After the independence of Tanzania (formerly Tanganyika) from British colonial rule, kanga traditional dressing gained momentum in Swahili culture and spread to the rest of East Africa and beyond. To date, kanga dressing holds a firm position as a formal and informal tool for advocacy for marginalised groups, counselling, psychosocial therapy, liberation, compassion, love, justice, campaigning, and celebration. Methodology: A literature review method, guided by Ubuntu theory, was used to assess the implications of kanga traditional dressing for empowering the Tanzanian community. Findings: During slavery, slaves wore Kaniki, and people despised Kaniki dressing because of its association with slavery. Ex-slave women seeking to become part of Swahili society began to decorate their Kaniki clothes. After slavery was abolished in 1897, kangas began to be used for self-empowerment and to indicate that the wearer had personal wealth. During the colonial era, freedom of expression for Africans was restricted by the colonial masters; thus, Tanzanians used kanga to expose the evils of colonialism and other social problems. Under the Ubuntu values of unity and solidarity, liberation and independence fighters crafted mottos and liberation messages that were shared and spread rapidly in the community. Political parties like TANU used kanga to spread nationalism and the Ujamaa policy. Kanga is more than a piece of fabric: it is a space for women to voice what is otherwise unspeakable and a women-centred repository for indigenous knowledge, feminisms addressing social ills, happiness, campaigns, memories, and reconciliation. Kanga provides an indirect voice that supports vulnerable and marginalised populations, and it has proved to be a peaceful platform for capturing the attention of government and society. Kanga textiles gained increased international fame when an Obama kanga design was produced upon the president's election in 2008 and his visit to Tanzania in 2013. Conclusion: Kanga preserves and symbolises Swahili culture and contributes to the realization of social justice, inclusion, national identity, and unity. As an inclusive cultural tool, kanga has spread across Africa to the international community, and the practice has moved from being a women-dominated dress code to one adopted by others.

Keywords: African culture, kanga, khanga, Swahili culture, Ubuntu

Procedia PDF Downloads 37