Search results for: common space
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8900


3380 Comparative Study Using WEKA for Red Blood Cells Classification

Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are the most common type of blood cell and among the most intensively studied in cell biology. A deficiency of RBCs, in which the hemoglobin level is lower than normal, is referred to as "anemia". Abnormalities in RBCs affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA, an open-source suite of machine learning algorithms for data mining applications. The algorithms tested are the radial basis function neural network, the support vector machine, and the k-nearest neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether a tested blood cell is spherical or non-spherical, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best classification rates achieved are 97%, 98%, and 79% for the support vector machine, the radial basis function neural network, and the k-nearest neighbors algorithm, respectively.
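The geometric-feature stage described above can be illustrated with a minimal k-nearest-neighbors sketch. This is not the authors' WEKA pipeline: the two features (cell area and circularity), their values, and the labels are invented for illustration.

```python
import numpy as np

# Hypothetical geometric features per cell: [area (px), circularity (0-1)].
# Values are invented; the paper's features came from real RBC images in WEKA.
X_train = np.array([
    [410, 0.95], [395, 0.93], [420, 0.96],   # spherical cells
    [300, 0.60], [280, 0.55], [310, 0.65],   # non-spherical cells
])
y_train = np.array([0, 0, 0, 1, 1, 1])       # 0 = spherical, 1 = non-spherical

def knn_predict(x, X, y, k=3):
    """Classify x by majority vote among its k nearest training points."""
    d = np.linalg.norm(X - x, axis=1)        # Euclidean distance to each cell
    nearest = y[np.argsort(d)[:k]]           # labels of the k closest cells
    return np.bincount(nearest).argmax()

print(knn_predict(np.array([400, 0.94]), X_train, y_train))  # -> 0 (spherical)
```

A real pipeline would extract these features from segmented cell images and compare k-NN against the SVM and RBF network, as the abstract does.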

Keywords: K-nearest neighbors algorithm, radial basis function neural network, red blood cells, support vector machine

Procedia PDF Downloads 396
3379 A Systematic Literature Review on the Prevalence of Academic Plagiarism and Cheating in Higher Educational Institutions

Authors: Sozon, Pok Wei Fong, Sia Bee Chuan, Omar Hamdan Mohammad

Abstract:

Owing to the widespread phenomenon of plagiarism and cheating in higher education institutions (HEIs), it has become difficult to ensure academic integrity and quality education. Moreover, the COVID-19 pandemic has intensified the issue by shifting educational institutions into virtual teaching and assessment modes. There is therefore a need for an extensive and holistic systematic review of the literature highlighting the prevalence and forms of plagiarism and cheating in HEIs. This paper systematically reviews the literature on academic plagiarism and cheating in HEIs to determine the most common forms and to suggest strategies for resolution and for boosting the academic integrity of students. The review included 45 articles and publications from the Scopus database for the period from February 12, 2018, to September 12, 2022, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines in the selection, filtering, and reporting of the papers under review. Based on the results, 48% of the reviewed quantitative studies found student work that was plagiarized or obtained through cheating, with 84% coming from the Humanities; Psychology and the Social Sciences accounted for 9% and 7% of the articles, respectively. Individual, institutional, and social and cultural factors have all contributed to plagiarism and cheating cases in HEIs. The issue can be addressed through ethical and moral development initiatives and modern academic policies and guidelines supported by technological testing strategies.

Keywords: plagiarism, cheating, systematic review, academic integrity

Procedia PDF Downloads 50
3378 Academic Success, Problem-Based Learning and the Middleman: The Community Voice

Authors: Isabel Medina, Mario Duran

Abstract:

Although problem-based learning (PBL) provides students with multiple opportunities for rigorous instructional experiences in which they are challenged to address problems in the community, there are still gaps in connecting community leaders to the PBL process. At a South Texas high school, community participation serves as an integral component of the PBL process. PBL has recently gained momentum due to the increase in global communities that value collaboration and critical thinking. As an instructional approach, PBL engages high school students in meaningful learning experiences. Furthermore, PBL focuses on providing students with a connection to real-world situations that require effective peer collaboration. For PBL leaders, providing students with a meaningful process is as important as the final PBL outcome. To achieve this goal, STEM High School strategically created a space for community involvement to be woven into the PBL fabric. This study examines the impact community members had on PBL students attending a STEM high school in South Texas. At STEM High School, community members represent a support system that works through the PBL process to ensure students receive real-life mentoring from business and industry leaders situated in the community. A phenomenological study using a semi-structured approach was used to collect data about students' perceptions of community involvement within the PBL process at one South Texas high school. In our proposed presentation, we will discuss how community involvement in the PBL process academically impacted the educational experience of high school students at STEM High School. We address the instructional concerns PBL critics have with the lack of direct instruction by providing a representation of how STEM High School utilizes community members to help shape the academic experience of students.

Keywords: phenomenological, STEM education, student engagement, community involvement

Procedia PDF Downloads 78
3377 The Combined Effect of Methane and Methanol on Growth and PHB Production in the Alphaproteobacterial Methanotroph Methylocystis Sp. Rockwell

Authors: Lazic Marina, Sugden Scott, Sharma Kanta Hem, Sauvageau Dominic, Stein Lisa

Abstract:

Methane is a highly potent greenhouse gas mostly released through anthropogenic activities. It is also a low-cost and sustainable feedstock for the biological production of value-added compounds by bacteria known as methanotrophs. In addition to methane, these organisms can utilize methanol, another cheap carbon source that is a common industrial by-product. Alphaproteobacterial methanotrophs can utilize both methane and methanol to produce the biopolymer polyhydroxybutyrate (PHB). The goal of this study was to examine the effect of methanol on PHB production in Methylocystis sp. Rockwell and to identify the optimal methane:methanol ratio that improves PHB production without reducing biomass. Three methane:methanol ratios (4, 2.5, and 0.5) and three concentrations (10 mM, 1 mM, and 0.1 mM) of two nitrogen sources (ammonium or nitrate) were combined to generate 18 growth conditions (nine per nitrogen source). PHB and biomass production were analyzed at the end of growth. Overall, the methane:methanol ratios that promoted PHB synthesis without reducing biomass were 4 and 2.5, and the optimal nitrogen concentration was 1 mM for both ammonium and nitrate. The physiological mechanism behind the beneficial effect of combining methane and methanol as carbon sources remains to be discovered. One possibility is that methanol has a dual role, acting as a carbon source at lower concentrations and as a stringent-response trigger at higher concentrations. Nevertheless, the beneficial effect of methanol and the optimal nitrogen concentration for PHB production were confirmed, providing a basis for future physiological analysis and conditions for process scale-up.
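The factorial design described above (three methane:methanol ratios crossed with three concentrations of two nitrogen sources) can be enumerated directly. A small sketch, with the values taken from the abstract:

```python
from itertools import product

# Experimental design as described in the abstract:
# 3 methane:methanol ratios x 3 N concentrations x 2 nitrogen sources
ratios = [4, 2.5, 0.5]                  # methane:methanol
concentrations_mM = [10, 1, 0.1]        # nitrogen concentration
nitrogen_sources = ["ammonium", "nitrate"]

conditions = list(product(nitrogen_sources, ratios, concentrations_mM))
print(len(conditions))  # 18 growth conditions, nine per nitrogen source
```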

Keywords: methane, methanol, methanotrophs, polyhydroxybutyrate, Methylocystis sp. Rockwell, single carbon bioconversions

Procedia PDF Downloads 149
3376 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer

Authors: Mahya Naghipoor

Abstract:

Purpose: Non-small cell lung cancer (NSCLC) is the most common type of lung cancer. Epidermal growth factor receptor (EGFR) mutation is a main cause of NSCLC. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and minimal invasiveness. Semantic analysis of qualitative CT features is based on visual evaluation by a radiologist; however, the naked eye may not capture all image features. Radiomics, on the other hand, provides the opportunity for quantitative analysis of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: The keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic curve (ROC), and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUC values from the ROC analyses of semantic and radiomics features were compared. Results: The reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema, and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD), and wavelet), the AUC values were 50%-86%. Conclusions: The accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.
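The AUC being compared here is a rank statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with invented feature scores (not data from the reviewed papers):

```python
# AUC as the fraction of (positive, negative) pairs where the positive case
# scores higher; ties count 0.5. Pure Python, no libraries needed.
def auc(pos_scores, neg_scores):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# e.g. a hypothetical radiomics feature in EGFR-mutant vs wild-type tumours
mutant    = [0.9, 0.8, 0.7, 0.6]
wild_type = [0.5, 0.65, 0.4, 0.3]
print(round(auc(mutant, wild_type), 3))  # 0.938
```

An AUC of 0.5 means the feature is no better than chance; the 41%-86% ranges quoted above are values of exactly this statistic.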

Keywords: lung cancer, radiomics, computer tomography, mutation

Procedia PDF Downloads 149
3375 Neo-Adjuvant B-CAT Chemotherapy in Triple Negative Breast Cancer

Authors: Muneeb Nasir, Misbah Masood, Farrukh Rashid, Abubabakar Shahid

Abstract:

Introduction: Neo-adjuvant chemotherapy is a potent option for triple negative breast cancer (TNBC), as these tumours lack a clearly defined therapeutic target. Several recent studies lend support to the view that pathological complete remission (pCR) is associated with improved disease-free survival (DFS) and overall survival (OS) and could be used as a surrogate marker for DFS and OS in breast cancer patients. Methods: We used a four-drug protocol in T3 and T4 TNBC patients, either N+ or N-, in the neo-adjuvant setting. The 15 patients enrolled in this study had a median age of 45 years; 12 patients went on to complete the four planned cycles of the B-CAT protocol. The chemotherapy regimen included inj. bevacizumab 5 mg/kg on D1, inj. adriamycin 50 mg/m2 on D1, docetaxel 65 mg/m2 on D1, and inj. cisplatin 60 mg/m2 on D2. All patients received G-CSF support from D4 to D9 of each cycle. Results: Radiological assessment using ultrasound and PET-CT revealed a high percentage of responses. Radiological CR was documented in half of the patients (6/12) after four cycles; the remaining patients went on to receive two more cycles before undergoing radical surgery. pCR was documented in 7/12 patients, and 3 more had a good partial response. The regimen was toxic, and grade 3/4 neutropenia was seen in 58% of patients. Four episodes of febrile neutropenia were reported and managed. Non-hematological toxicities were common, with mucositis, diarrhea, asthenia, and neuropathy topping the list. Conclusion: B-CAT is a very active combination with very high pCR rates in TNBC. Toxicities, though frequent, were manageable on an outpatient basis. This protocol warrants further investigation.
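For readers unfamiliar with the dosing notation, the mg/m2 doses above are scaled by body surface area (BSA), commonly estimated with the Mosteller formula, while bevacizumab is dosed per kilogram. The sketch below computes per-cycle doses for a hypothetical patient; it is illustrative only, not clinical guidance, and the patient's height and weight are invented.

```python
import math

def mosteller_bsa(height_cm, weight_kg):
    """Body surface area in m^2, Mosteller formula: sqrt(h * w / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600)

def bcat_doses(height_cm, weight_kg):
    """Per-cycle doses for the four B-CAT drugs as dosed in the abstract."""
    bsa = mosteller_bsa(height_cm, weight_kg)
    return {
        "bevacizumab_mg": 5 * weight_kg,   # 5 mg/kg, day 1
        "adriamycin_mg": 50 * bsa,         # 50 mg/m^2, day 1
        "docetaxel_mg": 65 * bsa,          # 65 mg/m^2, day 1
        "cisplatin_mg": 60 * bsa,          # 60 mg/m^2, day 2
    }

print(bcat_doses(160, 65))  # hypothetical 160 cm, 65 kg patient
```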

Keywords: B-CAT: bevacizumab, cisplatin, adriamycin, taxotere; CR: complete response; pCR: pathological complete response; TNBC: triple negative breast cancer

Procedia PDF Downloads 248
3374 Effect of Cellular Water Transport on Deformation of Food Material during Drying

Authors: M. Imran Hossen Khan, M. Mahiuddin, M. A. Karim

Abstract:

Drying is a food processing technique in which simultaneous heat and mass transfer take place from the surface to the center of the sample. Deformation of food materials during drying is a common physical phenomenon that affects the textural quality and taste of the dried product. Most plant-based food materials are porous and hygroscopic in nature and contain about 80-90% water, distributed across different cellular environments: the intercellular environment and the intracellular environment. Transport of this cellular water has a significant effect on material deformation during drying. However, understanding the scale of deformation is very complex due to the diverse nature and structural heterogeneity of food materials. Knowledge of the effect of cellular water transport on material deformation during drying is crucial for increasing energy efficiency and obtaining better-quality dried foods. Therefore, the primary aim of this work is to investigate the effect of intracellular water transport on material deformation during drying. In this study, apple tissue was taken for the investigation. The experiment was carried out using 1H-NMR T2 relaxometry with a conventional dryer. The experimental results are consistent with the understanding that transport of intracellular water causes cellular shrinkage associated with the anisotropic deformation of whole apple tissue. Interestingly, it is found that the deformation of apple tissue takes place at different stages of drying rather than all at one time. Moreover, it is found that the penetration rate of heat energy, together with the pressure gradient between the intracellular and intercellular environments, is the force responsible for rupturing the cell membrane.

Keywords: heat and mass transfer, food material, intracellular water, cell rupture, deformation

Procedia PDF Downloads 208
3373 Enabling Oral Communication and Accelerating Recovery: The Creation of a Novel Low-Cost Electroencephalography-Based Brain-Computer Interface for the Differently Abled

Authors: Rishabh Ambavanekar

Abstract:

Expressive aphasia (EA) is an oral disability, common among stroke victims, in which the Broca's area of the brain is damaged, interfering with verbal communication abilities. EA currently has no technological solution, and its existing viable treatments are inefficient or available only to the affluent. This prompts the need for an affordable, innovative solution to facilitate recovery and assist in speech generation. This project proposes a novel concept: using a wearable, low-cost electroencephalography (EEG) device-based brain-computer interface (BCI) to translate a user's inner dialogue into words. A low-cost EEG device was developed and found to be 10 to 100 times less expensive than any current EEG device on the market. As part of the BCI, a machine learning (ML) model was developed and trained using the EEG data. Two stages of testing were conducted to analyze the effectiveness of the device: a proof-of-concept test and a final solution test. The proof-of-concept test demonstrated an average accuracy above 90%, and the final solution test demonstrated an average accuracy above 75%. These two successful tests demonstrate the viability of BCI research in developing lower-cost verbal communication devices. Additionally, the device not only enables users to communicate verbally but also has the potential to assist in accelerated recovery from the disorder.
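The classification stage of such a BCI can be sketched with synthetic data. The block below trains a nearest-centroid classifier on invented "band-power" features for two classes; it is a simple stand-in for the project's (unspecified) ML model, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "EEG band-power" feature vectors for two imagined words.
# Entirely invented data; the project trained its model on real EEG signals.
n, d = 100, 8
X0 = rng.normal(0.0, 1.0, (n, d))        # class 0
X1 = rng.normal(1.0, 1.0, (n, d))        # class 1, shifted mean
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Shuffled train/test split, then a nearest-centroid classifier.
idx = rng.permutation(2 * n)
train, test = idx[:150], idx[150:]
c0 = X[train][y[train] == 0].mean(axis=0)   # class-0 centroid
c1 = X[train][y[train] == 1].mean(axis=0)   # class-1 centroid
pred = (np.linalg.norm(X[test] - c1, axis=1)
        < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
accuracy = (pred == y[test]).mean()
print(accuracy)
```

Reporting held-out accuracy in this way mirrors the proof-of-concept and final-solution evaluations described in the abstract.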

Keywords: neurotechnology, brain-computer interface, neuroscience, human-machine interface, BCI, HMI, aphasia, verbal disability, stroke, low-cost, machine learning, ML, image recognition, EEG, signal analysis

Procedia PDF Downloads 106
3372 Migration in Times of Uncertainty

Authors: Harman Jaggi, David Steinsaltz, Shripad Tuljapurkar

Abstract:

Understanding the effect of fluctuations on populations is crucial in the context of increasing habitat fragmentation, climate change, and biological invasions, among other pressures. Migration in response to environmental disturbances enables populations to escape unfavorable conditions, benefit from new environments, and thereby ride out fluctuations in variable environments. But would populations disperse if there were no uncertainty? Karlin showed in 1982 that when sub-populations experience distinct but fixed growth rates at different sites, greater mixing of populations lowers the overall growth rate relative to the most favorable site. Here we ask if and when environmental variability favors migration over no migration. Specifically, in random environments, would a small amount of migration increase the overall long-run growth rate relative to the zero-migration case? We use analysis and simulations to show how the long-run growth rate changes with migration rate. Our results show that when fitness (dis)advantages fluctuate over time across sites, migration may allow populations to benefit from variability. When there is one best site with the highest growth rate, the effect of migration on the long-run growth rate depends on the difference in expected growth between sites, scaled by the variance of the difference. When the variance is large, there is a substantial probability of an inferior site experiencing a higher growth rate than its average; thus, a high variance can compensate for a difference in average growth rates between sites. Positive correlations in growth rates across sites favor less migration. With multiple sites and large fluctuations, the length of the shortest cycle (excursion) from the best site (on average) matters, and we explore the interplay between excursion length, average differences between sites, and the size of fluctuations. Our findings have implications for conservation biology: even when there are superior sites in a sea of poor habitats, variability and habitat quality across space may be key to determining the importance of migration.
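The core claim, that fluctuating fitness advantages across sites can make a little migration beneficial, can be illustrated with a toy two-site model. Here the environment alternates deterministically (a simplified stand-in for the random environments analysed in the abstract), each site's own geometric-mean growth is exactly 1, and a small symmetric migration fraction raises the long-run log growth rate above zero:

```python
import numpy as np

# Two sites whose good and bad years alternate out of phase; each site alone
# has geometric-mean growth 2.0 * 0.5 = 1, i.e. long-run log growth 0.
D_even = np.diag([2.0, 0.5])   # even years: site A good, site B bad
D_odd  = np.diag([0.5, 2.0])   # odd years: reversed

def long_run_growth(m, years=2000):
    """Average log growth rate with symmetric migration fraction m."""
    M = np.array([[1 - m, m], [m, 1 - m]])
    pop = np.array([0.5, 0.5])
    log_growth = 0.0
    for t in range(years):
        pop = M @ ((D_even if t % 2 == 0 else D_odd) @ pop)
        total = pop.sum()
        log_growth += np.log(total)
        pop /= total               # renormalise to avoid overflow
    return log_growth / years

print(long_run_growth(0.0))   # ~0: isolated populations just break even
print(long_run_growth(0.1))   # >0: a little migration exploits the fluctuation
```

This also matches the Karlin result quoted above: if the diagonal growth matrices were fixed in time, mixing could never beat the best site, so the benefit here comes entirely from the fluctuations.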

Keywords: migration, variable-environments, random, dispersal, fluctuations, habitat-quality

Procedia PDF Downloads 127
3371 Destination Decision Model for Cruising Taxis Based on Embedding Model

Authors: Kazuki Kamada, Haruka Yamashita

Abstract:

In Japan, taxis are a popular mode of transportation and the taxi industry is a major business. In recent years, however, the industry has faced the difficult problem of a declining number of taxi drivers. In the taxi business, three main methods of finding passengers are used. The first is "cruising", in which a driver picks up passengers while driving along a road. The second is "waiting", in which a driver waits for passengers near places with high demand for taxis, such as the entrances of hospitals and train stations. The third is "dispatching", in which a taxi is allocated based on a request to the taxi company. Of these, cruising taxi drivers need experience and intuition to find passengers, and it is difficult to decide on a destination for cruising. A strong recommendation system for cruising taxis would support new drivers in finding passengers and could be a solution to the decreasing number of drivers in the taxi industry. In this research, we propose a method for recommending a destination to cruising taxi drivers. As a machine learning technique, embedding models, which map high-dimensional data into a low-dimensional space, are widely used in data analysis to represent the relationships of meaning between data points clearly. Taxi drivers have favorite courses based on their experiences, and these courses differ for each driver. We assume that the course of a cruising taxi carries meaning, such as a course for finding businessman passengers (going around the business areas of the city or to main stations) or a course for finding traveler passengers (going around sightseeing places or big hotels), and we extract the meaning of their destinations. We analyze the cruising history data of taxis based on the embedding model and propose a recommendation system for finding passengers. Finally, we demonstrate the recommendation of destinations for cruising taxi drivers through an analysis of real-world data using the proposed method.
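As a toy illustration of the embedding idea, the sketch below builds a co-occurrence matrix over consecutive destination zones in hypothetical cruising courses and factorizes it with a truncated SVD, a simple count-based embedding model. The zone names and courses are invented; the paper's system is trained on real cruising histories with its own embedding model.

```python
import numpy as np

# Hypothetical cruising "courses" as sequences of destination zones.
courses = [
    ["office_district", "main_station", "office_district", "hotel_row"],
    ["main_station", "office_district", "main_station", "office_district"],
    ["sights", "hotel_row", "sights", "hotel_row"],
    ["hotel_row", "sights", "hotel_row", "main_station"],
]
zones = sorted({z for c in courses for z in c})
index = {z: i for i, z in enumerate(zones)}

# Symmetric co-occurrence counts of consecutive zones.
C = np.zeros((len(zones), len(zones)))
for course in courses:
    for a, b in zip(course, course[1:]):
        C[index[a], index[b]] += 1
        C[index[b], index[a]] += 1

# Truncated SVD gives each zone a low-dimensional embedding vector;
# zones visited in similar contexts end up with similar vectors.
U, S, _ = np.linalg.svd(C)
embeddings = U[:, :2] * S[:2]
for z in zones:
    print(z, np.round(embeddings[index[z]], 2))
```

A recommender could then suggest, for a driver's current zone, the nearest zones in this embedding space.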

Keywords: taxi industry, decision making, recommendation system, embedding model

Procedia PDF Downloads 126
3370 A 3D Bioprinting System for Engineering Cell-Embedded Hydrogels by Digital Light Processing

Authors: Jimmy Jiun-Ming Su, Yuan-Min Lin

Abstract:

Bioprinting has been applied to produce 3D cellular constructs for tissue engineering, with microextrusion printing the most commonly used method. However, printing low-viscosity bioinks is a challenge for this method. Herein, we developed a new 3D printing system to fabricate cell-laden hydrogels via a DLP-based projector. The bioprinter is assembled from affordable equipment, including a stepper motor, a screw, an LED-based DLP projector, and open-source computer hardware and software. The system can use low-viscosity, photo-polymerized bioinks to fabricate 3D tissue mimics in a layer-by-layer manner. In this study, we used gelatin methacrylate (GelMA) as the bioink for stem cell encapsulation. In order to reinforce the printed construct, surface-modified hydroxyapatite was added to the bioink. We demonstrated that silanization of the hydroxyapatite improves crosslinking at the interface between hydroxyapatite and GelMA. The results showed that the incorporation of silanized hydroxyapatite into the bioink enhanced the mechanical properties of the printed hydrogel; in addition, the hydrogel had low cytotoxicity and promoted the differentiation of embedded human bone marrow stem cells (hBMSCs) and retinal pigment epithelium (RPE) cells. Moreover, this bioprinting system can generate microchannels inside the engineered tissues to facilitate the diffusion of nutrients. We believe this 3D bioprinting system has the potential to fabricate various tissues for clinical applications and regenerative medicine in the future.

Keywords: bioprinting, cell encapsulation, digital light processing, GelMA hydrogel

Procedia PDF Downloads 161
3369 Cable De-Commissioning of Legacy Accelerators at CERN

Authors: Adya Uluwita, Fernando Pedrosa, Georgi Georgiev, Christian Bernard, Raoul Masterson

Abstract:

CERN is an international organisation of 23 member states that provides the particle physics community with excellence in particle accelerators and other related facilities. Founded in 1954, CERN has a wide range of accelerators that allow groundbreaking science to be conducted. Accelerators bring particles to high levels of energy and make them collide with each other or with fixed targets, creating specific conditions that are of high interest to physicists. A chain of accelerators is used to ramp up the energy of the particles and eventually inject them into the largest and most recent machine: the Large Hadron Collider (LHC). Among this chain of machines is, for instance, the Proton Synchrotron, which was started in 1959 and is still in operation. These machines, called "injectors", keep evolving over time, as does the related infrastructure. Massive decommissioning of obsolete cables started at CERN in 2015 within the framework of the so-called "injectors de-cabling project phase 1". Its goal was to replace aging cables and remove unused ones, freeing space for new cables necessary for upgrade and consolidation campaigns. To proceed with the de-cabling, a project coordination team was assembled. The start of this project led to the investigation of legacy cables throughout the organisation, and the identification of cables stacked over half a century proved to be arduous. Phase 1 of the injectors de-cabling ran for three years and was completed successfully after overcoming some difficulties. Phase 2, started three years later, focused on improving safety and structure with the introduction of a quality assurance procedure. This paper discusses the implementation of this quality assurance procedure throughout phase 2 of the project and the transition between the two phases. Hundreds of kilometres of cable were removed in the injectors complex at CERN from 2015 to 2023.

Keywords: CERN, de-cabling, injectors, quality assurance procedure

Procedia PDF Downloads 21
3368 From Conflicts to Synergies between Mitigation and Adaptation Strategies to Climate Change: The Case of Lisbon Downtown 2010-2030

Authors: Nuno M. Pereira

Abstract:

In the last thirty years, European cities have been addressing global climate change and its local impacts by implementing mitigation and adaptation strategies. Lisbon Downtown is no exception, with ten plans under implementation since 2010, scheduled for completion in 2030 and representing 1 billion euros of public investment. However, the gap between mitigation and adaptation strategies is not yet sufficiently studied, nor are its nuances: vulnerability and risk mitigation, resilience and adaptation. In Lisbon Downtown, these plans are being implemented separately, thereby compromising the effectiveness of the public investment. The research reviewed the common ground of mitigation and adaptation strategies in the theoretical framework and analyzed the current urban development actions in Lisbon Downtown in order to identify potential conflicts and synergies. The empirical fieldwork, supported by a sounding board of experts, was carried out over two years, and the results suggest that the largest public investment in Lisbon on flooding mitigation will conflict with the new cruise ship terminal and the old Downtown building stock, thereby increasing risk and vulnerability factors. The study concludes that the Lisbon Downtown blue infrastructure plan should be redesigned in some areas in a transdisciplinary and holistic approach, and that the current theoretical framework on climate change should focus more on mitigation and adaptation synergies articulating the gray, blue, and green infrastructures, combining old knowledge tested by resilient communities with new knowledge emerging from the digital era.

Keywords: adaptation, climate change, conflict, Lisbon Downtown, mitigation, synergy

Procedia PDF Downloads 184
3367 Analysis of Shallow Foundation Using Conventional and Finite Element Approach

Authors: Sultan Al Shafian, Mozaher Ul Kabir, Khondoker Istiak Ahmad, Masnun Abrar, Mahfuza Khanum, Hossain M. Shahin

Abstract:

For the structural evaluation of a shallow foundation, the modulus of subgrade reaction is one of the most widely used and accepted parameters because of its ease of calculation. To determine this parameter, the most common field method is the plate load test. In this test, the subgrade modulus is determined for a specific location and, in application, it is assumed that the displacement occurring at one location does not affect adjacent locations. Such assumptions sometimes force engineers to overdesign the underground structure, which eventually increases the cost of construction and can sometimes lead to failure of the structure. In the present study, the settlement of a shallow foundation has been analyzed using both a conventional and a numerical approach. Around 25 plate load tests were conducted on a sand-fill site in Bangladesh to determine the modulus of subgrade reaction of the ground, which was later used to design a shallow foundation considering different depths. After the collection of the field data, the field conditions were appropriately simulated in finite element software. Finally, the results obtained from the conventional and numerical approaches were compared, and a significant difference was observed in the settlements. A correlation between the two methods has also been proposed at the end of this research work in order to provide the most efficient way to calculate the subgrade modulus of the ground for designing shallow foundations.
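The basic calculation behind the plate load test is the ratio of applied pressure to measured settlement, k = q/s. A small sketch, including Terzaghi's commonly used size correction for extrapolating a 0.3 m plate value to a wider footing on sand (the numbers are illustrative, not the study's field data):

```python
def subgrade_modulus(pressure_kPa, settlement_m):
    """Modulus of subgrade reaction k = q / s, in kPa/m (i.e. kN/m^3)."""
    return pressure_kPa / settlement_m

def k_footing_sand(k_plate, B, plate_width=0.3):
    """Scale a 0.3 m plate-test value to a footing of width B (m) on sand,
    using Terzaghi's correction: k_f = k_p * ((B + 0.3) / (2B))^2."""
    return k_plate * ((B + plate_width) / (2 * B)) ** 2

k_p = subgrade_modulus(200.0, 0.01)   # 200 kPa applied, 10 mm settlement
print(k_p)                            # 20000.0 kPa/m
print(k_footing_sand(k_p, 2.0))       # smaller k for a wider footing
```

The footing correction illustrates why applying the raw plate value directly, as the abstract notes, can lead to conservative or unsafe designs depending on footing size.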

Keywords: modulus of subgrade reaction, shallow foundation, finite element analysis, settlement, plate load test

Procedia PDF Downloads 169
3366 Investigations on the Influence of Web Openings on the Load Bearing Behavior of Steel Beams

Authors: Felix Eyben, Simon Schaffrath, Markus Feldmann

Abstract:

A building should maximize its potential for use through its design; therefore, flexible use is always important when designing a steel structure. To create flexibility, steel beams with web openings are increasingly used, because they offer the advantage that cables, pipes, and other technical equipment can easily be routed through without detours, allowing for more space-saving and aesthetically pleasing construction. This can also significantly reduce the height of ceiling systems. Until now, beams with web openings have not been explicitly covered by the European standard. This is to change with the new EN 1993-1-13, in which design rules for different opening forms are defined. In order to further develop the design concepts, beams with web openings under bending are being investigated in terms of damage mechanics as part of a German national research project aiming to optimize the verifications for steel structures based on a wider database and a validated damage prediction. For this purpose, the fundamental factors influencing the load-bearing behavior of girders with web openings under bending load were first investigated numerically, without taking material damage into account. Various parameter studies were carried out: the factors under study included the opening shape, size, and position, as well as structural aspects such as the span length, the arrangement of stiffeners, and the loading situation. The load-bearing behavior is evaluated using the resulting load-deformation curves. These results are compared with the design rules and critically analyzed, and experimental tests are planned on their basis. Moreover, the implementation of damage mechanics in the form of the modified Bai-Wierzbicki model was examined. Once the experimental tests have been carried out, the numerical models will be validated and further influencing factors will be investigated on the basis of parametric studies.

Keywords: damage mechanics, finite element, steel structures, web openings

Procedia PDF Downloads 153
3365 The Association between Masculinity and Anxiety in Canadian Men

Authors: Nikk Leavitt, Peter Kellett, Cheryl Currie, Richard Larouche

Abstract:

Background: Masculinity has been associated with poor mental health outcomes in adult men and is colloquially referred to as toxic. Masculinity is traditionally measured using the Male Role Norms Inventory, which examines behaviors that may be common in men but that are themselves associated with poor mental health regardless of gender (e.g., aggressiveness). The purpose of this study was to examine whether masculinity is associated with generalized anxiety among men when measured using this inventory versus a man's personal definition of it. Method: An online survey collected data from 1,200 men aged 18-65 across Canada in July 2022. Masculinity was measured in two ways: 1) using the Male Role Norms Inventory Short Form, and 2) by asking men to define for themselves what being masculine means and then to rate the extent to which they perceived themselves to be masculine on a scale of 1 to 10 based on that definition. Generalized anxiety disorder was measured using the GAD-7. Multiple linear regression was used to examine associations between each masculinity score and the anxiety score, adjusting for confounders. Results: The masculinity score measured using the inventory was positively associated with increased anxiety scores among men (β = 0.02, p < 0.01). The masculinity subscales most strongly correlated with higher anxiety were restrictive emotionality (β = 0.29, p < 0.01) and dominance (β = 0.30, p < 0.01). When traditional masculinity was replaced by a man's self-rated masculinity score in the model, the reverse association was found, with increasing masculinity associated with a significantly reduced anxiety score (β = -0.13, p = 0.04). Discussion: These findings highlight the need to revisit the ways in which masculinity is defined and operationalized in research to better understand its impacts on men's mental health. They also highlight the importance of allowing participants to self-define gender-based constructs, given that these are fluid and socially constructed.
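The modelling step, multiple linear regression of the anxiety score on a masculinity score plus confounders, can be sketched on simulated data. All values below are synthetic; only the reported self-rated coefficient (β = -0.13) is borrowed as the true effect so the model can be seen recovering it, and age stands in for the study's (unspecified) confounders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated survey: anxiety ~ intercept + masculinity + age + noise.
n = 1200
masculinity = rng.uniform(1, 10, n)      # self-rated score, 1-10
age = rng.uniform(18, 65, n)             # stand-in confounder
anxiety = 5 - 0.13 * masculinity + 0.01 * age + rng.normal(0, 1, n)

# Ordinary least squares via the normal equations (np.linalg.lstsq).
X = np.column_stack([np.ones(n), masculinity, age])
beta, *_ = np.linalg.lstsq(X, anxiety, rcond=None)
print(np.round(beta, 2))  # beta[1] should land near the planted -0.13
```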

Keywords: masculinity, generalized anxiety disorder, race, intersectionality

Procedia PDF Downloads 54
3364 Classifier for Liver Ultrasound Images

Authors: Soumya Sajjan

Abstract:

Liver cancer is the most common cancer worldwide in men and women and is one of the few cancers still on the rise; liver disease is the fourth leading cause of death. According to new NHS (National Health Service) figures, deaths from liver disease have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues and organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Ultrasound liver lesion images naturally contain considerable speckle noise, which makes developing a classifier for them a challenging task. We propose a fully automatic machine learning system for developing this classifier. First, we segment the liver image and calculate textural features from the gray-level co-occurrence matrix and the run-length method. For classification, a Support Vector Machine (SVM) is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on the training and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, its features are calculated and the image is classified as a normal or diseased liver lesion. We hope the result will help physicians identify liver cancer non-invasively.
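As a rough sketch of the texture features described above, the following computes a horizontal-neighbor gray-level co-occurrence matrix (GLCM) and three classic features in plain NumPy. The quantization level and the toy patches are illustrative assumptions; a full pipeline would add run-length features and pass the feature vectors to an SVM.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray-level co-occurrence matrix for horizontal neighbors,
    plus three classic Haralick-style texture features."""
    # Quantize the image to `levels` gray levels
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()                       # normalize to joint probabilities
    r, c = np.indices(glcm.shape)
    contrast = ((r - c) ** 2 * glcm).sum()   # local intensity variation
    energy = (glcm ** 2).sum()               # textural uniformity
    homogeneity = (glcm / (1 + np.abs(r - c))).sum()
    return np.array([contrast, energy, homogeneity])

# Hypothetical example: a smooth gradient vs. a speckle-like random patch
rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0, 255, 32), (32, 1))
speckled = rng.integers(0, 256, (32, 32))
f_smooth, f_speckled = glcm_features(smooth), glcm_features(speckled)
```

For the speckled patch, the contrast comes out far higher and the homogeneity far lower than for the smooth gradient, which is the kind of separation a texture-based classifier exploits.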

Keywords: segmentation, support vector machine, ultrasound liver lesion, co-occurrence matrix

Procedia PDF Downloads 393
3363 Experimental Study on Performance of a Planar Membrane Humidifier for a Proton Exchange Membrane Fuel Cell Stack

Authors: Chen-Yu Chen, Wei-Mon Yan, Chi-Nan Lai, Jian-Hao Su

Abstract:

The proton exchange membrane fuel cell (PEMFC) has recently become more important as an alternative energy source. Maintaining proper water content in the membrane is one of the key requirements for optimizing PEMFC performance. The planar membrane humidifier has the advantages of simple structure, low cost, low pressure drop, light weight, reliable performance, and good gas separability; thus, it is a common external humidifier for PEMFCs. In this work, a planar membrane humidifier for kW-scale PEMFCs is developed successfully. The heat and mass transfer of the humidifier is discussed, and its performance is analyzed in terms of dew point approach temperature (DPAT), water vapor transfer rate (WVTR), and water recovery ratio (WRR). The DPAT of the humidifier with the counter-flow configuration reaches about 6°C with inlet dry air at 50°C and 60% RH and inlet humid air at 70°C and 100% RH. The rate of pressure loss of the humidifier is 5.0×10² Pa/min at a torque of 7 N-m, which meets the standard for commercial planar membrane humidifiers. The tests show that increasing the air flow rate increases the WVTR; however, the DPAT and the WRR are not improved once the air flow rate exceeds the optimal value. In addition, increasing the inlet temperature or the humidity of the dry air decreases the WVTR and the WRR, although the DPAT is improved at elevated inlet temperatures or humidities of the dry air. Furthermore, the performance of the humidifier with the counter-flow configuration is better than that with the parallel-flow configuration; the DPAT difference between the two flow configurations reaches up to 8°C.
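The dew point approach temperature can be illustrated with a short calculation. The sketch below uses the Magnus approximation for dew point; its constants, and the assumed dry-side outlet dew point of 64 degC, are illustrative assumptions rather than values from the paper.

```python
import math

def dew_point(temp_c, rh_percent):
    """Dew point via the Magnus approximation
    (constants a=17.62, b=243.12 degC; an assumption, not from the paper)."""
    gamma = math.log(rh_percent / 100.0) + 17.62 * temp_c / (243.12 + temp_c)
    return 243.12 * gamma / (17.62 - gamma)

# Wet-side inlet from the abstract: 70 degC at 100% RH -> dew point is 70 degC
dp_wet_in = dew_point(70.0, 100.0)

# DPAT = wet-inlet dew point minus dry-outlet dew point (64 degC assumed here)
dpat = dp_wet_in - 64.0  # about 6 degC, consistent with the reported DPAT
```

A smaller DPAT means the humidified dry stream leaves closer to the wet stream's dew point, i.e., better humidifier performance.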

Keywords: heat and mass transfer, humidifier performance, PEM fuel cell, planar membrane humidifier

Procedia PDF Downloads 293
3362 Spatial Indeterminacy: Destabilization of Dichotomies in Modern and Contemporary Architecture

Authors: Adrian Lo

Abstract:

Since the beginning of modern architecture, ideas of the free plan and transparency have proliferated well into current trends of building design, from houses to high-rise office buildings. The movement’s notion of a spatially homogeneous, open, and limitless ‘free plan’ stands opposite the spatially heterogeneous ‘separation of rooms’ defined by load-bearing walls, which in turn triggered new notions of transparency achieved by vast expanses of glazed walls. Similarly, transparency was dichotomized as something physical or optical, as well as something conceptual, akin to spatial organization. As opposed to merely accepting the duality and possible incompatibility of these dichotomies, this paper asks: how can space be both literally and phenomenally transparent, and display both homogeneous and heterogeneous qualities? This paper explores this potential destabilization or blurring of spatial phenomena by dissecting the transparent layers and volumes of a series of selected case studies to investigate how different architects have devised strategies of spatial ambivalence, ambiguity, and interpenetration. Projects by Peter Eisenman, Sou Fujimoto, and SANAA are discussed and analyzed to show how the superimposition of geometries and spaces achieves different conditions of layering, transparency, and interstitiality. These buildings are explored to reveal innovative kinds of spatial interpenetration produced through the articulate relations of the elements of architecture, which challenge conventional perceptions of interior and exterior, whereby visual homogeneity blurs with spatial heterogeneity.
The results show how spatial conceptions such as interpenetration and transparency have the ability to subvert not only inside-outside dialectics but could also produce multiple degrees of interiority within complex and indeterminate spatial dimensions in constant flux as well as present alternative forms of social interaction.

Keywords: interpenetration, literal and phenomenal transparency, spatial heterogeneity, visual homogeneity

Procedia PDF Downloads 162
3361 Presuppositions and Implicatures in Four Selected Speeches of Osama Bin Laden's Legitimisation of 'Jihad'

Authors: Sawsan Al-Saaidi, Ghayth K. Shaker Al-Shaibani

Abstract:

This paper investigates certain linguistic properties of four selected speeches by Al-Qaeda’s former leader Osama bin Laden, who legitimated the use of jihad by Muslims in various countries when he was alive. The researchers adopt van Dijk’s (2009; 1998) Socio-Cognitive approach and Ideological Square theory. The Socio-Cognitive approach addresses the cognitive, socio-political, and discursive aspects that can be found in political discourse such as Osama bin Laden’s; political discourse can be defined in terms of textual properties and contextual models. The ideological square refers to positive self-presentation and negative other-presentation, which help to enhance the textual and contextual analyses. Among the most significant properties in Osama bin Laden’s discourse are presuppositions and implicatures, which are based on background knowledge and contextual models. The paper concludes that Osama bin Laden used a number of manipulative strategies that augmented and embellished the use of ‘jihad’ in order to develop a more effective discourse for his audience. In addition, the findings reveal that bin Laden used different implicit and embedded interpretations of different topics, accepted as taken-for-granted truths, to legitimate jihad against his enemies. Many presuppositions in the speeches analysed result in particular common-sense assumptions and a world-view about the selected speeches. More importantly, these assumptions help consolidate the ideological analysis in terms of in-group and out-group members.

Keywords: Al-Qaeda, cognition, critical discourse analysis, Osama Bin Laden, jihad, implicature, legitimisation, presupposition, political discourse

Procedia PDF Downloads 223
3360 Investigation of Chronic Drug Use Due to Chronic Diseases in Patients Admitted to Emergency Department

Authors: Behcet Al, Şener Cindoruk, Suat Zengin, Mehmet Murat Oktay, Mehmet Mustafa Sunar, Hatice Eroglu, Cuma Yildirim

Abstract:

Objective: In the present study, we aimed to investigate chronic drug use due to chronic diseases in patients admitted to the emergency department. Materials and Methods: 144 patients with chronic diseases who used chronic medications and presented to the emergency department (ED) of the medical school of Gaziantep University between June 2013 and September 2013 were included. Information about the drugs used by the patients was recorded. Results: Half of the patients were male, half were female, and the mean age was 58 years. The three most common diseases were diabetes mellitus, hypertension, and coronary artery disease. Of the patients, 79.2% knew their illness. Fifty patients had begun using their medication within the previous three months, and 36 patients within the last year. While 42 patients brought all of their drugs with them, 17 patients brought only some of them. Three patients had stopped their medication completely, while 125 patients took their medication on a regular basis. Fifty-two patients described their drugs by name, 13 by color, 3 by dose in grams, and 45 by tablet size; 13 patients could not describe their drugs at all. Ninety-two patients could explain which drug was used for each disease, 17 could explain this partly, and 35 had no idea. One hundred patients took their medication by themselves; for 44 patients, medications were given by relatives or carers. Of the medications, 140 were prescribed directly by doctors, three were supplied by pharmacists, and one patient had bought the drug himself. For 11 patients, the drugs did not match their diseases. Fifty-one patients had presented to the ED twice within the last week, and 73 twice within the last month. Conclusion: The majority of patients with chronic diseases who use chronic medications know their diseases and take their drugs regularly, but do not have enough information about their medication.

Keywords: chronic disease, drug use, emergency department, medication

Procedia PDF Downloads 446
3359 Prevalence of Extended Spectrum of Beta Lactamase Producers among Gram Negative Uropathogens

Authors: Y. V. S. Annapurna, V. V. Lakshmi

Abstract:

Urinary tract infection (UTI) is one of the most common infectious diseases at the community level, with a high rate of morbidity. This is further augmented by an increase in the number of resistant and multi-resistant strains of bacteria, particularly those producing extended spectrum beta lactamases (ESBLs). The present study analyzed the antibiograms of E. coli and Klebsiella sp. causing urinary tract infections. Between November 2011 and April 2013, a total of 1120 urine samples were analyzed. Antibiotic sensitivity testing was done on 542 (48%) isolates of E. coli and 446 (39%) of Klebsiella sp. using the standard disc diffusion method against eleven commonly used antibiotics. The organisms showed high susceptibility to amikacin and netilmicin and low susceptibility to cephalosporins. The MAR (multiple antibiotic resistance) index was calculated for the multidrug-resistant strains; the maximum MAR index detected among the isolates was 0.9. Phenotypic identification of ESBL production was confirmed by the double disk synergy test (DDST) according to CLSI guidelines. Plasmid profiling of the isolates was carried out using the alkaline lysis method. Agarose gel electrophoresis showed the presence of high-molecular-weight plasmid DNA among the ESBL strains. This study emphasizes the consequences of the indiscriminate use of antibiotics; curbing it would help prevent the further development of bacterial drug resistance. To this end, proper knowledge of the susceptibility pattern of uropathogens should be made a mandatory prerequisite before prescribing empirical antibiotic therapy.
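The MAR index referred to above is a simple ratio, sketched below. The example isolate is hypothetical: with eleven antibiotics tested, a maximum index of 0.9 corresponds to resistance against ten of them.

```python
def mar_index(resistant, tested):
    """Multiple antibiotic resistance (MAR) index: the number of antibiotics
    the isolate resists divided by the number tested against it."""
    if tested == 0:
        raise ValueError("no antibiotics tested")
    return resistant / tested

# Hypothetical isolate resistant to 10 of the 11 antibiotics in the study
idx = mar_index(10, 11)
print(round(idx, 1))  # 0.9
```

Indices above about 0.2 are conventionally read as exposure to high-risk sources of antibiotic use, which is why the 0.9 maximum here is notable.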

Keywords: Escherichia coli, extended spectrum beta lactamase, Klebsiella spp., uropathogens

Procedia PDF Downloads 348
3358 Constitutional Identity: The Connection between National Constitutions and EU Law

Authors: Norbert Tribl

Abstract:

Contemporary European scientific opinion considers the concept of constitutional identity a highlighted issue. Some scholars interpret the matter as the manifestation of a conflict within Europe. Nevertheless, constitutional identity is a bridge between the Member States and the EU rather than a river that will wash away the achievements of the integration. In the author’s opinion, the main problem of constitutional identity in Europe is its undetermined nature: the exact concept of constitutional identity has not been defined until now. However, defining it should be the first step toward understanding and using identity as a legal institution. Having regard to this undetermined nature, the legal-theoretical examination of constitutional identity is the main purpose of this study. The concept of constitutional identity appears in the Anglo-Saxon legal systems through a different approach than in the supranational system of European integration. While there it is understood as the interpretation of legal institutions in conformity with the constitution, the European concept is applied when conflicts arise between the legal system of the European supranational space and certain provisions of the national constitutions of the member states. The European concept of constitutional identity thus intends to offer input in determining the nature of the relationship between the constitutional provisions of the member states and the legal acts of the EU integration. In the EU system of multilevel constitutionalism, a long-standing central debate on integration surrounds the conflict between EU legal acts and the constitutional provisions of the member states, despite the fact that the Court of Justice of the European Union stated in Costa v. E.N.E.L. that the member states cannot invoke the provisions of their respective national constitutions against the integration.
Based on the experience of more than 50 years since that decision, and also in light of the Treaty of Lisbon, we can now clearly see that EU law has itself identified an obligation for the EU to protect the fundamental constitutional features of the Member States under Article 4(2) of the Treaty on European Union, by respecting the national identities of the member states.

Keywords: constitutional identity, EU law, European Integration, supranationalism

Procedia PDF Downloads 136
3357 Pattern of Bacterial Isolates and Antimicrobial Resistance at Ayder Comprehensive Specialized Referral Hospital in Northern Ethiopia: A Retrospective Study

Authors: Solomon Gebremariam, Mulugeta Naizigi, Aregawi Haileselassie

Abstract:

Background: Knowledge of the pattern of bacterial isolates and their antimicrobial susceptibility is crucial for guiding empirical treatment and infection prevention and control measures. Objective: The aim of this study was to analyze the pattern of bacterial isolates and their susceptibility patterns from various specimens. Methods: Retrospectively, a total of 1067 microbiological culture results that were isolated, characterized, and identified by standard microbiological methods, and whose antibiotic susceptibility was determined using CLSI guidelines between 2017 and 2019, were retrieved and analyzed. Data were entered and analyzed using the Stata release 10.1 statistical package. Results: The culture positivity rate was 26.04% (419/1609). The most common bacteria isolated were S. aureus, 23.8% (94); E. coli, 15.1% (60); Klebsiella pneumoniae, 14.1% (56); Pseudomonas aeruginosa, 8.5% (34); and coagulase-negative staphylococci (CoNS), 7.3% (29). S. aureus and CoNS showed a high (58.1%-96.2%) rate of resistance to most antibiotics tested; they were less resistant to vancomycin, at 18.6% (13/70) and 11.8% (2/17), respectively. Similarly, the resistance of E. coli, Klebsiella pneumoniae, and Pseudomonas aeruginosa was high (69.4%-100%) for most antibiotics; they were less resistant to ciprofloxacin, at 41.1% (23/56), 19.2% (10/52), and 16.1% (5/31), respectively. Conclusion: This study has shown a high rate of antibiotic resistance among bacterial isolates in this hospital. A combination of vancomycin and ciprofloxacin should be considered in the choice of empirical antibiotics for suspected infections due to S. aureus, CoNS, E. coli, Klebsiella pneumoniae, or Pseudomonas aeruginosa, such as infections acquired within the hospital.

Keywords: antimicrobial, resistance, bacteria, hospital

Procedia PDF Downloads 53
3356 Investigation of the Density and Control Methods of Weed Species That Are a Problem in Broad Bean (Vicia Faba L.) Cultivation

Authors: Tamer Üstüner, Sena Nur Arı

Abstract:

This study was carried out in 2022 in the trial area of the Faculty of Agriculture at Kahramanmaras Sutcu Imam University and in the ÜSKİM laboratory. Many problems are encountered in broad bean (Vicia faba L.) cultivation, one of which is weeds. In this study, the species, families, and densities of the weeds that are a problem in broad beans were determined. A total of 47 weed species belonging to 20 different families were identified in the experimental area. The weed species found at very high density in the control 1 plots of the broad bean experimental area were Sinapis arvensis (11.50 plants/m²), Lolium temulentum L. (11.20), Ranunculus arvensis L. (10.95), Galium tricornutum Dany. (10.81), Avena sterilis (10.60), Bupleurum lancifolium (10.40), Convolvulus arvensis (10.25), and Cynodon dactylon (10.14 plants/m²). The density of Cuscuta campestris Yunck., which is very common in the control plots of the experimental area, was calculated as 11.94 plants/m². It was determined that C. campestris alone caused significant yield and quality loss in broad beans. The most effective method for reducing the weed population was hand hoeing, followed by pre-emergence pendimethalin and a post-emergence herbicide with the active substance imazamox. In terms of the effect of these control applications on pod yield, hand hoeing ranked first, pendimethalin second, imazamox third, and the control 2 and control 1 plots last.

Keywords: broad bean, weed, struggle, yield

Procedia PDF Downloads 73
3355 Impacts of Urbanization on Forest and Agriculture Areas in Savannakhet Province, Lao People's Democratic Republic

Authors: Chittana Phompila

Abstract:

The current population increase is driving growing demands for natural resources and living space. In Laos, urban areas have been expanding rapidly in recent years, and rapid urbanization can have negative impacts on landscapes, including forest and agricultural land. The primary objectives of this research were: 1) to map current urban areas in a large city in Savannakhet Province, Laos; 2) to compare changes in urbanization between 1990 and 2018; and 3) to estimate the forest and agricultural areas lost to urban expansion over this period within the study area. Landsat 8 data was used, and existing GIS data was collected from government sectors, including spatial data on rivers, lakes, roads, vegetated areas, and other land use/land covers. An object-based classification (OBC) approach was applied in eCognition for image processing and analysis of the urban area. Historical data from other Landsat instruments (Landsat 5 and 7) allowed us to compare urbanization in 1990, 2000, 2010, and 2018 in the study area. Only three main land cover classes were classified: forest, agriculture, and urban. A change detection approach was applied to illustrate changes in built-up areas over these periods. The overall accuracy of the resulting map was assessed at 95% (kappa ≈ 0.8). We found ineffective control over land-use conversion from forest and agriculture to urban areas in many main cities across the province, and a large area of agriculture and forest has been lost to this conversion. Uncontrolled urban expansion and inappropriate land use planning put pressure on resource utilization and, in the long term, can lead to food insecurity and national economic downturn.
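The post-classification change detection step can be sketched as a cross-tabulation of two classified maps. The tiny rasters and class codes below are hypothetical stand-ins for the 1990 and 2018 classifications:

```python
import numpy as np

# Hypothetical classified rasters (codes: 0=forest, 1=agriculture, 2=urban)
lc_1990 = np.array([[0, 0, 1],
                    [1, 1, 2],
                    [0, 1, 2]])
lc_2018 = np.array([[0, 2, 2],
                    [2, 1, 2],
                    [0, 1, 2]])

# Post-classification change detection: cross-tabulate the two maps
change_matrix = np.zeros((3, 3), dtype=int)
for a, b in zip(lc_1990.ravel(), lc_2018.ravel()):
    change_matrix[a, b] += 1

# Pixels converted to urban from forest or agriculture
lost_to_urban = change_matrix[0, 2] + change_matrix[1, 2]
print(lost_to_urban)  # multiply by the pixel area (e.g., 30 m x 30 m for
                      # Landsat) to convert the count to hectares
```

The full change matrix also reveals the reverse flows (e.g., agriculture regrown to forest), which is why cross-tabulation is preferred over simply differencing class totals.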

Keywords: urbanisation, forest cover, agriculture areas, Landsat 8 imagery

Procedia PDF Downloads 148
3354 An Overview of Paclitaxel as an Anti-Cancer Agent in Avoiding Malignant Metastatic Cancer Therapy

Authors: Nasrin Hosseinzad, Ramin Ghasemi Shayan

Abstract:

Chemotherapy is the most common procedure in the treatment of advanced cancers but is only moderately effective and toxic. Moreover, the efficiency of chemotherapy is restricted owing to multiple drug resistance (MDR). Lately, numerous preclinical experiments have revealed that paclitaxel-curcumin could be an effective approach to reverse MDR and synergistically increase efficacy. The associations between B-cell lymphoma 2 (BCL-2) and the multi-drug-resistance-associated P-glycoprotein (MDR1) in patients forecast the efficiency of paclitaxel-based chemoradiotherapy. There is evidence of the efficacy of paclitaxel in the treatment of superficial bladder cell carcinoma using bio-adhesive microspheres that achieve controlled release of the drug at the urinary epithelium. In a genetically modified model, a muco-adhesive oily formulation of tricaprylin, Tween 80, and paclitaxel showed lower toxicity than the control at the therapeutic dose. Postoperative paclitaxel chemotherapy might be more advantageous for survival than adjuvant chemoradiotherapy and could diminish postoperative complications in cervical cancer patients who have undergone radical hysterectomy. HA-Se-PTX (hyaluronic acid, selenium, paclitaxel) nanoparticles could observably constrain the proliferation, migration, and invasion of metastatic cells and induce apoptosis. Furthermore, they exhibited a strong in vivo anti-tumor effect. Additionally, HA-Se-PTX displayed minor toxicity in the chief organs of mice. In brief, HA-Se-PTX might develop into a valued nano-scale agent for respiratory cancers. To sum up, paclitaxel is considered a valuable anti-cancer drug for treatment and for countering progression in malignant cancers.

Keywords: cancer, paclitaxel, chemotherapy, tumor

Procedia PDF Downloads 117
3353 Seroepidemiology of Q Fever among Companion Dogs in Fars Province, South of Iran

Authors: Atefeh Esmailnejad, Mohammad Abbaszadeh Hasiri

Abstract:

Coxiella burnetii is a gram-negative obligate intracellular bacterium that causes Q fever, a significant zoonotic disease. Sheep, cattle, and goats are the most commonly reported reservoirs of the bacterium, but infected cats and dogs have also been implicated in the transmission of the disease to humans. The aim of the present study was to investigate the presence of antibodies against Coxiella burnetii among companion dogs in Fars province, in the south of Iran. A total of 181 blood samples were collected from asymptomatic dogs, mostly referred to the Veterinary Hospital of Shiraz University for regular vaccination. IgG antibody detection against Coxiella burnetii was performed by indirect enzyme-linked immunosorbent assay (ELISA), employing phase I and II Coxiella burnetii antigens. A logistic regression model was developed to analyze multiple risk factors associated with seropositivity. An overall seropositivity of 7.7% (n=14) was observed. Prevalence was significantly higher in adult dogs above five years of age (18.18%) than in dogs between one and five years (7.86%) and less than one year (6.17%) (P=0.043). Prevalence was also higher in male dogs (11.21%) than in females (2.7%) (P=0.035). There were no significant differences in the prevalence of positive cases by breed, type of housing, type of food, or exposure to other farm animals (P>0.05). The results of this study show the presence of Coxiella burnetii infection in the companion dog population of Fars province. To our knowledge, this is the first study of Q fever in dogs carried out in Iran. In areas like Iran, where human cases of Q fever are uncommon or remain unreported, the public health implications of Q fever seroprevalence in dogs are quite significant.

Keywords: Coxiella burnetii, dog, Iran, Q fever

Procedia PDF Downloads 291
3352 A Model for Academic Coaching for Success and Inclusive Excellence in Science, Technology, Engineering, and Mathematics Education

Authors: Sylvanus N. Wosu

Abstract:

Research shows that factors such as low motivation, preparation, and resources, weak emotional and social integration, and fear of risk-taking are the most common barriers to access, matriculation, and retention in science, technology, engineering, and mathematics (STEM) disciplines for underrepresented minority (URM) students. These factors have been shown to affect students’ attraction to and success in STEM fields. Standardized tests such as the SAT and ACT, often used as predictors of success, are not always true predictors of success for African American and Hispanic American students. Without an adequate academic support environment, even a high SAT score does not guarantee academic success in science and engineering. This paper proposes a model of academic coaching for building success and inclusive excellence in STEM education. Academic coaching is framed as a process of motivating students to be independent learners through relational mentorship, facilitating learning supports inside and outside the classroom or school environment, and developing problem-solving skills and success attitudes that lead to higher performance in specific subjects. The model is formulated from the best strategies and practices for enriching academic performance skills and motivating students’ interest in STEM. A scaled model for measuring the Academic Performance Impact (API) index and STEM success is discussed. The study correlates the API with state standardized tests and shows that the average impact of those skills can be predicted by the API index, or Academic Preparedness Index.

Keywords: diversity, equity, graduate education, inclusion, inclusive excellence, model

Procedia PDF Downloads 176
3351 Potential Drug-Drug Interactions at a Referral Hematology-Oncology Ward in Iran: A Cross-Sectional Study

Authors: Sara Ataei, Molouk Hadjibabaie, Shirinsadat Badri, Amirhossein Moslehi, Iman Karimzadeh, Ardeshir Ghavamzadeh

Abstract:

Purpose: To assess the pattern of, and probable risk factors for, moderate and major drug–drug interactions in a referral hematology-oncology ward in Iran. Methods: All patients admitted to the hematology–oncology ward of Dr. Shariati Hospital during a 6-month period who received at least two anti-cancer or non-anti-cancer medications simultaneously were included. All scheduled anti-cancer and non-anti-cancer medications, both prescribed and administered during the ward stay, were screened for drug–drug interactions with the Lexi-Interact On-Desktop software. Results: One hundred and eighty-five drug–drug interactions of moderate or major severity were detected in 83 patients. Most drug–drug interactions (69.73%) were classified as pharmacokinetic. Fluconazole (25.95%) was the medication most commonly implicated in drug–drug interactions. The interaction of sulfamethoxazole-trimethoprim with fluconazole was the most common drug–drug interaction (27.27%). Vincristine with imatinib was the only interaction identified between two anti-cancer agents. The number of medications administered during the ward stay was an independent risk factor for developing a drug–drug interaction. Conclusions: Potential moderate or major drug–drug interactions occur frequently in patients with hematological malignancies or related diseases. Larger standardized studies are required to assess the real clinical and economic effects of drug–drug interactions on patients with hematological and non-hematological malignancies.

Keywords: drug–drug interactions, hematology–oncology ward, hematological malignancies

Procedia PDF Downloads 432