Search results for: big health data
1754 Modeling of the Fermentation Process of Enzymatically Extracted Annona muricata L. Juice
Authors: Calister Wingang Makebe, Wilson Agwanande Ambindei, Zangue Steve Carly Desobgo, Abraham Billu, Emmanuel Jong Nso, P. Nisha
Abstract:
Traditional liquid-state fermentation of Annona muricata L. juice can result in fluctuating product quality and quantity due to difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic from enzymatically extracted Annona muricata L. juice, modeled using the Doehlert design with incubation time, temperature, and enzyme concentration as independent extraction factors. It aimed at a better understanding of the traditional process as an initial step toward future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (Monod, Logistic, and Luedeking-Piret) using MATLAB to describe biomass growth, sugar utilization, and organic acid production. The optimal fermentation time, determined from cell viability, was 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing-sugar consumption, and lactic acid production. The determination coefficients, R², were 0.9946, 0.9913, and 0.9946, while the residual sums of squared errors, SSE, were 0.2876, 0.1738, and 0.1589 for LC, LA, and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate, µm, of 0.2876 h⁻¹, 0.1738 h⁻¹, and 0.1589 h⁻¹, and the substrate saturation constant, Ks, of 9.0680 g/L, 9.9337 g/L, and 9.0709 g/L, respectively, for LC, LA, and LCA. For the stoichiometric parameters, the yield of biomass on utilized substrate (YXS) was 50.7932, 3.3940, and 61.0202, and the yield of product on utilized substrate (YPS) was 2.4524, 0.2307, and 0.7415 for LC, LA, and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001, and 0.0004 for LC, LA, and LCA.
With the Luedeking-Piret kinetic model for the lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC, and LCA in Annona muricata L. juice. The present investigation validates the potential of an Annona muricata L.-based medium for the economical production of a probiotic.
Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics
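The logistic growth and Luedeking-Piret product-formation equations fitted above can be sketched numerically. The following is a minimal Python illustration (the abstract used MATLAB) using the reported µm for LC and the reported Luedeking-Piret coefficients; the carrying capacity and initial conditions are assumed placeholder values, not the study's:

```python
import numpy as np

MU_MAX = 0.2876   # h^-1, reported maximum specific growth rate for LC
ALPHA  = 1.0028   # reported growth-associated coefficient (Luedeking-Piret)
BETA   = 0.0109   # reported non-growth-associated coefficient, h^-1
X_MAX  = 5.0      # g/L, assumed carrying capacity (not reported)
X0, P0 = 0.1, 0.0 # g/L, assumed initial biomass and lactic acid

def biomass(t):
    """Closed-form solution of logistic growth dX/dt = mu*X*(1 - X/X_MAX)."""
    e = np.exp(MU_MAX * t)
    return X_MAX * X0 * e / (X_MAX - X0 + X0 * e)

def lactic_acid(t):
    """Luedeking-Piret dP/dt = alpha*dX/dt + beta*X, integrated exactly."""
    e = np.exp(MU_MAX * t)
    # exact integral of X(s) ds from 0 to t for the logistic curve
    integral_X = (X_MAX / MU_MAX) * np.log((X_MAX - X0 + X0 * e) / X_MAX)
    return P0 + ALPHA * (biomass(t) - X0) + BETA * integral_X

X72, P72 = biomass(72.0), lactic_acid(72.0)  # state at the 72 h endpoint
```

In practice the kinetic parameters themselves would be estimated by fitting these expressions to the measured biomass and acid time courses, as the authors did in MATLAB.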
Procedia PDF Downloads 70
1753 Indigenous Companies in Nigeria's Oil Sector: Stages, Opportunities, and Obstacles regarding Corporate Social Responsibility
Authors: L. U. Dumuje, R. Leite
Abstract:
There is an ongoing debate about corporate social responsibility (CSR) initiatives in the Niger Delta, Nigeria, originating from the gap between the stated objectives of organizations in the Nigerian oil sector and their activities that threaten society. CSR in developing countries is becoming popular, and to contribute to scientific knowledge, research is needed on CSR practices and discourse in indigenous Nigerian companies, which is scarce. Despite government mandates against unofficial flaring, methane gas is released into the air around refinery areas, contributing to global warming. There is a need to understand whether this practice applies to indigenous oil companies in Nigeria. To better understand CSR among indigenous oil companies in Nigeria, our study focuses on discourse and rhetoric regarding CSR. This paper's contribution is twofold: on the one hand, it aims to better understand practitioners' rationale and the fundamentals of CSR in Nigerian oil companies; on the other hand, it intends to identify the stages of CSR initiatives and the advantages and difficulties of CSR implementation in the indigenous Nigerian oil sector. This paper uses a qualitative research strategy; the instrument for data collection is the semi-structured interview. Besides 28 interviews, we conducted five focus group discussions with stakeholders. Participants in this study consist of employees, managers, and executives of indigenous oil companies in Nigeria. Key informants such as government institutions, environmental organizations, and community leaders/members are also part of our sample. Despite significant findings in some studies, gaps remain. To help fill these gaps, we have formulated the following research question: 'What are the stages, opportunities, and obstacles of corporate social responsibility practice in indigenous oil companies in Nigeria?'
This ongoing research poses the following sub-questions: What are the CSR discourses and practices among indigenous companies in the Nigerian oil sector? What is the actual status of CSR development? What are the main perceived opportunities and obstacles with regard to CSR in indigenous Nigerian oil companies? Who are the main stakeholders of indigenous Nigerian oil companies, and what are their different meanings and understandings of CSR practices? Regarding the above questions, the following objectives have been determined: first, we conduct a literature review with the aim of understanding and identifying the importance of CSR practices in Western and developing countries. Second, this paper identifies specific characteristics of the national context regarding CSR engagement in Nigeria; we therefore perform empirical research with relevant stakeholders in indigenous Nigerian companies, as well as key informants, in order to identify the development of CSR and different perceptions of this praised initiative.
Keywords: corporate social responsibility, indigenous, oil organizations, Nigeria, practice
Procedia PDF Downloads 139
1752 Single Cell Oil of Oleaginous Fungi from Lebanese Habitats as a Potential Feed Stock for Biodiesel
Authors: M. El-haj, Z. Olama, H. Holail
Abstract:
Single cell oils (SCOs) accumulated by oleaginous fungi have emerged as a potential alternative feedstock for biodiesel production. Five fungal strains were isolated from the Lebanese environment, namely Fusarium oxysporum, Mucor hiemalis, Penicillium citrinum, Aspergillus tamari, and Aspergillus niger, selected from among 39 oleaginous strains for their ability to accumulate lipids (lipid content of more than 40% on a dry weight basis). Wide variations were recorded in the environmental factors that led to maximum lipid production by the fungi under test, which were cultivated under submerged fermentation on a medium containing glucose as the carbon source. Maximum lipid production was attained within 6-8 days at pH 6-7, with a seed culture aged 24 to 48 hours, an inoculum level of 4-6 × 10⁷ spores/ml, and a 100 ml culture volume. Eleven culture conditions were examined for their significance on lipid production using a Plackett-Burman factorial design. Reducing sugars and the nitrogen source were the most significant factors affecting the lipid production process. Maximum lipid yields of 15.62, 14.48, 12.75, 13.68, and 20.41 g/l were recorded for Fusarium oxysporum, Mucor hiemalis, Penicillium citrinum, Aspergillus tamari, and Aspergillus niger, respectively. A verification experiment carried out to examine the model revealed more than 94% validity. The profile of lipids extracted from each fungal isolate was studied using thin layer chromatography (TLC), indicating the presence of monoacylglycerols, diacylglycerols, free fatty acids, triacylglycerols, and sterol esters. The fatty acid profiles were also determined by gas chromatography coupled with a flame ionization detector (GC-FID).
Data revealed the presence of significant amounts of oleic acid (29-36%), palmitic acid (18-24%), and linoleic acid (26.8-35%), and low amounts of other fatty acids in the extracted fungal oils, indicating that the fatty acid profiles were quite similar to those of conventional vegetable oils. The cost of lipid production could be further reduced by utilizing acid-pretreated lignocellulosic corncob waste, whey, and date molasses as raw materials for the oleaginous fungi. The results showed that microbial lipid from the studied fungi is a potential alternative resource for biodiesel production.
Keywords: agro-industrial waste products, biodiesel, fatty acid, single cell oil, Lebanese environment, oleaginous fungi
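For reference, an 11-factor screening design like the Plackett-Burman design used above can be generated from the classic 12-run cyclic construction. The sketch below is a generic illustration of that construction, not the authors' actual factor assignment:

```python
import numpy as np

# First row of the standard 12-run Plackett-Burman design for 11 factors
# (+1 = factor at high level, -1 = factor at low level)
gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]

rows = [np.roll(gen, k) for k in range(11)]  # 11 cyclic shifts of the generator
rows.append([-1] * 11)                       # final run: all factors at low level
design = np.array(rows)                      # 12 runs x 11 factors
```

Each of the 11 columns is balanced (six high and six low runs) and the columns are mutually orthogonal, which is what lets main effects such as reducing sugars and nitrogen source be screened in only 12 runs.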
Procedia PDF Downloads 411
1751 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow subsurface targets such as landmines and unexploded ordnance, and also for through-wall imaging in security applications. For a monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of subsurface targets. Because of the tails of this hyperbola, the resolution along the synthetic aperture direction shows undesirably low-resolution features. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of buried objects is essential in most GPR applications. It is therefore often desirable to transform the hyperbolic signature in the space-time GPR image into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the spatial location and reflectivity of an underground object. The main challenge of GPR imaging is thus to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm performs poorly against strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP algorithm based on cross-correlation between the received signals is proposed to reduce noise and suppress artifacts.
To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The improved BP algorithm was applied to both simulated and real GPR data, and the results showed that it offers superior artifact suppression and produces images of high quality and resolution. To quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
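As a rough illustration of the standard BP baseline described above (not the authors' cross-correlation-weighted variant), a delay-and-sum back-projector can be sketched as follows; the wave speed, sampling interval, target position, and grid are all assumed values:

```python
import numpy as np

C  = 0.1   # m/ns, assumed wave speed in the soil
DT = 0.1   # ns, assumed time-sampling interval

def back_project(bscan, antenna_x, grid_x, grid_z):
    """Standard delay-and-sum BP: for each pixel, sum each trace's sample
    at the two-way travel time from that antenna position to the pixel."""
    n_samples = bscan.shape[1]
    image = np.zeros((len(grid_z), len(grid_x)))
    for ia, xa in enumerate(antenna_x):
        for ix, x in enumerate(grid_x):
            for iz, z in enumerate(grid_z):
                r = np.hypot(x - xa, z)           # antenna-to-pixel range, m
                k = int(round(2.0 * r / C / DT))  # two-way delay in samples
                if k < n_samples:
                    image[iz, ix] += bscan[ia, k]
    return image

# Synthetic B-scan: a point target at (0.5 m, 0.3 m) traces out a hyperbola
antenna_x = np.linspace(0.0, 1.0, 11)
bscan = np.zeros((11, 200))
for ia, xa in enumerate(antenna_x):
    k = int(round(2.0 * np.hypot(0.5 - xa, 0.3) / C / DT))
    bscan[ia, k] = 1.0

grid_x = np.linspace(0.0, 1.0, 21)
grid_z = np.linspace(0.05, 0.60, 12)
image = back_project(bscan, antenna_x, grid_x, grid_z)
iz, ix = np.unravel_index(image.argmax(), image.shape)  # focused peak location
```

The hyperbola collapses to a peak at the target pixel; the paper's improvement then adds cross-correlation-based weighting on top of this summation to suppress the residual tail artifacts.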
Procedia PDF Downloads 456
1750 Monitoring the Pollution Status of the Goan Coast Using Genotoxicity Biomarkers in the Bivalve, Meretrix ovum
Authors: Avelyno D'Costa, S. K. Shyama, M. K. Praveen Kumar
Abstract:
The coast of Goa, India, receives constant anthropogenic stress through its major rivers, which carry mining rejects of iron and manganese ores from upstream mining sites and petroleum hydrocarbons from shipping and harbor-related activities, putting aquatic fauna such as bivalves at risk. The present study reports the pollution status of the Goan coast with respect to the above xenobiotics using genotoxicity studies. This is further supplemented by the quantification of total petroleum hydrocarbons (TPHs) and various trace metals (iron, manganese, copper, cadmium, and lead) in the gills of the estuarine clam Meretrix ovum, as well as in the surrounding water and sediment, over a two-year sampling period from January 2013 to December 2014. Bivalves were collected from a presumably unpolluted site at Palolem and a presumably polluted site at Vasco, chosen on the basis of the anthropogenic activities at these sites. Genotoxicity was assessed in the gill cells using the comet assay and the micronucleus test. The quantities of TPHs and trace metals present in gill tissue, water, and sediments were analyzed using spectrofluorometry and atomic absorption spectrophotometry (AAS), respectively. The statistical significance of the data was analyzed employing Student's t-test. The relationship between DNA damage and pollutant concentrations was evaluated using multiple regression analysis. Significant DNA damage was observed in the bivalves collected from Vasco, a region of high industrial activity. Concentrations of TPHs and trace metals (iron, manganese, and cadmium) were also found to be significantly higher in the gills of bivalves collected from Vasco than in those collected from Palolem. Further, the concentrations of these pollutants were significantly higher in the water and sediments at Vasco than at Palolem, which may be due to the lack of industrial activity at Palolem.
A high positive correlation was observed between the pollutant levels and DNA damage in the bivalves collected from Vasco, suggesting the genotoxic nature of these pollutants. Further, M. ovum can be used as a bioindicator species for monitoring the pollution level of estuarine/coastal regions by TPHs and trace metals.
Keywords: comet assay, metals, micronucleus test, total petroleum hydrocarbons
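The multiple regression relating DNA damage to pollutant levels can be sketched with ordinary least squares. The data below are synthetic stand-ins for the TPH, trace-metal, and comet-assay measurements, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
tph    = rng.uniform(5, 50, n)   # hypothetical TPH concentrations
metals = rng.uniform(1, 20, n)   # hypothetical trace-metal index
# hypothetical comet-assay response (% tail DNA) with known coefficients + noise
tail_dna = 2.0 + 0.30 * tph + 0.50 * metals + rng.normal(0, 1.0, n)

# design matrix with intercept column; solve by least squares
X = np.column_stack([np.ones(n), tph, metals])
coef, *_ = np.linalg.lstsq(X, tail_dna, rcond=None)

pred = X @ coef
ss_res = ((tail_dna - pred) ** 2).sum()
ss_tot = ((tail_dna - tail_dna.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot   # coefficient of determination
```

The fitted coefficients recover the generating values, and r2 plays the role of the "high positive correlation" reported between pollutant levels and DNA damage.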
Procedia PDF Downloads 240
1749 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter
Authors: Zhu Xinxin, Wang Hui, Yang Kai
Abstract:
Heat flux is one of the most important test parameters in ground thermal protection testing. The slug calorimeter is selected as the main sensor for measuring heat flux in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and the shortcomings of the calculation method, the heat flux measurement error of the slug calorimeter is large. To enhance measurement accuracy, the heat insulation structure and heat flux calculation method of the slug calorimeter were improved. A heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which greatly decreased lateral heat transfer. The slug, fitted with the hollow insulating sleeve, was then encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. In addition, a simulation model of the slug calorimeter was built, and the heat flux values over different temperature-rise time periods were calculated with it. The results show that extracting the temperature rise rate as early as possible yields a smaller heat flux calculation error. The effect of different thermal contact resistances on the calculation error was then analyzed with the simulation model; the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct comparison calibration correction method was proposed based on heat flux calibration alone.
A numerical calculation correction method was proposed based on the heat flux calibration together with the simulation model of the slug calorimeter, once the contact resistance between the slug and the insulating sleeve had been determined. The simulation and test results show that both methods can greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. Test results show that the repeatability of the improved slug calorimeter is better than 3%. The deviation between measurements from different slug calorimeters is less than 3% in the same flow field, and the deviation between the slug calorimeter and a Gordon gauge is less than 4% in the same flow field.
Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter
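The slug calorimeter's basic data reduction, heat flux from the slug's temperature rise rate, can be sketched as below. The slug properties and the early fitting window (motivated by the finding above that extracting dT/dt early reduces the error, before lateral losses accumulate) are assumed illustrative values:

```python
import numpy as np

# Assumed copper slug properties (illustrative, not the authors' values)
RHO = 8960.0   # kg/m^3, density
CP  = 385.0    # J/(kg K), specific heat
L   = 0.005    # m, slug thickness

def heat_flux(t, T, window=(0.5, 2.0)):
    """q = rho * cp * L * dT/dt, with the temperature rise rate taken
    from a linear fit over an early time window (seconds)."""
    mask = (t >= window[0]) & (t <= window[1])
    slope = np.polyfit(t[mask], T[mask], 1)[0]   # dT/dt in K/s
    return RHO * CP * L * slope                  # W/m^2

# Synthetic trace: a clean linear rise of 25 K/s
t = np.linspace(0.0, 3.0, 301)
T = 300.0 + 25.0 * t
q = heat_flux(t, T)   # expected: 8960 * 385 * 0.005 * 25 = 431,200 W/m^2
```

On real traces the rise is linear only briefly before lateral conduction bends it over, which is why the paper's correction methods and the early-window slope extraction matter.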
Procedia PDF Downloads 124
1748 Vibrational Spectra and Nonlinear Optical Investigations of a Chalcone Derivative (2e)-3-[4-(Methylsulfanyl) Phenyl]-1-(3-Bromophenyl) Prop-2-En-1-One
Authors: Amit Kumar, Archana Gupta, Poonam Tandon, E. D. D’Silva
Abstract:
Nonlinear optical (NLO) materials are key materials for the fast processing of information and for optical data storage applications. In the last decade, materials showing nonlinear optical properties have been the object of increasing attention from both experimental and computational points of view. Chalcones are one of the most important classes of cross-conjugated NLO chromophores; they are reported to exhibit good SHG efficiency and ultrafast optical nonlinearities, and they are easily crystallizable. The basic structure of chalcones is a π-conjugated system in which two aromatic rings are connected by a three-carbon α,β-unsaturated carbonyl system. Owing to the overlap of π orbitals, delocalization of the electronic charge distribution leads to high mobility of the electron density. On a molecular scale, the extent of charge transfer across the NLO chromophore determines the level of SHG output. Hence, functionalizing both ends of the π-bond system with appropriate electron donor and acceptor groups can enhance the asymmetric electronic distribution in either or both the ground and excited states, leading to increased optical nonlinearity. In this research, an experimental and theoretical study of the structure and vibrations of (2E)-3-[4-(methylsulfanyl) phenyl]-1-(3-bromophenyl) prop-2-en-1-one (3Br4MSP) is presented. The FT-IR and FT-Raman spectra of the NLO material in the solid phase have been recorded. Density functional theory (DFT) calculations at the B3LYP level with the 6-311++G(d,p) basis set were carried out to study the equilibrium geometry, vibrational wavenumbers, infrared absorbance, and Raman scattering activities. The interpretation of vibrational features (normal mode assignments, for instance) receives invaluable aid from DFT calculations, which provide a quantum-mechanical description of the electronic energies and forces involved.
Perturbation theory allows one to obtain the vibrational normal modes by estimating the derivatives of the Kohn-Sham energy with respect to atomic displacements. The molecular hyperpolarizability β plays a chief role in the NLO properties, and a systematic study of β has been carried out. Furthermore, the first-order hyperpolarizability (β) and related properties such as the dipole moment (μ) and polarizability (α) of the title molecule are evaluated by the Finite Field (FF) approach. The electronic α and β of the studied molecule are 41.907 × 10⁻²⁴ and 79.035 × 10⁻²⁴ e.s.u., respectively, indicating that 3Br4MSP can be used as a good nonlinear optical material.
Keywords: DFT, MEP, NLO, vibrational spectra
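For reference, the scalar values of μ, α, and β reported from the FF approach are conventionally assembled from the Cartesian tensor components as (standard definitions, not quoted from the abstract):

```latex
\mu = \left(\mu_x^{2} + \mu_y^{2} + \mu_z^{2}\right)^{1/2},
\qquad
\alpha = \tfrac{1}{3}\left(\alpha_{xx} + \alpha_{yy} + \alpha_{zz}\right),
\qquad
\beta_{\mathrm{tot}} = \Big[\left(\beta_{xxx}+\beta_{xyy}+\beta_{xzz}\right)^{2}
 + \left(\beta_{yyy}+\beta_{yxx}+\beta_{yzz}\right)^{2}
 + \left(\beta_{zzz}+\beta_{zxx}+\beta_{zyy}\right)^{2}\Big]^{1/2}
```

In the FF approach these tensor components are obtained numerically from the energy of the molecule in small applied static electric fields.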
Procedia PDF Downloads 223
1747 The Impact of Coronal STIR Imaging in Routine Lumbar MRI: Uncovering Hidden Causes to Enhance Diagnostic Yield of Back Pain and Sciatica
Authors: Maysoon Nasser Samhan, Somaya Alkiswani, Abdullah Alzibdeh
Abstract:
Background: Routine lumbar MRIs for back pain may yield normal results despite persistent symptoms, suggesting other causes of pain that are not shown on the routine images. Research suggests including coronal STIR imaging to detect additional pathologies such as sacroiliitis. Objectives: This study aims to enhance diagnostic accuracy and aid in determining treatment for patients with persistent back pain who have normal routine lumbar MRIs (T1 and T2 images) by incorporating a coronal STIR sequence into the examination. Methods: In a prospective study involving 274 patients (115 males and 159 females, aged 6-92 years), we reviewed medical records and imaging data following lumbar spine MRI. The study included patients with back pain and sciatica as their primary complaints, all of whom underwent lumbar spine MRI at our hospital to identify potential pathologies. Using a GE Signa HD 1.5T MRI system, each patient received a standard MRI protocol that included T1 and T2 sagittal and axial sequences, as well as a coronal STIR sequence. We collected relevant MRI findings, including abnormalities and structural variations, from the radiology reports. We classified these findings into tables and documented them as counts and percentages, using Fisher's exact test to assess differences between categorical variables. Statistical analysis was conducted using Prism GraphPad software version 10.1.2. The study adhered to ethical guidelines, institutional review board approvals, and patient confidentiality regulations. Results: Without the coronal STIR sequence, 83 subjects (30.29%) would have been classified as within normal limits on MRI examination. Thirty-six patients without abnormalities on the T1 and T2 sequences showed abnormalities on the coronal STIR sequence, with 26 cases attributed to spinal pathologies and 10 to non-spinal pathologies.
In addition, Fisher's exact test demonstrated a significant association between a sacroiliitis diagnosis and abnormalities identified solely through the coronal STIR sequence (P < 0.0001). Conclusion: Implementing coronal STIR imaging as part of routine lumbar MRI protocols has the potential to improve patient care by facilitating a more comprehensive evaluation and management of persistent back pain.
Keywords: magnetic resonance imaging, lumbar MRI, radiology, neurology
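The Fisher's exact test used above for a 2x2 association can be sketched as follows; the contingency table here is a hypothetical, deliberately extreme illustration, not the study's counts:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (not the study's data):
# rows    = sacroiliitis diagnosis: yes / no
# columns = abnormality seen only on coronal STIR: yes / no
table = [[10, 0],
         [0, 10]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
```

Fisher's exact test is preferred over chi-squared when cell counts are small, as is common when a subgroup (here, STIR-only findings) is rare relative to the whole cohort.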
Procedia PDF Downloads 20
1746 Histological Grade Concordance between Core Needle Biopsy and Corresponding Surgical Specimen in Breast Carcinoma
Authors: J. Szpor, K. Witczak, M. Storman, A. Orchel, D. Hodorowicz-Zaniewska, K. Okoń, A. Klimkowska
Abstract:
Core needle biopsy (CNB) is well established as an important diagnostic tool in diagnosing breast cancer and is now considered the initial method of choice for diagnosing breast disease. In comparison to fine needle aspiration (FNA), CNB provides more architectural information, allowing for the evaluation of prognostic and predictive factors for breast cancer, including histological grade, one of the three prognostic factors used to calculate the Nottingham Prognostic Index. Several studies have previously described the concordance rate between CNB and the surgical excision specimen in the determination of histological grade (HG). The concordance rates previously reported for overall grade vary widely across the literature, ranging from 59% to 91%. The aim of this study is to see what the data look like in material at the authors' institution and how the results compare to those described in the previous literature. The study population included 157 women with a breast tumor who underwent a core needle biopsy for breast carcinoma and a subsequent surgical excision of the tumor. Both materials were evaluated for the determination of histological grade (on a scale from 1 to 3). HG was assessed only in core needle biopsies containing at least 10 well-preserved HPFs with invasive tumor. The degree of concordance between CNB and the surgical excision specimen for the determination of tumor grade was assessed by Cohen's kappa coefficient. The level of agreement between core needle biopsy and surgical resection specimen for overall histologic grading was 73% (113 of 155 cases). CNB correctly predicted the grade of the surgical excision specimen in 21 cases for grade 1 tumors (kappa coefficient κ = 0.525, 95% CI 0.3634-0.6818), 52 cases for grade 2 tumors (κ = 0.5652, 95% CI 0.458-0.667), and 40 cases for grade 3 tumors (κ = 0.6154, 95% CI 0.4862-0.7309). The highest level of agreement was observed in grade 3 malignancies.
In 9 of 42 (21%) discordant cases, the grade was higher in the CNB than in the surgical excision; these cases made up 6% of the whole series. These results correspond to those noted in the literature, showing that underestimation occurs more frequently than overestimation. This study shows that the authors' institution's histologic grading of CNBs and surgical excisions shows a fairly good correlation and is consistent with findings in previous reports. Despite the inevitable limitations of CNB, it is an effective method for diagnosing breast cancer and managing treatment options. Assessment of tumour grade by CNB is useful for treatment planning, so, in the authors' opinion, it is worth implementing in daily practice.
Keywords: breast cancer, concordance, core needle biopsy, histological grade
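The Cohen's kappa statistic used above can be computed directly from a grade-agreement table. The off-diagonal counts below are hypothetical, chosen only to be consistent with the reported totals (113 of 155 concordant, with 9 of the 42 discordant cases graded higher on CNB); they are not the study's raw counts:

```python
import numpy as np

def cohens_kappa(confusion):
    """Unweighted Cohen's kappa from a k x k agreement matrix
    (rows: CNB grade, columns: excision grade)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical 3x3 table: diagonal 21/52/40 concordant cases as reported,
# 9 cases below the diagonal (CNB graded higher), 33 above (CNB graded lower)
cm = [[21, 20,  2],
      [ 5, 52, 11],
      [ 0,  4, 40]]
kappa = cohens_kappa(cm)
```

Kappa corrects the raw 73% agreement for agreement expected by chance, which is why it is the standard concordance measure for ordinal grading studies like this one.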
Procedia PDF Downloads 233
1745 The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 images separate from those used for training. The images used to train the classification model have an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture and is used to extract the lung mask from the chest X-ray image. It is trained on 8,577 images and validated on a 20% validation split. The models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays.
The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
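The IoU (intersection over union) score reported for the segmentation model measures the overlap between the predicted and ground-truth lung masks. A minimal sketch, using toy rectangular masks in place of real lung masks:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy "lung masks": two overlapping 6x6 rectangles on a 10x10 grid
pred = np.zeros((10, 10), int)
pred[2:8, 2:8] = 1       # 36 predicted pixels
target = np.zeros((10, 10), int)
target[3:9, 3:9] = 1     # 36 ground-truth pixels
score = iou(pred, target)  # overlap 25 px, union 47 px
```

Unlike pixel accuracy, IoU is insensitive to the large background region, which is why it is the standard metric for masks like the 0.928 reported here.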
Procedia PDF Downloads 76
1744 Students with Severe Learning Disabilities in Mainstream Classes: A Study of Comprehensions amongst School Staff and Parents Built on Observations and Interviews in a Phenomenological Framework
Authors: Inger Eriksson, Lisbeth Ohlsson, Jeremias Rosenqvist
Abstract:
Ingress: The focus of the study is directed towards the phenomena and concepts of segregation, integration, and inclusion of students attending a special school form in Sweden, namely the compulsory school for pupils with learning disabilities (in Swedish 'särskola'), as an alternative to mainstream compulsory school. Aim: The aim of the study is to examine the school situation for students attending särskola from a historical perspective focusing on the 1980s, 1990s, and the 21st century, from an integration perspective, and from a perspective of power. Procedure: Five sub-studies are reported, in which integration and inclusion are examined through observation studies and interviews with school leaders, teachers, special and remedial teachers, psychologists, coordinators, and parents in the special schools/särskola. In brief, the study of special school students attending mainstream classes from 1998 takes as its point of departure the idea that all knowledge development takes place in a social context. A special interest is taken in the school's role in integration generally, and the role of special education particularly, and in whose conditions the integration takes place on: the special school students', the other students', or maybe equally those of the whole class. Pedagogical and social conditions for so-called individually integrated special school students in elementary school classes were studied in eleven classes. Results: The findings are interpreted in a power perspective supported by Foucault and relationally by Vygotsky. The main part of the data consists of extensive descriptions of the eleven cases, here called integration situations. Conclusions: In summary, this study suggests that the possibilities for a special school student to enter the class community and fellowship, and thereby be integrated into the class, depend to a high degree on the extent to which the student can take part in the pedagogical processes.
The pedagogical situation of the special school student is affected not only by the class teacher and the support and measures undertaken but also by the other students in the class, as they, in turn, are affected by how the special school student acts. This mutual impact, which constitutes the integration process itself, might result in true integration if the special school student attains the status of being accepted on his or her own terms, not merely being cared for or cherished by some classmates. A special school student who is not accepted even on the terms of the class will often experience severe problems in contacts with classmates, and the school situation might thus be a mere placement.
Keywords: integration/inclusion, mainstream school, power, special school students
Procedia PDF Downloads 253
1743 Suture Biomaterials Development from Natural Fibers: Muga Silk (Antheraea assama) and Ramie (Boehmeria nivea)
Authors: Raghuram Kandimalla, Sanjeeb Kalita, Bhaswati Choudhury, Jibon Kotoky
Abstract:
The quest to develop an ideal suture material prompted our interest in developing a novel suture with characteristics superior to those available on the market. We developed novel suture biomaterials from muga silk (Antheraea assama) and ramie plant fiber (Boehmeria nivea). Field emission scanning electron microscopy (FE-SEM), energy-dispersive X-ray spectroscopy (EDX), attenuated total reflection Fourier transform infrared spectroscopy (ATR-FTIR), and thermogravimetric analysis (TGA) revealed the physicochemical properties of the fibers, which support their suitability for suture fabrication. The tensile properties of the prepared sutures were comparable to those of commercially available sutures, and the sutures were found to be biocompatible with human erythrocytes and nontoxic to mammalian cells. The prepared sutures completely healed superficial deep wound incisions within seven days in adult male Wistar rats, leaving no rash or scar. Histopathology studies support the wound healing ability of the sutures, as rapid synthesis of collagen, connective tissue, and other skin adnexal structures was observed within seven days of surgery. The muga suture surface was further modified by exposure to oxygen plasma, which resulted in the formation of nanotopography on the suture surface. The broad-spectrum antibiotic amoxicillin was then functionalized on the suture surface to prepare an advanced antimicrobial muga suture. The surface hydrophilicity induced by the oxygen plasma increased the drug-impregnation efficiency of the modified muga suture by 16.7%. In vitro drug release profiles showed continuous and prolonged release of amoxicillin from the suture for up to 336 hours. The advanced muga suture proved effective in inhibiting the growth of Staphylococcus aureus and Escherichia coli, whereas the normal muga suture offered no antibacterial activity against either type of bacteria.
In vivo histopathology studies and colony-forming unit count data revealed accelerated wound healing activity of the advanced suture over the normal one through rapid synthesis and proliferation of collagen, hair follicles and connective tissue.
Keywords: sutures, biomaterials, silk, ramie
Procedia PDF Downloads 317
1742 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute
Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane
Abstract:
Background: Breast cancer is a leading cause of cancer and cancer-related mortality among women. This study presents a statistical analysis of a cohort of over 250 patients diagnosed with breast cancer by oncologists using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2 and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and standard protocols were followed for fixation, grossing, tissue processing, embedding, cutting and IHC. The Ventana Benchmark XT machine was used for automated IHC of the samples. Antibodies were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows; chi-squared and correlation tests were performed at p < .01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Result: A chi-squared test of homogeneity was performed to test equality of distribution, and Luminal B was found to be the most prevalent molecular subtype of breast cancer at our institute. A worse prognosis in breast cancer depends upon the expression of Ki-67 and HER2 protein in cancerous cells; our analysis at p < .01 showed significant dependence. No dependence of molecular subtype on age was observed; similarly, age was independent of Ki-67 expression. A chi-squared test performed on the HER2 statuses of patients showed strong dependence between the percentage of Ki-67 expression and HER2 (+/-) status, indicating that the value of Ki-67 depends upon HER2 expression in cancerous cells (p < .01). Surprisingly, dependence was also observed between Ki-67 and PR at p < .01, which suggests that progesterone receptor (PR) proteins are over-expressed when Ki-67 expression is elevated.
Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients’ age and molecular subtype. We also found that, among the cohort of 257 patients, no patient diagnosed as Luminal A showed a Ki-67 value >14%. Statistically, highly significant dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression was observed (p < .01). HER2 is an important prognostic factor in breast cancer; the chi-squared test for HER2 and Ki-67 shows that Ki-67 expression depends upon HER2 status. Consequently, Ki-67 cannot be used as a standalone prognostic factor for determining breast cancer prognosis.
Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis
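A dependence test of the kind described above can be sketched with a chi-squared test of independence on a contingency table; the counts below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (invented counts, NOT the study's data):
# rows = HER2 status (+ / -), columns = Ki-67 expression (high / low)
table = np.array([[48, 22],
                  [35, 60]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
if p < 0.01:  # the study's significance threshold
    print("Ki-67 expression depends on HER2 status (independence rejected)")
```

`chi2_contingency` applies Yates' continuity correction by default for 2x2 tables, which is the conservative choice at small counts.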
Procedia PDF Downloads 129
1741 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence of an ontology, it is required to specify a set of attribute correspondences across two corresponding classes by clustering. Clustering reduces the number of potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes ontology matching computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be regarded as the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Moreover, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in two instances across which there is an attribute correspondence.
This work will present how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We will also show how to map the attributes of a clustered group to the attributes of the corresponding clustered group, generating a set of potential attribute correspondences to be used in constructing a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs considered when comparing instances, as only attribute pairs whose coverage probability reaches 100% and whose attributes exceed the specified threshold are considered potential attributes for a matching. Using clustering reduces the number of potential element correspondences considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, all elements of a class in the source ontology would have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can effectively cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.
Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
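A minimal K-medoids (Voronoi-iteration) sketch of the clustering step is shown below; the attribute feature vectors, the Euclidean distance metric and the choice of k are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Cluster rows of X into k groups around actual data points (medoids)."""
    rng = np.random.default_rng(seed)
    # full pairwise Euclidean distance matrix
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # nearest-medoid assignment
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:  # medoid = member minimizing intra-cluster cost
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)

# Toy "statistical value features" (e.g. mean, std) for six class attributes
X = np.array([[0.1, 0.2], [0.15, 0.22], [5.0, 1.0],
              [5.2, 0.9], [9.0, 3.0], [9.1, 2.8]])
medoids, labels = k_medoids(X, k=3)
print("medoids:", medoids, "labels:", labels)
```

Unlike k-means, the cluster centers are always actual attribute vectors, which keeps the clusters interpretable in terms of real attributes.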
Procedia PDF Downloads 250
1740 Anaerobic Digestion of Green Wastes at Different Solids Concentrations and Temperatures to Enhance Methane Generation
Authors: A. Bayat, R. Bello-Mendoza, D. G. Wareham
Abstract:
Two major categories of green waste are fruit and vegetable (FV) waste and garden and yard (GY) waste. Although anaerobic digestion (AD) is able to manage FV waste, there is less confidence in the conditions under which AD can handle GY waste (grass, leaves, tree and bush trimmings), mainly because GY waste contains lignin and other recalcitrant organics. GY waste in the dry state (TS ≥ 15%) can be digested at mesophilic temperatures; however, little methane data has been reported under thermophilic conditions, where conceivably better methane yields could be achieved. In addition, it is suspected that the methane yield could be increased at lower solids concentrations. As such, the aim of this research is to find the temperature and solids concentration conditions that produce the most methane, under two temperature regimes (mesophilic, thermophilic) and three solids states ('dry', 'semi-dry' and 'wet'). Twenty liters of GY waste was collected from a public park located in the northern district of Tehran. The clippings consisted of freshly cut grass as well as dry branches and leaves. The GY waste was chopped before being fed into a mechanical blender that reduced it to a paste-like consistency, giving an initial TS concentration of approximately 38%. Four hundred mL of anaerobic inoculum (average total solids (TS) concentration of 2.03 ± 0.131%, of which 73.4% were volatile solids (VS); soluble chemical oxygen demand (sCOD) of 4.59 ± 0.3 g/L) was mixed with the GY waste substrate paste (along with distilled water) to achieve a TS content of approximately 20%. For comparative purposes, approximately 20 liters of FV waste was ground in the same manner as the GY waste. Since FV waste has a much higher natural water content than GY waste, it was dewatered to obtain a starting TS concentration in the dry solid-state range (TS ≥ 15%); three samples were dewatered to an average starting TS concentration of 32.71%.
The inoculum was added (along with distilled water) to dilute the initial FV TS concentrations down to semi-dry (10-15%) and wet (below 10%) conditions. Twelve 1-L batch bioreactors were loaded simultaneously with either GY or FV waste at TS concentrations ranging from 3.85 ± 1.22% to 20.11 ± 1.23%. The reactors were sealed and operated for 30 days while immersed in water baths to maintain a constant temperature of 37 ± 0.5 °C (mesophilic) or 55 ± 0.5 °C (thermophilic). A maximum methane yield of 115.42 L methane/kg VS added was obtained for the GY thermophilic-wet AD combination, an enhancement of 240% compared to the GY mesophilic-dry condition. The results confirm that high temperature regimes and low solids concentrations are conditions that enhance methane yield from GY waste. A similar trend was observed for the anaerobic digestion of FV waste. Furthermore, maximum VS (53%) and sCOD (84%) reductions were achieved during the AD of GY waste under the thermophilic-wet condition.
Keywords: anaerobic digestion, thermophilic, mesophilic, total solids concentration
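A quick back-of-envelope check of the reported figures, interpreting "enhanced by 240%" as a 240% increase (the mesophilic-dry baseline is not stated in the abstract and is inferred here):

```python
# Reported: 115.42 L CH4 / kg VS added for thermophilic-wet GY digestion,
# said to be a 240% enhancement over the mesophilic-dry condition.
best_yield = 115.42          # L CH4 per kg VS added
enhancement = 2.40           # 240% increase (an interpretation, see lead-in)

implied_baseline = best_yield / (1 + enhancement)
print(f"implied mesophilic-dry yield ≈ {implied_baseline:.1f} L CH4/kg VS")
```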
Procedia PDF Downloads 143
1739 The Existential in a Practical Phenomenology Research: A Study on the Political Participation of Young Women
Authors: Amanda Aliende da Matta, Maria del Pilar Fogueiras Bertomeu, Valeria de Ormaechea Otalora, Maria Paz Sandin Esteban, Miriam Comet Donoso
Abstract:
This communication presents questions about the existentials in research on the political participation of young women. The study follows a qualitative methodology, in particular the applied hermeneutic phenomenology (AHP) method, and the general objective of the research is to give an account of the experience of political participation as a young woman. The study participants are women aged 18 to 35 who have experience in political participation. The data collection techniques are the descriptive story and the phenomenological interview. Hermeneutic phenomenology as a research approach is based on phenomenological philosophy and applied hermeneutics. Its ultimate objective is to gain access to the meaning structures of lived experience by appropriating them, clarifying them, and reflectively making them explicit. Human experiences are always lived through existentials: fundamental themes that are useful in exploring meaningful aspects of our life worlds. Everyone experiences the world through the existentials of lived relationships, the lived body, lived space, lived time, and lived things. Phenomenological research, then, also tacitly asks about the existentials. Existentials are universal themes useful for exploring significant aspects of our life world and of the particular phenomena under study. Four main existentials prove especially helpful as guides for reflection in the research process: relationship, body, space, and time. In our case, for example, we may ask how the existentials of relationship, body, space, and time can guide us in exploring the structures of meaning in the lived experience of political participation as a woman and a young person. The study is not yet finished, as we are currently conducting phenomenological thematic analysis on the collected stories of lived experience.
Yet, we have already identified some fragments of text that show the existentials in the participants' experiences, which we transcribe below. 1) Relationality - the experienced I-Other: how relationships are experienced in our narratives about political participation as young women. One example: “As we had known each other for a long time, we understood each other with our eyes; we were all a little bit on the same page, thinking the same thing.” 2) Corporeality - the lived body: how the lived body is experienced in activities of political participation as a young woman. Examples: “My blood was boiling, but it was not the time to throw anything in their face, we had to look for solutions.”; “I had a lump in my throat and I wanted to cry.” 3) Spatiality - the lived space: how one experiences lived space in political participation activities as a young woman. One example: “And the feeling I got when I saw [it] it's like watching everybody going into a mousetrap.” 4) Temporality - lived time: how one experiences lived time in political participation activities as a young woman. One example: “Then, there were also meetings that went on forever…”
Keywords: applied hermeneutic phenomenology, existentials, hermeneutics, phenomenology, political participation
Procedia PDF Downloads 99
1738 The Use of Gender-Fair Language in CS National Exams
Authors: Moshe Leiba, Doron Zohar
Abstract:
Computer Science (CS) and programming are still considered a boys’ club and a male-dominated profession. This is also the case in high schools and higher education. In Israel, as in the rest of the world, fewer than 35% of the CS students who take the matriculation exams are female. The Israeli matriculation exams are written in the masculine language form. Gender-fair language (GFL) aims at reducing gender stereotyping and discrimination. Several strategies can be employed to make languages gender-fair and to treat women and men symmetrically, especially in languages with grammatical gender; among them are neutralization and using the plural form. This research aims at exploring computer science teachers’ beliefs regarding the use of gender-fair language in exams. An exploratory quantitative research methodology was employed to collect the data. A questionnaire was administered to 353 computer science teachers, 58% female and 42% male. 86% had been teaching for at least 3 years, and 59% had at least 7 years of teaching experience. 71% of the teachers teach in high school, and 82% of them prepare students for the matriculation exam in computer science. The questionnaire contained 2 matriculation exam questions from previous years and open-ended questions. Teachers were asked which form they thought was most suitable: (a) the existing (masculine) form, (b) both gender full forms (e.g., he/she), (c) both gender short forms, (d) the plural form, (e) the neutral form, or (f) the female form. 84% of the teachers recognized the need to change the existing masculine form in the matriculation exams. About 50% of them thought that using the plural form was the best-suited option. When comparing the teachers who are pro-change with those who are against it, no differences by gender or teaching experience were found. The teachers who favor gender-fair language justified it as making the exams more personal and motivating for female students.
Those who thought that the masculine form should remain argued that the female students do not complain and that a change in form would not influence female students to choose to study computer science. Some even argued that the change would not affect the students but could only improve their sense of identity or feeling toward the profession (which seems like a misconception). This research suggests that the teachers are pro-change and believe that re-formulating the matriculation exams is the right step towards encouraging more female students to choose computer science as their major study track and towards bridging the gap in gender equality. This indicates a bottom-up approach: not long after this research was conducted, the Israeli Ministry of Education decided to change the matriculation exams to gender-fair language using the plural form. In the coming years, with the transition to web-based examination, it is suggested to use personalization and adjust the language form in accordance with the student's gender.
Keywords: computer science, gender-fair language, teachers, national exams
Procedia PDF Downloads 116
1737 Effects of Bleaching Procedures on Dentine Sensitivity
Authors: Suhayla Reda Al-Banai
Abstract:
Problem Statement: Tooth whitening has been used for over one hundred and fifty years. The question of tooth whiteness is a complex one, since whiteness varies from individual to individual, depending on age, culture, etc. The whiteness achieved following treatment may depend on the type of whitening system used. There are a few side-effects to the process, including tooth sensitivity and gingival irritation, although some individuals may experience no pain or sensitivity following the procedure. Purpose: To systematically review the literature published up to 31st December 2021, to identify all relevant studies for inclusion, and to determine whether there is any evidence that the application of whitening procedures results in tooth sensitivity. Aim: To systematically review the available published literature to identify all relevant studies for inclusion and to determine any evidence demonstrating that the application of 10% and 15% carbamide peroxide in tooth whitening procedures results in tooth sensitivity. Material and Methods: Following a review of 70 relevant papers found by searching electronic databases (OVID MEDLINE and PubMed) and by hand searching of relevant print journals, 49 studies were identified, 42 papers were subsequently excluded, and 7 studies were finally accepted for inclusion. The extraction of data for inclusion was conducted by two reviewers. The main outcome measures were the methodology and assessment used by investigators to evaluate tooth sensitivity in tooth whitening studies. Results: The reported evaluation of tooth sensitivity during tooth whitening procedures was based on the subjective response of subjects rather than on a recognized evaluation methodology. One of the problems in evaluation was the lack of homogeneity in study design. Seven studies were included.
The included studies had the following essential features: randomized groups, placebo controls, and double-blind or single-blind designs. Drop-out rates were reported in two of the included studies. Three of the included studies reported sensitivity at the baseline visit, and two mentioned the exclusion criteria. Conclusions: The results were inconclusive due to the limited number of included studies, the study methodologies, and the way the evaluation of dentine sensitivity (DS) was reported. Tooth whitening procedures adversely affect both hard and soft tissues in the oral cavity; side-effects are mild and transient in nature. Whitening solutions with greater than 10% carbamide peroxide cause more tooth sensitivity. Studies using nightguard vital bleaching with 10% carbamide peroxide reported two side-effects, tooth sensitivity and gingival irritation, although tooth sensitivity was more prevalent than gingival irritation.
Keywords: dentine, sensitivity, bleaching, carbamide peroxide
Procedia PDF Downloads 73
1736 Three Foci of Trust as Potential Mediators in the Association Between Job Insecurity and Dynamic Organizational Capability: A Quantitative, Exploratory Study
Authors: Marita Heyns
Abstract:
Job insecurity is a distressing phenomenon which has far-reaching consequences for both employees and their organizations. Previously, much attention has been given to the link between job insecurity and individual-level performance outcomes, while less is known about how subjectively perceived job insecurity might transfer beyond the individual level to affect the performance of the organization on an aggregated level. Research focusing on how employees’ fear of job loss might affect the organization’s ability to respond proactively to volatility and drastic change through applying its capabilities of sensing, seizing, and reconfiguring appears to be practically non-existent. Equally little is known about the potential underlying mechanisms through which job insecurity might affect the dynamic capabilities of an organization. This study examines how job insecurity might affect dynamic organizational capability through trust as an underlying process. More specifically, it considers the simultaneous roles of trust at an impersonal (organizational) level as well as trust at an interpersonal level (in leaders and co-workers) as potential underlying mechanisms through which job insecurity might affect the organization’s dynamic capability to respond to opportunities and imminent, drastic change. A quantitative research approach and a stratified random sampling technique enabled the collection of data among 314 managers at four different plant sites of a large South African steel manufacturing organization undergoing dramatic changes. To assess the study hypotheses, the following statistical procedures were employed: structural equation modelling was performed in Mplus to evaluate the measurement and structural models.
The chi-square test for absolute fit, as well as alternative fit indexes such as the Comparative Fit Index, the Tucker-Lewis Index, the Root Mean Square Error of Approximation and the Standardized Root Mean Square Residual, were used as indicators of model fit. Composite reliabilities were calculated to evaluate the reliability of the factors. Finally, interaction effects were tested using PROCESS and the construction of two-sided 95% confidence intervals. The findings indicate that job insecurity had a lower-than-expected detrimental effect on evaluations of the organization’s dynamic capability, owing to the buffering effects of trust in the organization and in its leaders, respectively. In contrast, trust in colleagues did not seem to have any noticeable facilitative effect. The study proposes that both job insecurity and dynamic capability can be managed more effectively by also paying attention to factors that could promote trust in the organization and its leaders; some practical recommendations are given in this regard.
Keywords: dynamic organizational capability, impersonal trust, interpersonal trust, job insecurity
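A PROCESS-style percentile-bootstrap test of an indirect (mediated) effect can be sketched as follows; the simulated variables and path coefficients are invented stand-ins for the study's constructs, not its data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated stand-ins for the study's variables (illustrative only):
# X = job insecurity, M = trust in the organization, Y = dynamic capability
n = 314
X = rng.normal(size=n)
M = -0.5 * X + rng.normal(size=n)            # path a: insecurity lowers trust
Y = 0.6 * M - 0.1 * X + rng.normal(size=n)   # path b and direct path c'

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # slope of Y on M given X
    return a * b

# Percentile bootstrap CI for the indirect effect a*b (PROCESS-style)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(X, M, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero is the usual evidence for mediation in this framework.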
Procedia PDF Downloads 95
1735 The Use of Technology in Theatrical Performances as a Tool of Audience’s Engagement
Authors: Chrysoula Bousiouta
Abstract:
Throughout the history of theatre, technology has played an important role both in influencing the relationship between performance and audience and in offering different kinds of experiences. The use of technology dates back to ancient times, with the introduction of devices such as the “deus ex machina” of ancient Greek theatre. Taking into account the key techniques and experiences used throughout history, this paper investigates how technology, through new media, influences contemporary theatre. In the context of this research, technology is defined as projections, audio environments, video projections, sensors and tele-connections, all used alongside the performance to challenge the audience’s participation. The theoretical framework of the research covers, besides the history of theatre, the theory of the “experience economy” that superseded the service and goods economy. The research is based on a qualitative, comparative analysis of two case studies, Contact Theatre in Manchester (United Kingdom) and Bios in Athens (Greece). The data collection includes desk research complemented with semi-structured interviews. Building on the results of the research, one could claim that the intended experience of modern/contemporary theatre is that of engagement. In this context, technology, as defined above, plays a leading role in creating it. This experience passes through and exists in the middle of the realms of entertainment, education, estheticism and escapism. Furthermore, it is observed that nowadays theatre is not only about acting but also about performing: performances are unfinished without the participation of the audience. Both case studies try to achieve the experience of engagement through practices that promote the attraction of attention, the increase of imagination, interaction, intimacy and true activity.
These practices are achieved through the script, the scenery, the language and the environment of a performance. Contact and Bios consider technology an intimate tool for accomplishing the above, and they make extended use of it. The research compiles a notable record of the technological techniques that modern theatres use. The use of technology, inside or outside the limits of film techniques, helps to rivet the attention of the audience, to make performances enjoyable, to give the sense of the “unfinished”, or to stage events that take place around the spectators and force them to take action, becoming spect-actors. The advantage of technology is that it can be used as a hook for interaction at all stages of a performance. Further research in the field could explore alternative ways of combining technology and theatre, or analyze how a performance is perceived through the use of technological artifacts.
Keywords: experience of engagement, interactive theatre, modern theatre, performance, technology
Procedia PDF Downloads 253
1734 TARF: Web Toolkit for Annotating RNA-Related Genomic Features
Abstract:
Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed TARF, a dedicated web toolkit for annotating RNA-related genomic features. TARF provides a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded features in BED format and either specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfills three main functions. First, it adds annotation on gene and RNA transcript components: for every feature provided by the user, overlaps with RNA transcript components are identified, and the information is combined in one table available for copying and download; summary statistics on ambiguous assignments are also provided. Second, the tool offers a convenient visualization of the features at the single gene/transcript level.
For the selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package: the distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with 3 different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5’UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
Keywords: RNA-related genomic features, annotation, visualization, web server
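The core annotation step (assigning genome-based features to transcript components by overlap) can be sketched as below; the coordinates and component boundaries are invented for illustration, and real tools such as TARF must additionally handle strand and splicing:

```python
# Hypothetical plus-strand transcript; component boundaries in genomic
# coordinates are invented for illustration.
transcript = {
    "5'UTR": (1000, 1200),
    "CDS":   (1200, 3000),
    "3'UTR": (3000, 3500),
}

features = [  # BED-like half-open (start, end) intervals, e.g. m6A sites
    (1100, 1102),
    (2950, 2952),
    (3400, 3402),
]

def annotate(start, end, components):
    """Return the transcript components a feature overlaps."""
    return [name for name, (cs, ce) in components.items()
            if start < ce and end > cs]  # half-open interval overlap test

for start, end in features:
    print((start, end), "->", annotate(start, end, transcript))
```

A feature spanning a component boundary would be reported under both components, which is exactly the "ambiguous assignment" case the toolkit summarizes.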
Procedia PDF Downloads 212
1733 Surface Adjustments for Endothelialization of Decellularized Porcine Pericardium
Authors: M. Markova, E. Filova, O. Kaplan, R. Matejka, L. Bacakova
Abstract:
The porcine pericardium is used as a material for cardiac and aortic valve substitutes. Current biological aortic heart valve prostheses have a limited lifetime because they undergo degeneration. In order to make them more biocompatible and prolong their lifetime, it is necessary to reseed the decellularized prostheses with endothelial cells and with valve interstitial cells. The endothelialization of the prosthesis surface may be supported by a suitable chemical surface modification. The aim of this study is to prepare bioactive fibrin layers that would both support the endothelialization of porcine pericardium and enhance the differentiation and maturation of the seeded endothelial cells. As materials for surface adjustment, we used layers of fibrin with or without heparin, some of them with adsorbed or chemically bound FGF2, VEGF or their combination. Fibrin assemblies were prepared in 24-well cell culture plates and seeded with HSVEC (human saphenous vein endothelial cells) at a density of 20,000 cells per well in EGM-2 medium with 0.5% FS and without heparin, FGF2 or VEGF; the medium was supplemented with aprotinin (200 U/mL). As a control surface, polystyrene (PS) was used. Fibrin was also used as a homogeneous impregnation of the decellularized porcine pericardium throughout the scaffolds. The morphology, density and viability of the seeded endothelial cells were observed in micrographs after staining the samples with a LIVE/DEAD cytotoxicity/viability assay kit on days 1, 3 and 7. Endothelial cells were immunocytochemically stained for proteins involved in cell adhesion (alphaV integrin, vinculin and VE-cadherin), markers of endothelial cell differentiation and maturation (von Willebrand factor and CD31), and extracellular matrix proteins typically produced by endothelial cells (type IV collagen and laminin). The staining intensities were subsequently quantified using software.
HSVEC cells grew better on each of the prepared surfaces than on the control surface and reached confluency. The highest cell densities were obtained on the fibrin surface with heparin and both growth factors used together. The intensity of alphaV integrin staining was highest on samples with a remaining fibrin layer, i.e., on layers with lower cell densities, i.e., on fibrin without heparin. Vinculin staining was apparent but rather diffuse on fibrin with both FGF2 and VEGF and on control PS. Endothelial cells on all samples were positively stained for von Willebrand factor and CD31. VE-cadherin receptor clusters were best developed on fibrin with heparin and growth factors. Significantly stronger staining of type IV collagen was observed on fibrin with heparin and both growth factors. Endothelial cells on all samples produced laminin-1. The decellularized pericardium was homogeneously filled with fibrin structures. These fibrin-modified pericardium samples will be further seeded with cells and cultured in a bioreactor. Fibrin layers with or without heparin, and with adsorbed or chemically bound FGF2, VEGF or their combination, are good surfaces for the endothelialization of cardiovascular prostheses or porcine pericardium-based heart valves. Supported by the Ministry of Health, grants No. 15-29153A and 15-32497A, and the Grant Agency of the Czech Republic, project No. P108/12/G108.
Keywords: aortic valve prosthesis, FGF2, heparin, HSVEC cells, VEGF
Procedia PDF Downloads 271
1732 Cosmetic Recommendation Approach Using Machine Learning
Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake
Abstract:
The demand for cosmetic products is growing to fulfill consumer needs for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients, which may help keep the skin healthy or may cause damage, and not every ingredient performs the same way on every person. The most appropriate way to select a healthy cosmetic product is to identify the consumer's skin type first and then select the most suitable product with safe ingredients. The selection process for cosmetic products is therefore complicated, and consumer surveys have shown that it is most often carried out improperly. This study proposes a content-based system that recommends cosmetic products based on human factors: skin type, gender, and price range will be considered. The proposed system will be implemented using machine learning, with the consumer's skin type, gender, and price range taken as inputs. The skin type will be derived using the Baumann Skin Type Questionnaire, a value-based approach comprising a series of questions that maps the user to one of the 16 skin types of the Baumann Skin Type Indicator (BSTI). Two datasets were collected for the research: a user dataset and a cosmetic dataset. The user dataset was collected through a questionnaire given to the public. The cosmetic dataset contains product details for five product categories (moisturizer, cleanser, sun protector, face mask, eye cream). An adaptation of TF-IDF (Term Frequency-Inverse Document Frequency) is applied to vectorize the cosmetic ingredients in the generic cosmetic products dataset and the user-preferred dataset.
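As an illustration only, the mapping from questionnaire scores to one of the 16 BSTI type codes can be sketched as below. The axis letters follow the Baumann scheme (oily/dry, sensitive/resistant, pigmented/non-pigmented, wrinkled/tight), but the 1-5 scale and the midpoint threshold are assumptions, not the actual BSTI scoring rules.

```python
# Sketch: derive a Baumann skin type code from four axis scores.
# The 1-5 scale and the 2.5 midpoint are assumed here, not the BSTI's rules.
AXES = [("O", "D"), ("S", "R"), ("P", "N"), ("W", "T")]  # oily/dry, sensitive/resistant,
                                                         # pigmented/non-pigmented, wrinkled/tight

def baumann_type(scores, midpoint=2.5):
    """scores: one average per axis on an assumed 1-5 scale.
    A score above the midpoint selects the first letter of the axis."""
    return "".join(hi if s > midpoint else lo
                   for (hi, lo), s in zip(AXES, scores))

print(baumann_type([3.8, 1.9, 3.1, 2.0]))  # oily, resistant, pigmented, tight
```

Four binary axes yield the 2^4 = 16 types the BSTI defines; only the thresholding scheme above is invented for the sketch.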
Using the TF-IDF vectors, each user-preferred product and each generic cosmetic product can be represented as a sparse vector. The similarity between each user-preferred product and each generic cosmetic product is calculated using the cosine similarity method, and a similarity matrix is used for the recommendation process: the higher the similarity, the better the match for the consumer. Sorting a user's column of the similarity matrix in descending order of similarity yields the ranked list of recommended products. Although the results return a list of similar products, since user information such as gender and price range has also been gathered, further optimization can be done by weighting those parameters after a set of recommended products has been retrieved for a user.
Keywords: content-based filtering, cosmetics, machine learning, recommendation system
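A minimal sketch of the vectorization and ranking steps described above, using a hand-rolled TF-IDF and cosine similarity. The product names and ingredient lists are hypothetical, and this smoothed TF-IDF variant is an assumption, not necessarily the exact formulation used in the study.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute smoothed TF-IDF vectors (as dicts) for lists of ingredients."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency per ingredient
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# hypothetical generic products and one user-preferred ingredient list
products = {
    "moisturizer_a": ["water", "glycerin", "niacinamide"],
    "cleanser_b": ["water", "glycerin", "cocamidopropyl betaine"],
    "sunscreen_c": ["water", "zinc oxide", "titanium dioxide"],
}
preferred = ["water", "glycerin", "niacinamide"]

vecs = tfidf_vectors(list(products.values()) + [preferred])
user_vec = vecs[-1]
scores = {name: cosine(user_vec, v) for name, v in zip(products, vecs)}
ranked = sorted(scores, key=scores.get, reverse=True)  # descending similarity
print(ranked)
```

Sorting the user's similarity scores in descending order, as in the last line, gives the recommendation ranking; gender and price-range weighting would then be applied to this list.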
Procedia PDF Downloads 137
1731 Signaling Theory: An Investigation on the Informativeness of Dividends and Earnings Announcements
Authors: Faustina Masocha, Vusani Moyo
Abstract:
For decades, dividend announcements have been presumed to contain important signals about the future prospects of companies, and the same has been presumed about management earnings announcements. Despite both being considered informative, a number of researchers have questioned their credibility and found both to contain only short-term signals. Regarding dividend announcements, some authors argued that although they may contain important information that changes share prices, and consequently generates abnormal returns, they are less informative than other signaling tools such as earnings announcements. Yet this claim has been refuted by other researchers, who found the effect of earnings to be transitory and of little value to shareholders, as indicated by the small abnormal returns earned during the period surrounding earnings announcements. It is thus apparent that both dividends and earnings have been hypothesized to have a signaling impact, which prompts the question of which of these two signaling tools is more informative. To answer this question, two follow-up questions were asked. The first sought to determine which event has the greater effect on share prices, while the second focused on which event influences trading volume the most. To answer the first question and evaluate the effect of each event on share prices, an event study methodology was employed on a sample of the top 10 JSE-listed companies, using data collected from 2012 to 2019, to determine whether shareholders gained abnormal returns (ARs) around announcement dates. The event that resulted in the most persistent and largest ARs was considered more informative.
For the second follow-up question, an investigation was conducted to determine whether dividend or earnings announcements influenced trading patterns, resulting in abnormal trading volume (ATV) around announcement time. The event that resulted in the most ATV was considered more informative. Using an estimation period of 20 days, an event window of 21 days, and hypothesis testing, it was found that announcements of earnings increases resulted in the most ARs and Cumulative Abnormal Returns (CARs) and had a lasting effect, compared with dividend announcements, whose effect lasted only until day +3. This supports empirical arguments that the signaling effect of dividends is diminishing. It was also found that when reported earnings declined relative to the previous period, trading volume increased, resulting in ATV. Although dividend announcements did produce abnormal returns, these were smaller than those earned around earnings announcements, which refutes a number of theoretical and empirical arguments that found dividends to be more informative than earnings announcements.
Keywords: dividend signaling, event study methodology, information content of earnings, signaling theory
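A minimal sketch of the market-model event-study arithmetic behind ARs and CARs. The return series below are hypothetical illustrative numbers, and the single-stock market model is an assumed specification, not necessarily the exact one used in the study.

```python
# Event-study sketch: fit the market model R = alpha + beta * Rm on an
# estimation window, then compute abnormal returns (AR) and their sum (CAR)
# over the event window. All returns are hypothetical illustrative values.
def ols(x, y):
    """Ordinary least squares slope and intercept for the market model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    return alpha, beta

# estimation-window daily returns (market index, then the stock)
market_est = [0.01, -0.005, 0.007, 0.002, -0.01, 0.004, 0.006, -0.002, 0.003, 0.001]
stock_est = [0.012, -0.004, 0.009, 0.001, -0.011, 0.005, 0.008, -0.001, 0.004, 0.002]
alpha, beta = ols(market_est, stock_est)

# event-window returns; the stock jumps on the announcement day (last entry)
market_evt = [0.002, -0.001, 0.015]
stock_evt = [0.004, 0.000, 0.035]

ar = [r - (alpha + beta * rm) for r, rm in zip(stock_evt, market_evt)]  # abnormal returns
car = sum(ar)                                                           # cumulative AR
print([round(a, 4) for a in ar], round(car, 4))
```

The announcement-day AR dominates the window, which is the pattern the study looks for when judging which announcement type is more informative.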
Procedia PDF Downloads 180
1730 Technology for Good: Deploying Artificial Intelligence to Analyze Participant Response to Anti-Trafficking Education
Authors: Ray Bryant
Abstract:
3Strands Global Foundation (3SGF), a non-profit with a mission to mobilize communities to combat human trafficking through prevention education and reintegration programs, launched a groundbreaking study demonstrating the usage and benefits of artificial intelligence in the fight against human trafficking. Having gathered more than 30,000 stories from counselors and school staff who have gone through its PROTECT Prevention Education program, 3SGF sought to develop a methodology to measure the effectiveness of the training, which helps educators and school staff identify physical signs and behaviors indicating a student is being victimized. The program further illustrates how to recognize and respond to trauma, teaches the steps to take to report human trafficking, and shows how to connect victims with the proper professionals. 3SGF partnered with Levity, a leader in no-code artificial intelligence (AI) automation, to carry out the study using natural language processing, a branch of AI, to measure the effectiveness of the prevention education program. Applying the logic created for the study, the platform analyzed and categorized each story. If a story, taken directly from the educator, demonstrated one or more of the desired outcomes (Increased Awareness, Increased Knowledge, or Intended Behavior Change), a label was applied, and the system added a confidence level for each identified label. The study results were generated with a 99% confidence level. Preliminary results from the 30,000 stories gathered make it overwhelmingly clear that a significant majority of participants now have increased awareness of the issue, demonstrated better knowledge of how to help prevent the crime, and expressed an intention to change how they approach their daily work.
In addition, approximately 30% of the stories contained comments from educators expressing that they wish they had had this knowledge sooner, as they can think of many students they would have been able to help. Objectives of Research: to solve the problem of analyzing and accurately categorizing more than 30,000 data points of participant feedback, in order to evaluate the success of a human trafficking prevention program, using AI and natural language processing. Methodologies Used: in conjunction with our strategic partner, Levity, we created our own NLP analysis engine specific to our problem. Contributions to Research: the intersection of AI and human rights, and how to utilize technology to combat human trafficking.
Keywords: AI, technology, human trafficking, prevention
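For illustration only, a toy keyword-based stand-in for the labeling-with-confidence step can be sketched as follows. The outcome labels come from the study, but the keywords, threshold, and confidence proxy are assumptions and bear no relation to Levity's actual NLP engine.

```python
# Toy sketch (NOT Levity's system): label free-text feedback with the study's
# outcome categories and attach a naive confidence score per label.
OUTCOME_KEYWORDS = {
    "Increased Awareness": ["aware", "realize", "eye-opening"],
    "Increased Knowledge": ["learned", "know the signs", "understand"],
    "Intended Behavior Change": ["will now", "going to", "plan to"],
}

def label_story(text, threshold=0.3):
    """Return (label, confidence) pairs whose confidence meets the threshold."""
    text = text.lower()
    labels = []
    for outcome, kws in OUTCOME_KEYWORDS.items():
        hits = sum(kw in text for kw in kws)
        conf = hits / len(kws)          # naive confidence proxy: keyword coverage
        if conf >= threshold:
            labels.append((outcome, round(conf, 2)))
    return labels

story = "I realize now how common this is, and I plan to watch for the warning signs."
print(label_story(story))
```

A real pipeline would use a trained language model rather than keyword matching, but the label-plus-confidence output shape is the same idea the abstract describes.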
Procedia PDF Downloads 65
1729 Ultrafiltration Process Intensification for Municipal Wastewater Reuse: Water Quality, Optimization of Operating Conditions and Fouling Management
Authors: J. Yang, M. Monnot, T. Eljaddi, L. Simonian, L. Ercolei, P. Moulin
Abstract:
The application of membrane technology to wastewater treatment has expanded rapidly under increasingly stringent legislation and environmental protection requirements. At the same time, water resources are becoming precious, and water reuse has gained popularity. Ultrafiltration (UF) in particular is a very promising technology for water reuse, as it can retain organic matter, suspended solids, colloids, and microorganisms. Nevertheless, few studies in the literature deal with the operating optimization of UF as a tertiary treatment for water reuse at semi-industrial scale. This study therefore aims to explore permeate water quality and to optimize operating parameters (maximizing productivity and minimizing irreversible fouling) by operating a UF pilot plant under real conditions. A fully automatic semi-industrial UF pilot plant with periodic classic backwashes (CB) and air backwashes (AB) was set up to filter the secondary effluent of an urban wastewater treatment plant (WWTP) in France. In this plant, the secondary treatment consists of a conventional activated sludge process followed by a sedimentation tank. The UF process was thus defined as a tertiary treatment and was operated at constant flux. A combination of CBs and chlorinated ABs was used for better fouling management. A 200 kDa hollow fiber membrane was used in the UF module, with an initial permeability (for WWTP outlet water) of 600 L·m⁻²·h⁻¹·bar⁻¹ and a total filtration surface of 9 m². Fifteen filtration conditions with different fluxes, filtration times, and air backwash frequencies were each operated for more than 40 hours to observe their hydraulic filtration performances. On comparison, the best sustainable condition was a flux of 60 L·h⁻¹·m⁻², a filtration time of 60 min, and a backwash frequency of 1 AB every 3 CBs.
The optimized condition stands out from the others with a > 92% water recovery rate, better irreversible fouling control, stable permeability variation, efficient backwash reversibility (80% for CB and 150% for AB), and no chemical washing required over 40 h of filtration. For all tested conditions, the permeate water quality met the water reuse guidelines of the World Health Organization (WHO), French standards, and the regulation of the European Parliament adopted in May 2020 setting minimum requirements for water reuse in agriculture. In the permeate, total suspended solids, biochemical oxygen demand, and turbidity were reduced to < 2 mg·L⁻¹, ≤ 10 mg·L⁻¹, and < 0.5 NTU, respectively; Escherichia coli and Enterococci showed > 5-log removal, and the other required microbiological analyses were below detection limits. Additionally, because of the COVID-19 pandemic, coronavirus SARS-CoV-2 was measured in the raw wastewater of the WWTP, the UF feed, and the UF permeate in November 2020. The raw wastewater tested positive, above the detection limit but below the quantification limit, while the UF feed and UF permeate tested negative for SARS-CoV-2 by PCR assay. In summary, this work confirms the great interest of UF as an intensified tertiary treatment for water reuse and gives operational indications for future industrial-scale production of reclaimed water.
Keywords: semi-industrial UF pilot plant, water reuse, fouling management, coronavirus
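As a rough illustration of how a net water recovery rate of this order arises, the sketch below combines the reported flux (60 L·h⁻¹·m⁻²), membrane area (9 m²), and filtration time (60 min) with an assumed backwash consumption of 40 L per cycle; that backwash figure is hypothetical, not reported in the study, and the pilot's exact recovery accounting may differ.

```python
# Sketch: net water recovery for one UF filtration cycle, under one common
# definition: (permeate produced - permeate consumed by backwashing) / permeate produced.
# The backwash volume below is an assumed figure, not the pilot plant's.
def recovery_rate(flux_lmh, area_m2, filtration_min, backwash_volume_l):
    permeate_l = flux_lmh * area_m2 * (filtration_min / 60.0)  # L produced per cycle
    return (permeate_l - backwash_volume_l) / permeate_l

# condition reported as optimal: 60 L/(h*m2), 9 m2 module, 60 min cycles
r = recovery_rate(flux_lmh=60, area_m2=9, filtration_min=60, backwash_volume_l=40)
print(round(100 * r, 1))  # percent
```

With these assumptions one cycle produces 540 L of permeate, so a backwash draw of a few tens of litres keeps the net recovery above the 92% reported.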
Procedia PDF Downloads 120
1728 Indicators of Sustainable Intensification: Views from British Stakeholders
Authors: N. Mahon, I. Crute, M. Di Bonito, E. Simmons, M. M. Islam
Abstract:
Growing interest in the concept of the sustainable intensification (SI) of agriculture has been shown by national governments, transnational agribusinesses, intergovernmental organizations, and research institutes, amongst others. This interest may arise because SI is seen as a 'third way' for agricultural development, between the seemingly disparate paradigms of 'intensive' agriculture and more 'sustainable' forms of agriculture. However, there is a lack of consensus as to what SI means in practice and how it should be measured using indicators of change. This has led to growing confusion, disagreement, and skepticism regarding the concept, especially amongst civil society organizations, both in the UK and in other countries, and has prompted the need for bottom-up, participatory approaches to identifying indicators of SI. Our aim is to identify the views of British stakeholders regarding the areas of agreement and disagreement as to what SI is and how it should be measured in the UK using indicators of change. Data for this investigation came from 32 semi-structured interviews, conducted between 2015 and 2016, with stakeholders from throughout the UK food system. In total, 110 indicators of SI were identified, covering a wide variety of subjects including biophysical, social, and political considerations. A number of indicators appeared to be widely applicable and were similar to those suggested in the global literature. These include indicators related to the management of the natural resources on which agriculture relies, e.g., 'Soil organic matter', 'Number of pollinators per hectare', and 'Depth of water table', as well as those related to agricultural externalities, e.g., 'Greenhouse gas emissions' and 'Concentrations of agro-chemicals in waterways'. However, many of the indicators were much more specific to the UK context, including 'Areas of high nature value farmland', 'Length of hedgerows per hectare', and 'Age of farmers'.
Furthermore, tensions emerged when participants considered the relative importance of agricultural mechanization versus levels of agricultural employment, the pros and cons of intensive, housed livestock systems, and the value of wild biodiversity versus the desire to increase agricultural yields. These areas of disagreement suggest the need to carefully consider the trade-offs inherent in the concept. Our findings indicate that, in order to begin to resolve the confusion surrounding SI, it needs to be considered in a context-specific manner rather than as a single uniform concept. Furthermore, both the environmental and the social parameters within which agriculture operates need to be considered in order to operationalize SI in a meaningful way. We suggest that participatory approaches are key to this process, facilitating dialogue and collaborative learning between all stakeholders and allowing them to reach a shared vision for the future of agricultural development.
Keywords: agriculture, indicators, participatory approach, sustainable intensification
Procedia PDF Downloads 226
1727 Bioleaching of Metals Contained in Spent Catalysts by Acidithiobacillus thiooxidans DSM 26636
Authors: Andrea M. Rivas-Castillo, Marlenne Gómez-Ramirez, Isela Rodríguez-Pozos, Norma G. Rojas-Avelizapa
Abstract:
Spent catalysts are considered hazardous residues of major concern, mainly due to the simultaneous presence of several metals in elevated concentrations. Although hydrometallurgical, pyrometallurgical, and chelating-agent methods are available to remove and recover some metals contained in spent catalysts, these procedures generate potentially hazardous wastes and emit harmful gases. Biotechnological treatments are therefore gaining importance as a way to avoid the negative impacts of chemical technologies. To this end, diverse microorganisms have been used to assess the removal of metals from spent catalysts, comprising bacteria, archaea, and fungi, whose resistance and metal uptake capabilities differ depending on the microorganism tested. Acidophilic sulfur-oxidizing bacteria, namely Acidithiobacillus thiooxidans and Acidithiobacillus ferrooxidans, have been used to investigate the biotreatment and extraction of valuable metals from spent catalysts, as they are able to produce leaching agents such as sulfuric acid and sulfur oxidation intermediates. In the present work, the ability of A. thiooxidans DSM 26636 to bioleach the metals contained in five different spent catalysts was assessed by growing the culture in modified Starkey mineral medium (with elemental sulfur at 1%, w/v) and 1% (w/v) pulp density of each residue for up to 21 days at 30 °C and 150 rpm. Sulfur-oxidizing activity was periodically evaluated by determining the sulfate concentration in the supernatants according to the NMX-k-436-1977 method. Sulfuric acid production in the supernatants was assessed by titration with 0.5 M NaOH, using bromothymol blue as an acid-base indicator, and by measuring pH with a digital potentiometer. Inductively Coupled Plasma - Optical Emission Spectrometry was used to analyze metal removal from the five different spent catalysts by A. thiooxidans DSM 26636.
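A back-of-the-envelope sketch of the sulfuric acid determination by NaOH titration described above. Only the 0.5 M NaOH concentration comes from the text; the titrant and sample volumes are hypothetical illustrative values.

```python
# Sketch: sulfuric acid concentration from an acid-base titration.
# Stoichiometry: H2SO4 + 2 NaOH -> Na2SO4 + 2 H2O, so mol(H2SO4) = mol(NaOH) / 2.
def h2so4_molarity(naoh_molarity, naoh_volume_ml, sample_volume_ml):
    mol_naoh = naoh_molarity * naoh_volume_ml / 1000.0   # moles of titrant at endpoint
    return (mol_naoh / 2.0) / (sample_volume_ml / 1000.0)  # mol/L of H2SO4

# 0.5 M NaOH as in the assay; assumed 8.0 mL titrant for a 10.0 mL supernatant sample
m = h2so4_molarity(0.5, 8.0, 10.0)
print(round(m, 3))  # mol/L
```

The factor of two reflects sulfuric acid being diprotic; the bromothymol blue endpoint in the assay marks when the titrated sample reaches near-neutral pH.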
The results show that, as expected, sulfuric acid production is directly related to the drop in pH and to the highest metal removal efficiencies. Al and Fe were consistently removed from refinery spent catalysts regardless of their origin and previous usage, although removals varied from 9.5 ± 2.2 to 439 ± 3.9 mg/kg for Al, and from 7.13 ± 0.31 to 368.4 ± 47.8 mg/kg for Fe, depending on the spent catalyst tested. In addition, bioleaching of metals such as Mg, Ni, and Si was also obtained from automotive spent catalysts, with removals of up to 66 ± 2.2, 6.2 ± 0.07, and 100 ± 2.4, respectively. Hence, the data presented here demonstrate the potential of A. thiooxidans DSM 26636 for the simultaneous bioleaching of metals contained in spent catalysts of diverse provenance.
Keywords: bioleaching, metal removal, spent catalysts, Acidithiobacillus thiooxidans
Procedia PDF Downloads 144
1726 Asthma Nurse Specialist Improves the Management of Acute Asthma in a University Teaching Hospital: A Quality Improvement Project
Authors: T. Suleiman, C. Mchugh, H. Ranu
Abstract:
Background: Asthma continues to be associated with poor patient outcomes, including mortality. An audit of the management of acute asthma admissions in our hospital in 2020 found poor compliance with National Asthma and COPD Audit Project (NACAP) standards, which set out to improve inpatient asthma care. Clinical nurse specialists have been shown to improve patient care across a range of specialties. In September 2021, an asthma nurse specialist (ANS) was employed in our hospital. Aim: To re-audit the management of acute asthma admissions using NACAP standards and assess for quality improvement after employment of the ANS. Methodology: NACAP standards are wide-reaching; we therefore focused on 'specific elements of good practice' in addition to the provision of inhaled corticosteroids (ICS) on discharge. Medical notes were retrospectively requested from the hospital coding department and selected as per NACAP inclusion criteria, followed by data collection and entry into the NACAP database. As this was a clinical audit, ethics approval was not required. Results: Cycles 1 (pre-ANS) and 2 (post-ANS) of the audit included 20 and 32 patients, respectively, with comparable baseline demographics. No patients had a discharge bundle completed in cycle 1 vs. 84% of cases in cycle 2. Regarding specific components of the bundle, 25% of patients in cycle 1 had their inhaler technique checked vs. 91% in cycle 2. Maintenance medications were reviewed for 80% of patients in cycle 1 vs. 97% in cycle 2. Medication adherence was addressed in 20% of cases in cycle 1 vs. 88% in cycle 2. Personalized asthma action plans were not issued or reviewed in any case in cycle 1, compared with 84% of cases in cycle 2. Triggers were discussed in 30% of cases in cycle 1 vs. 88% in cycle 2. Tobacco dependence was addressed in 44% of cases in cycle 1 vs. 100% in cycle 2. No patients in cycle 1 had community follow-up requested within 2 days vs. 81% of patients in cycle 2. Similarly, 20% of patients in cycle 1 vs. 88% in cycle 2 had a 4-week asthma clinic follow-up requested. 75% of patients in cycle 1 received ICS on discharge, compared with 94% in cycle 2. Conclusion: Our quality improvement project demonstrates the utility of an ANS in improving the management of acute asthma admissions, evidenced here through concordance with NACAP standards. Asthma is a complex condition with biological, psychological, and sociological components; an ANS is therefore a suitable intervention to improve concordance with guidelines. The ANS likely impacted performance both directly, for example by checking inhaler technique, and indirectly, acting as a safety net to ensure doctors included ICS on discharge.
Keywords: asthma, nurse specialist, clinical audit, quality improvement
Procedia PDF Downloads 383
1725 Ways Management of Foods Not Served to Consumers in Food Service Sector
Authors: Marzena Tomaszewska, Beata Bilska, Danuta Kolozyn-Krajewska
Abstract:
Food loss and food waste are a global problem of the modern economy. The research undertaken aimed to analyze how food is handled in catering establishments with respect to food waste and to identify the main ways of managing foods and dishes not served to consumers. A survey study was conducted from January to June 2019. The selection of catering establishments participating in the study was deliberate, and the study included only establishments located in the Mazowieckie Voivodeship (Poland). 42 completed questionnaires were collected. For some questions, answers were given on a 5-point scale from 1 to 5 (from 'always'/'every day' to 'never'); the survey also included closed questions with a suggested list of answers. The respondents stated that in their workplaces, dishes served cold and hot ready meals are discarded every day or almost every day (23.7% and 20.5% of answers, respectively). The procedure most frequently used for dealing with dishes not served to consumers on a given day is storage at a cool temperature until the following day. One-fifth of respondents admitted that consumers 'always' or 'usually' leave uneaten meals on their plates, and over 41% said they 'sometimes' do so. It was additionally found that food not used in the food service sector is most often thrown into a public rubbish container. Most often thrown into the public container (with communal trash) were expired products (80.0%), plate waste (80.0%), and inedible products such as fruit and vegetable peels and egg shells (77.5%). Used deep-frying oil (62.5%) was the item most frequently thrown into the container dedicated exclusively to food waste. 10% of respondents indicated that inedible products in their workplaces are allocated to animal feed. Food waste in the food service sector remains an insufficiently studied issue, as owners of these establishments are often unwilling to disclose data pertaining to the subject. Incorrect ways of managing foods not served to consumers were observed.
There is a need to develop educational activities for employees and management concerning food waste management in the food service sector. This publication has been developed under contract No. Gospostrateg1/385753/1/NCBR/2018 with the National Center for Research and Development, for carrying out and funding a project implemented as part of the 'The social and economic development of Poland in the conditions of globalizing markets - GOSPOSTRATEG' program, entitled 'Developing a system for monitoring wasted food and an effective program to rationalize losses and reduce food wastage' (acronym PROM).
Keywords: food waste, inedible products, plate waste, used deep-frying oil
Procedia PDF Downloads 123