Search results for: mass transfer coefficient
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7778

248 Evaluation of Microstructure, Mechanical and Abrasive Wear Response of in situ TiC Particles Reinforced Zinc Aluminum Matrix Alloy Composites

Authors: Mohammad M. Khan, Pankaj Agarwal

Abstract:

The present investigation deals with the microstructure, mechanical properties and detailed wear characteristics of in situ TiC particle-reinforced zinc aluminum-based metal matrix composites. The composites were synthesized by the liquid metallurgy route using the vortex technique. The composite was found to be harder than the matrix alloy due to the high hardness of the dispersoid particles therein. It was, however, lower in ultimate tensile strength and ductility than the matrix alloy, which could be attributed to the use of coarser dispersoid particles and larger interparticle spacing. Reasonably uniform distribution of the dispersoid phase in the alloy matrix and good interfacial bonding between the dispersoid and matrix were observed. The composite exhibited a predominantly brittle mode of fracture with microcracking in the dispersoid phase, indicating effective transfer of load from the matrix to the dispersoid particles. To study the wear behavior of the samples, three types of tests were performed: (i) sliding wear tests using a pin-on-disc machine under dry conditions, (ii) high-stress (two-body) abrasive wear tests using different combinations of abrasive media and specimen surfaces under conditions of varying abrasive size, traversal distance and load, and (iii) low-stress (three-body) abrasion tests using a rubber wheel abrasion tester at various loads and traversal distances using different abrasive media. In the sliding wear tests, significantly lower wear rates were observed for the base alloy than for the composite. This has been attributed to the poor room-temperature strength resulting from the increased microcracking tendency of the composite relative to the matrix alloy. Wear surfaces of the composite revealed fragmented dispersoid particles and microcracking, whereas the wear surface of the matrix alloy was smooth with shallow grooves. 
During high-stress abrasion, the presence of the reinforcement offered increased resistance to the destructive action of the abrasive particles. The microcracking tendency was also enhanced by the reinforcement in the matrix, but its negative effect was outweighed by the abrasion resistance of the dispersoid. As a result, the composite attained better wear resistance than the matrix alloy. The wear rate increased with load and abrasive size due to the larger depth of cut made by the abrasive medium. The wear surfaces revealed fine grooves and damaged reinforcement particles, while subsurface regions revealed limited plastic deformation, microcracking and fracturing of the dispersoid phase. During low-stress abrasion, the composite experienced a significantly lower wear rate than the matrix alloy irrespective of the test conditions. This could be attributed to the wear resistance offered by the hard dispersoid phase, which protects the softer matrix against the destructive action of the abrasive medium. Abraded surfaces of the composite showed protrusion of the dispersoid phase. The subsurface regions of the composite exhibited decohesion of the dispersoid phase along with its microcracking, and limited plastic deformation in the vicinity of the abraded surfaces.
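The sliding and abrasive wear rates discussed above are conventionally expressed as volume loss per unit sliding distance; a common textbook relation for sliding wear is Archard's law. The abstract does not state that this model was used, so the sketch below, with invented numbers, is only illustrative of how hardness enters the picture:

```python
def archard_wear_volume(k, load_n, distance_m, hardness_pa):
    """Worn volume V = k * W * L / H (Archard's law), in m^3."""
    return k * load_n * distance_m / hardness_pa

# Invented values: same load and sliding distance, two hardnesses.
v_alloy = archard_wear_volume(k=1e-4, load_n=20, distance_m=1000, hardness_pa=0.9e9)
v_composite = archard_wear_volume(k=1e-4, load_n=20, distance_m=1000, hardness_pa=1.2e9)

# Archard alone predicts the harder composite wears less; the higher sliding
# wear of the composite reported above shows microcracking can override this.
```

This also makes the abstract's finding legible: hardness helps only when the wear coefficient k does not rise faster through microcracking.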

Keywords: abrasive wear, liquid metallurgy, metal matrix composite, SEM

Procedia PDF Downloads 152
247 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are inherently prone to artefacts because of their image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. Artefact types include noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, all of which cause serious difficulties in reading images. It is therefore desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that allows better interpretation of anatomical and pathological characteristics. However, this is a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked into multiple layers. A denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input. The reconstruction error is measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme applies residual-driven dropout determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation. 
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently solved by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. Quantitative evaluation is performed in terms of PSNR, and qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
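The decomposition step can be illustrated in one dimension. A minimal sketch, using plain (sub)gradient descent in place of the primal-dual solver the abstract describes, splits a signal f into a piecewise-smooth "intrinsic" part u and a residual nuisance part f - u by minimising ||u - f||^2 + lam * TV(u):

```python
def tv(u):
    """Total variation of a 1-D signal: sum of absolute differences."""
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

def tv_denoise(f, lam=0.5, step=0.1, iters=500, eps=1e-8):
    """Minimise ||u - f||^2 + lam*TV(u) by smoothed subgradient descent."""
    u = list(f)
    for _ in range(iters):
        g = [2.0 * (u[i] - f[i]) for i in range(len(u))]  # data-fidelity gradient
        for i in range(len(u) - 1):
            d = u[i + 1] - u[i]
            s = d / (abs(d) + eps)  # smoothed sign of the jump
            g[i] -= lam * s
            g[i + 1] += lam * s
        u = [u[i] - step * g[i] for i in range(len(u))]
    return u

# A noisy step edge: the "intrinsic" signal has lower total variation,
# and the residual u - f carries the nuisance component.
noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]
smooth = tv_denoise(noisy)
residual = [a - b for a, b in zip(noisy, smooth)]
```

The real method operates on 2-D CT images; this scalar toy only conveys the idea of separating intrinsic structure from nuisance before training.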

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 190
246 User Experience Evaluation on the Usage of Commuter Line Train Ticket Vending Machine

Authors: Faishal Muhammad, Erlinda Muslim, Nadia Faradilla, Sayidul Fikri

Abstract:

To deal with the increasing demand for mass transportation, PT. Kereta Commuter Jabodetabek (KCJ) implements the Commuter Vending Machine (C-VIM) as a solution. The C-VIM is implemented as a substitute for conventional ticket windows, with the purposes of making the transaction process more efficient and introducing self-service technology to commuter line users. However, this implementation has caused problems and long queues when users are not accustomed to the machine. The objective of this research is to evaluate the user experience of the commuter vending machine, with the goal of analysing existing user experience problems and achieving a better user experience design. The evaluation is done by giving task scenarios according to the features offered by the machine: daily insured ticket sales, ticket refund, and multi-trip card top-up. Twenty people, divided into two groups of respondents (experienced and inexperienced users) consisting of five males and five females each, were involved in this research to test whether there is a significant difference between the two groups. User experience is measured both quantitatively and qualitatively. The quantitative measurement includes user performance metrics such as task success, time on task, error, efficiency, and learnability. The qualitative measurement includes the System Usability Scale questionnaire (SUS), the Questionnaire for User Interface Satisfaction (QUIS), and retrospective think-aloud (RTA). Usability performance metrics show that 4 out of 5 indicators differ significantly between the two groups, indicating that the inexperienced group has problems when using the C-VIM. The conventional ticket windows also show better usability performance metrics than the C-VIM. 
From the data processing, the experienced group gives a SUS score of 62, with an acceptability scale of 'marginal low', a grade scale of 'D', and an adjective rating of 'good', while the inexperienced group gives a SUS score of 51, with an acceptability scale of 'marginal low', a grade scale of 'F', and an adjective rating of 'ok'. Both groups thus rate the system's usability low. The QUIS score of the experienced group is 69.18 and that of the inexperienced group is 64.20; an average QUIS score below 70 indicates a problem with the user interface. RTA was done to obtain user experience issues when using the C-VIM through interview protocols. The issues obtained were then sorted using the Pareto concept and diagram. The solution proposed in this research is an interface redesign using an activity relationship chart. This method resulted in a better interface with an average SUS score of 72.25, an acceptability scale of 'acceptable', a grade scale of 'B', and an adjective rating of 'excellent'. The time-on-task indicator of the performance metrics also shows a significantly better time with the new interface design. The results of this study show that the C-VIM does not yet have good performance and user experience.
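For reference, SUS scores such as the 62, 51 and 72.25 reported above come from Brooke's standard scoring of the 10-item questionnaire: odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch with invented responses, not the study's data:

```python
def sus_score(responses):
    """SUS score from ten 1-5 Likert answers, item 1 first (Brooke's scoring)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten answers on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```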

Keywords: activity relationship chart, commuter line vending machine, system usability scale, usability performance metrics, user experience evaluation

Procedia PDF Downloads 262
245 Correlation between the Levels of Some Inflammatory Cytokines/Haematological Parameters and Khorana Scores of Newly Diagnosed Ambulatory Cancer Patients

Authors: Angela O. Ugwu, Sunday Ocheni

Abstract:

Background: Cancer-associated thrombosis (CAT) is a cause of morbidity and mortality among cancer patients. Several risk factors for developing venous thromboembolism (VTE), such as chemotherapy and immobilization, coexist in cancer patients, contributing to their higher risk of VTE compared to non-cancer patients. This study aimed to determine whether there is any correlation between the levels of some inflammatory cytokines/haematological parameters and the Khorana scores of newly diagnosed chemotherapy-naïve ambulatory cancer patients (CNACP). Methods: This was a cross-sectional analytical study carried out from June 2021 to May 2022. Eligible newly diagnosed cancer patients aged 18 years and above (case group) were enrolled consecutively from the adult oncology clinics of the University of Nigeria Teaching Hospital, Ituku/Ozalla (UNTH). The control group comprised blood donors at the UNTH Ituku/Ozalla, Enugu blood bank and healthy members of the Medical and Dental Consultants Association of Nigeria (MDCAN), UNTH Chapter. Blood samples collected from the participants were assayed for IL-6, TNF-α, and haematological parameters such as haemoglobin, white blood cell count (WBC), and platelet count. Data were entered into an Excel worksheet and analyzed using Statistical Package for Social Sciences (SPSS) software version 21.0 for Windows. A P value of < 0.05 was considered statistically significant. Results: A total of 200 participants (100 cases and 100 controls) were included in the study. The overall mean age of the participants was 47.42 ± 15.1 years (range 20-76). The sociodemographic characteristics of the two groups, including age, sex, educational level, body mass index (BMI), and occupation, were similar (P > 0.05). On one-way ANOVA, there were significant differences between the mean levels of interleukin-6 (IL-6) (p = 0.036) and tumour necrosis factor-α (TNF-α) (p = 0.001) across the three Khorana score groups of the case group. 
Pearson's correlation analysis showed a significant positive correlation between the Khorana scores and IL-6 (r = 0.28, p = 0.031), TNF-α (r = 0.254, p = 0.011), and the platelet-lymphocyte ratio (PLR) (r = 0.240, p = 0.016). The mean serum level of IL-6 was significantly higher in CNACP than in the healthy controls [8.98 (8-12) pg/ml vs. 8.43 (2-10) pg/ml, P = 0.0005]. There were also significant differences in the mean haemoglobin (Hb) level (P < 0.001), white blood cell (WBC) count (P < 0.001), and platelet (PL) count (P = 0.005) between the two groups of participants. Conclusion: There is a significant positive correlation between the serum levels of IL-6, TNF-α, and PLR and the Khorana scores of CNACP. The mean serum levels of IL-6, TNF-α, PLR, WBC, and PL count were significantly higher in CNACP than in the healthy controls. Ambulatory cancer patients with high-risk Khorana scores may benefit from anti-inflammatory drugs because of the positive correlation with inflammatory cytokines. Recommendations: Ambulatory cancer patients with Khorana scores of 2 or more may benefit from thromboprophylaxis since they have higher Khorana scores. A multicentre study with a heterogeneous population and a larger sample size is recommended to further elucidate the relationship between IL-6, TNF-α, PLR, and the Khorana scores among cancer patients in the Nigerian population.
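The correlations above (e.g. r = 0.28 between Khorana score and IL-6) use Pearson's r. A minimal pure-Python version, shown on invented toy values rather than the study's measurements:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

scores = [0, 1, 1, 2, 2, 3]                  # hypothetical Khorana scores
il6 = [7.9, 8.2, 8.6, 8.5, 9.1, 9.4]         # hypothetical IL-6 levels, pg/ml
r = pearson_r(scores, il6)                   # positive: IL-6 rises with the score
```

In practice one would use a statistics package (as the study did with SPSS), which also reports the p-value for each r.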

Keywords: thromboprophylaxis, cancer, Khorana scores, inflammatory cytokines, haematological parameters

Procedia PDF Downloads 82
244 Nanoparticle Supported, Magnetically Separable Metalloporphyrin as an Efficient Retrievable Heterogeneous Nanocatalyst in Oxidation Reactions

Authors: Anahita Mortazavi Manesh, Mojtaba Bagherzadeh

Abstract:

Metalloporphyrins are well known to mimic the activity of monooxygenase enzymes. In this regard, metalloporphyrin complexes have been widely employed as valuable biomimetic catalysts, owing to the critical roles they play in oxygen transfer processes in catalytic oxidation reactions. Research in this area is based on different strategies to design selective, stable and high-turnover catalytic systems. Immobilization of expensive metalloporphyrin catalysts onto supports is a good way to improve their stability, selectivity and catalytic performance, because of the support environment and the advantages of recovery and reuse. In other words, supporting metalloporphyrins provides a physical separation of active sites, thus minimizing catalyst self-destruction and dimerization of unhindered metalloporphyrins. Furthermore, heterogeneous catalytic oxidation has become an important target since such processes are used in industry, helping to minimize the problems of industrial waste treatment. Hence, the immobilization of these biomimetic catalysts is much desired. An attractive approach to preparing such heterogeneous catalysts involves immobilization of the complexes on silica-coated magnetic nanoparticles. Fe3O4@SiO2 magnetic nanoparticles have been studied extensively due to their superparamagnetism, large surface-area-to-volume ratio and easy functionalization. Using heterogenized homogeneous catalysts is an attractive option offering facile separation of the catalyst, simplified product work-up and continuity of the catalytic system. Homogeneous catalysts immobilized on the surface of magnetic nanoparticles (MNPs) occupy a unique position, combining the advantages of both homogeneous and heterogeneous catalysts. In addition, the superparamagnetic nature of MNPs enables very simple separation of the immobilized catalysts from the reaction mixture using an external magnet. 
In the present work, an efficient heterogeneous catalyst was prepared by immobilizing a manganese porphyrin on functionalized magnetic nanoparticles through an aminopropyl linkage. The prepared catalyst was characterized by elemental analysis, FT-IR spectroscopy, X-ray powder diffraction, atomic absorption spectroscopy, UV-Vis spectroscopy, and scanning electron microscopy. The application of the immobilized metalloporphyrin to the oxidation of various organic substrates was explored using gas chromatographic (GC) analyses. The results showed that the supported Mn-porphyrin catalyst (Fe3O4@SiO2-NH2@MnPor) is an efficient and reusable catalyst in oxidation reactions. Our catalytic system exhibits high catalytic activity in terms of turnover number (TON) under the reaction conditions used. Leaching and recycling experiments revealed that the nanocatalyst can be recovered several times without loss of activity or magnetic properties. The most important advantage of this heterogenized catalytic system is the simplicity of catalyst separation: the catalyst can be separated from the reaction mixture by applying an external magnet. Furthermore, the separation and reuse of the magnetic Fe3O4 nanoparticles were very effective and economical.

Keywords: Fe3O4 nanoparticle, immobilized metalloporphyrin, magnetically separable nanocatalyst, oxidation reactions

Procedia PDF Downloads 300
243 Is Liking for Sampled Energy-Dense Foods Mediated by Taste Phenotypes?

Authors: Gary J. Pickering, Sarah Lucas, Catherine E. Klodnicki, Nicole J. Gaudette

Abstract:

Two taste phenotypes of interest in the study of habitual diet-related risk factors and disease are 6-n-propylthiouracil (PROP) responsiveness and thermal tasting. Individuals differ considerably in how intensely they experience the bitterness of PROP, which is partially explained by three major single nucleotide polymorphisms in the TAS2R38 gene. Importantly, this variable responsiveness is a useful proxy for general taste responsiveness and has been linked to diet-related disease risk, including body mass index, in some studies. Thermal tasting, a more recently discovered taste phenotype independent of PROP responsiveness, refers to the capacity of many individuals to perceive phantom tastes in response to lingual thermal stimulation, and is linked with TRPM5 channels. Thermal tasters (TTs) also experience oral sensations more intensely than thermal non-tasters (TnTs), and this was shown to associate with differences in self-reported food preferences in a previous survey from our lab. Here we report on two related studies in which we sought to determine whether PROP responsiveness and thermal tasting associate with perceptual differences in the oral sensations elicited by sampled energy-dense foods, and whether these in turn influence liking. We hypothesized that hyper-tasters (thermal tasters and individuals who experience PROP intensely) would (a) rate sweet and high-fat foods more intensely than hypo-tasters, and (b) differ from hypo-tasters in liking scores. (Liking has recently been proposed as a more accurate measure of actual food consumption.) In Study 1, a range of energy-dense foods and beverages, including table cream and chocolate, was assessed by 25 TTs and 19 TnTs. Ratings of oral sensation intensity and overall liking were obtained using gVAS and gDOL scales, respectively. TTs and TnTs did not differ significantly in intensity ratings for most stimuli (ANOVA). 
In a second study, 44 female participants sampled 22 foods and beverages, assessing them for intensity of oral sensations (gVAS) and overall liking (9-point hedonic scale). TTs (n=23) rated their overall liking of creaminess and milk products lower than did TnTs (n=21), and liked milk chocolate less. PROP responsiveness was negatively correlated with liking of foods and beverages belonging to the sweet sensory grouping. No other differences in intensity or liking scores between hyper- and hypo-tasters were found. Taken overall, our results are somewhat unexpected, lending only modest support to the hypothesis that these taste phenotypes associate with energy-dense food liking and consumption through differences in the oral sensations they elicit. Reasons for this lack of concordance with expectations and some prior literature are discussed, and suggestions for future research are advanced.

Keywords: taste phenotypes, sensory evaluation, PROP, thermal tasting, diet-related health risk

Procedia PDF Downloads 459
242 Classification Using Worldview-2 Imagery of Giant Panda Habitat in Wolong, Sichuan Province, China

Authors: Yunwei Tang, Linhai Jing, Hui Li, Qingjie Liu, Xiuxia Li, Qi Yan, Haifeng Ding

Abstract:

The giant panda (Ailuropoda melanoleuca) is an endangered species that lives mainly in central China, where bamboos are the main food source of wild giant pandas. Knowledge of the spatial distribution of bamboos is therefore important for identifying giant panda habitat. There have been ongoing studies mapping bamboos and other tree species using remote sensing. WorldView-2 (WV-2) is the first high-resolution commercial satellite with eight multispectral (MS) bands, and recent studies have demonstrated that WV-2 imagery has high potential for tree species classification. Advanced classification techniques are important for utilising high spatial resolution imagery, and it is generally agreed that object-based image analysis is more desirable than pixel-based analysis when processing high spatial resolution remotely sensed data. Classifiers that use spatial information combined with spectral information are known as contextual classifiers, and it has been suggested that they can achieve greater accuracy than non-contextual classifiers; spatial correlation can thus be incorporated into classifiers to improve classification results. The study area is the Wuyipeng area in Wolong, Sichuan Province. The complex environment makes information extraction difficult, since bamboos are sparsely distributed, mixed with brush, and covered by other trees. Extensive fieldwork was carried out in Wuyipeng twice: the first visit, on 11th June 2014, aimed at sampling feature locations for geometric correction and collecting training samples for classification; the second, on 11th September 2014, served to test the classification results. In this study, spectral separability analysis was first performed to select appropriate MS bands for classification. The reflectance analysis also provided information for expanding sample points under the circumstance of knowing only a few. 
Then, a spatially weighted object-based k-nearest neighbour (k-NN) classifier was applied to the selected MS bands to identify seven land cover types (bamboo, conifer, broadleaf, mixed forest, brush, bare land, and shadow), accounting for spatial correlation within classes using geostatistical modelling. The spatially weighted k-NN method was compared with three alternatives: the traditional k-NN classifier, the Support Vector Machine (SVM), and the Classification and Regression Tree (CART). Field validation showed that the classification obtained using the spatially weighted k-NN method has the highest overall accuracy (77.61%) and Kappa coefficient (0.729); the producer's and user's accuracies reach 81.25% and 95.12% for the bamboo class, respectively, also higher than the other methods. Photos of tree crowns were taken at sample locations using a fisheye camera so that canopy density could be estimated. It was found that it is difficult to identify bamboo in areas with a large canopy density (over 0.7); it is possible to extract bamboo in areas with a medium canopy density (0.2 to 0.7) and in sparse forest (canopy density less than 0.2). In summary, this study explores the ability of WV-2 imagery to extract bamboo in a mountainous region of Sichuan. The study successfully identified the bamboo distribution, providing supporting knowledge for assessing giant panda habitat.
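A toy version of the k-nearest-neighbour classifier underlying the spatially weighted k-NN above: here the weighting is simple inverse distance in feature space, whereas the study additionally derives weights from a geostatistical model of within-class spatial correlation, which this sketch omits. Features and labels are invented:

```python
from collections import defaultdict
from math import dist

def weighted_knn(train, query, k=3):
    """train: list of (feature_vector, label); returns the predicted label."""
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    votes = defaultdict(float)
    for feat, label in nearest:
        votes[label] += 1.0 / (dist(feat, query) + 1e-9)  # closer -> heavier vote
    return max(votes, key=votes.get)

# Two invented spectral-band features, two of the seven cover classes
train = [((0.1, 0.8), "bamboo"), ((0.2, 0.7), "bamboo"),
         ((0.8, 0.2), "conifer"), ((0.9, 0.1), "conifer")]
print(weighted_knn(train, (0.15, 0.75)))  # -> bamboo
```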

Keywords: bamboo mapping, classification, geostatistics, k-NN, worldview-2

Procedia PDF Downloads 313
241 Using Business Simulations and Game-Based Learning for Enterprise Resource Planning Implementation Training

Authors: Carin Chuang, Kuan-Chou Chen

Abstract:

An Enterprise Resource Planning (ERP) system is an integrated information system that supports the seamless integration of all the business processes of a company. Implementing an ERP system can increase efficiency and decrease costs while helping to improve productivity. Many organizations, including large, medium and small-sized companies, have adopted ERP systems over the past decades. Although an ERP system can bring competitive advantages to an organization, the lack of a proper training approach in ERP implementation remains a major concern. Organizations understand the importance of ERP training in adequately preparing managers and users. The low return on investment of ERP training, however, makes it difficult for knowledge workers to transfer what is learned in training to their jobs in the workplace. Inadequate and inefficient ERP training limits the value realization and success of an ERP system, hence the need for profound change and innovation in ERP training, both in industry workplaces and in Information Systems (IS) education in academia. An innovative ERP training approach can improve users' knowledge of business processes and hands-on skills in mastering the ERP system, and can also serve as educational material for IS students at universities. The purpose of this study is to examine the use of ERP simulation games via the ERPsim system to train IS students in ERP implementation. ERPsim is a business simulation game developed by the ERPsim Lab at HEC Montréal, running on a real-life SAP (Systems, Applications and Products) ERP system. The training uses the ERPsim system as the tool for Internet-based simulation games, designed as online student competitions during class. The competitions involve student teams, with the facilitation of the instructor, and put the students' business skills to the test via intensive simulation games on a real-world SAP ERP system. 
The teams run the full business cycle of a manufacturing company while interacting with suppliers, vendors, and customers through sending and receiving orders, delivering products, and completing the entire cash-to-cash cycle. To learn a range of business skills, each student needs to adopt an individual business role and make business decisions around the products and business processes. Based on the training experience gathered from rounds of business simulations, the findings show that learners face reduced risk in making mistakes, which helps them build self-confidence in problem-solving. In addition, reflecting on their mistakes allows learners to identify the root causes of problems and further improves the efficiency of the training. ERP instructors teaching with the innovative approach report significant improvements in student evaluation, learner motivation, attendance, and engagement, as well as increased learner technology competency. The findings of the study can provide ERP instructors with guidelines for creating an effective learning environment and can be transferred to a variety of other educational fields in which trainers are moving towards a more active learning approach.

Keywords: business simulations, ERP implementation training, ERPsim, game-based learning, instructional strategy, training innovation

Procedia PDF Downloads 141
240 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza

Authors: Hongmei Wang, Ziyun Xiang, Ying Liu, Li Yu, Dongsheng Yue

Abstract:

Intro: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political and financial ecology, and the mutation of the coronavirus in the UK in December 2020 brought new panic to the world. Deep learning was applied to chest computed tomography (CT) of COVID-19 and influenza to describe their characteristics. The predominant feature of COVID-19 pneumonia is ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some coexist with solid lesions. Lesion distribution: the focus is mainly on the dorsal side of the periphery of the lung, concentrated in the lower lobes, and often close to the pleura. Other features are grid-like shadows within ground-glass lesions, thickening of diseased vessels, air bronchogram signs and halo signs. Severe disease involves both lungs entirely, showing white-lung signs; air bronchograms can be seen, and there can be a small amount of pleural effusion in the bilateral chest cavity. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT of influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, and bronchiolar air bronchograms visible within the consolidation. There are patchy ground-glass shadows, consolidation, air bronchogram signs, mosaic lung perfusion, etc. The lesions are mostly fused and prominent near the hilum of both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: Targeting the two major infectious diseases currently circulating in the world, COVID-19 and influenza, chest CT scans of patients with the two diseases are classified and diagnosed using deep learning algorithms. 
The residual network (ResNet) was proposed to solve the degradation problem that occurs when a deep neural network (DNN) has too many hidden layers. ResNet is a milestone in the history of convolutional neural network (CNN) image models, solving the difficulty of training deep CNNs; many visual tasks achieve excellent results by fine-tuning ResNet. Here, a pre-trained ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training from scratch. Fastai, built on PyTorch, packages best practices for deep learning and provides effective ways to handle diagnosis tasks. Based on the one-cycle training policy of the Fastai library, the classification of lung CT for the two infectious diseases is realized, and a high recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.
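The skip connection that defines a residual block can be reduced to a one-line identity: each block outputs x + F(x), so an identity mapping (F = 0) is trivial to represent and very deep stacks remain trainable. A scalar toy sketch of this idea, not the actual network used in the study:

```python
def residual_block(x, f):
    """Output of one residual block: skip connection plus learned residual."""
    return x + f(x)

def stack(x, f, depth):
    """Pass x through `depth` residual blocks sharing the same residual f."""
    for _ in range(depth):
        x = residual_block(x, f)
    return x

# With a zero residual the block is an exact identity, however deep the stack.
identity = stack(3.0, lambda v: 0.0, depth=50)  # stays 3.0
# With a small residual the signal is nudged additively, not multiplied away
# layer by layer as in a plain deep network.
out = stack(1.0, lambda v: 0.01, depth=10)      # ~ 1.1
```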

Keywords: COVID-19, Fastai, influenza, transfer network

Procedia PDF Downloads 144
239 The Home as Memory Palace: Three Case Studies of Artistic Representations of the Relationship between Individual and Collective Memory and the Home

Authors: Laura M. F. Bertens

Abstract:

The houses we inhabit are important containers of memory. As homes, they take on meaning for those who live inside, and memories of family life become intimately tied up with rooms, windows, and gardens. Each new family creates a new layer of meaning, resulting in a palimpsest of family memory. These houses function quite literally as memory palaces, as a walk through a childhood home will show; each room conjures up images of past events. Over time, these personal memories become woven together with the cultural memory of countries and generations. The importance of the home is a central theme in art, and several contemporary artists have a special interest in the relationship between memory and the home. This paper analyses three case studies in order to get a deeper understanding of the ways in which the home functions and feels like a memory palace, both on an individual and on a collective, cultural level. Close reading of the artworks is performed at the theoretical intersection of Art History and Cultural Memory Studies. The first case study concerns works from the exhibition Mnemosyne by the artist duo Anne and Patrick Poirier. These works combine interests in architecture, archaeology, and psychology. Models of cities and fantastical architectural designs resemble physical structures (such as the brain), architectural metaphors used in representing the concept of memory (such as the memory palace), and archaeological remains, essential to our shared cultural memories. Secondly, works by Do Ho Suh will help us understand the relationship between the home and memory on a far more personal level; outlines of rooms from his former homes, made of colourful, transparent fabric and combined into new structures, provide an insight into the way these spaces retain individual memories. The spaces have been emptied out, and only the husks remain. Although the remnants of walls, light switches, doors, electricity outlets, etc. 
are standard, mass-produced elements found in many homes and devoid of inherent meaning, together they remind us of the emotional significance attached to the muscle memory of spaces we once inhabited. The third case study concerns an exhibition in a house put up for sale on the Dutch real estate website Funda. The house was built in 1933 by a Jewish family fleeing from Germany, and the father and son were later deported and killed. The artists Anne van As and CA Wertheim have used the history and memories of the house as a starting point for an exhibition called (T)huis, a combination of the Dutch words for home and house. This case study illustrates the way houses become containers of memories; each new family ‘resets’ the meaning of a house, but traces of earlier memories remain. The exhibition allows us to explore the transition of individual memories into shared cultural memory, in this case of WWII. Taken together, the analyses provide a deeper understanding of different facets of the relationship between the home and memory, both individual and collective, and the ways in which art can represent these.

Keywords: Anne and Patrick Poirier, cultural memory, Do Ho Suh, home, memory palace

Procedia PDF Downloads 159
238 Concentrations of Leptin, C-Peptide and Insulin in Cord Blood as Fetal Origins of Insulin Resistance and Their Effect on the Birth Weight of the Newborn

Authors: R. P. Hewawasam, M. H. A. D. de Silva, M. A. G. Iresha

Abstract:

Obesity is associated with an increased risk of developing insulin resistance. Insulin resistance often progresses to type-2 diabetes mellitus and is linked to a wide variety of other pathophysiological features, including hypertension, hyperlipidemia, atherosclerosis (metabolic syndrome) and polycystic ovarian syndrome. Macrosomia is common in infants born not only to women with gestational diabetes mellitus but also to non-diabetic obese women. During the past two decades, obesity in children and adolescents has risen significantly in Asian populations, including Sri Lanka. There is increasing evidence that infants who are born large for gestational age (LGA) are more likely to be obese in childhood. It is also established from previous studies that Asian populations have a higher percentage of body fat at a lower body mass index compared to Caucasians. High leptin levels in cord blood have been reported to correlate with fetal adiposity at birth. Previous studies have also shown that cord blood C-peptide and insulin levels are significantly and positively correlated with birth weight. Therefore, the objective of this preliminary study was to determine the relationship between parameters of fetal insulin resistance, such as leptin, C-peptide and insulin, and the birth weight of the newborn in a study population in Southern Sri Lanka. Umbilical cord blood was collected from 90 newborns, and the concentrations of insulin, leptin and C-peptide were measured by the ELISA technique. Birth weight, length, and occipital-frontal, chest, hip and calf circumferences of the newborns were measured, and characteristics of the mother, such as age, height, weight before pregnancy and weight gain, were collected. The relationships between insulin, leptin, C-peptide and anthropometrics were assessed by Pearson’s correlation, while the Mann-Whitney U test was used to assess the differences in cord blood leptin, C-peptide and insulin levels between groups. 
A significant difference (p < 0.001) was observed between the insulin levels of infants born LGA (18.73 ± 0.64 µIU/ml) and AGA (13.08 ± 0.43 µIU/ml). Consistently, a significant increase (p < 0.001) was observed in the C-peptide levels of infants born LGA (9.32 ± 0.77 ng/ml) compared to AGA (5.44 ± 0.19 ng/ml). The cord blood leptin concentration of LGA infants (12.67 ± 1.62 ng/ml) was significantly higher (p < 0.001) than that of AGA infants (7.10 ± 0.97 ng/ml). Significant positive correlations (p < 0.05) were observed between cord leptin levels and birth weight, pre-pregnancy maternal weight and BMI in both the AGA and LGA groups. Consistently, a significant positive correlation (p < 0.05) was observed between birth weight and C-peptide concentration. The significantly high concentrations of leptin, C-peptide and insulin in the cord blood of LGA infants suggest that they may be involved in regulating fetal growth. Although previous studies suggest comparatively high levels of body fat in Asian populations, the values obtained in this study are not significantly different from values previously reported in Caucasian populations. According to this preliminary study, maternal pre-pregnancy BMI and weight may serve as significant indicators of cord blood parameters of insulin resistance and possibly the birth weight of the newborn.
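The between-group comparison described above can be sketched with a minimal Mann-Whitney U computation. The values below are illustrative placeholders, not the study's measurements; the counting-of-pairs definition of U (with 0.5 for ties) is used.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y:
    the number of pairs (xi, yj) with xi > yj, counting ties as 0.5."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical cord-blood insulin values (µIU/ml), NOT the study data
aga = [12.1, 13.5, 12.8, 14.0, 13.2]
lga = [17.9, 18.5, 19.2, 18.1, 20.0]

u_lga = mann_whitney_u(lga, aga)  # 25.0: every LGA value exceeds every AGA value
```

In practice one would use a library routine (e.g. scipy.stats.mannwhitneyu) that also supplies the p-value; the sketch only shows what the statistic measures.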

Keywords: large for gestational age, leptin, C-peptide, insulin

Procedia PDF Downloads 158
237 Preliminary Study of Water-Oil Separation Process in Three-Phase Separators Using Factorial Experimental Designs and Simulation

Authors: Caroline M. B. De Araujo, Helenise A. Do Nascimento, Claudia J. Da S. Cavalcanti, Mauricio A. Da Motta Sobrinho, Maria F. Pimentel

Abstract:

Oil production is often accompanied by the joint production of water and gas. During the journey up to the surface, due to severe conditions of temperature and pressure, mixing between these three components normally occurs. Thus, three-phase separation must be one of the first steps performed after crude oil extraction, and water-oil separation is the most complex and important step, since the presence of water in the process line can increase corrosion and hydrate formation. A wide range of methods can be applied for oil-water separation, the most commonly used being flotation, hydrocyclones and three-phase separator vessels. Given the above, the aim of this paper is to study a system consisting of a three-phase separator, evaluating the influence of three variables: temperature, working pressure and separator type, for two types of oil (light and heavy), by performing two 2³ factorial designs in order to find the best operating condition. In this case, the purpose is to obtain the greatest oil flow rate in the product stream (m³/h) as well as the lowest percentage of water in the oil stream. The simulation of the three-phase separator was performed using Aspen Hysys® 2006 simulation software in stationary mode, and the evaluation of the factorial experimental designs was performed using the Statistica® software. From the general analysis of the four normal probability plots of effects obtained, it was observed that interaction effects of two and three factors did not show statistical significance at 95% confidence, since all the values were very close to zero. Similarly, the main effect "separator type" did not show a significant statistical influence in any situation. As it was assumed that the volumetric flows of water, oil and gas were equal in the inlet stream, the separator-type effect may, in fact, not be significant for the proposed system. 
Nevertheless, the main effect "temperature" was significant for both responses (oil flow rate and mass fraction of water in the oil stream), considering both light and heavy oil, so that the best operating condition occurs with the temperature at its lowest level (30 °C): the higher the temperature, the more the lighter oil components pass into the vapor phase and leave in the gas stream. Furthermore, the higher the temperature, the greater the formation of water vapor, which ends up in the lighter (oil) stream, making the separation process more difficult. Regarding the "working pressure", this effect was significant only for the oil flow rate, so that the best operating condition occurs with the pressure at its highest level (9 bar), since a higher operating pressure, in this case, indicated a lower pressure drop inside the vessel, generating a lower level of turbulence inside the separator. In conclusion, the best operating condition obtained for the proposed system, over the studied range, occurs when the temperature is at its lowest level and the working pressure is at its highest level.
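The effect estimation behind such a 2³ factorial design can be illustrated with a small sketch. The coded levels and the response model below are made-up assumptions for illustration, not output from the Aspen Hysys simulations.

```python
from itertools import product

# Eight runs of a 2^3 design, factors coded -1/+1:
# (temperature, working pressure, separator type)
runs = list(product([-1, 1], repeat=3))

# Made-up linear response (oil flow rate, m3/h): flow falls with
# temperature, rises with pressure, and ignores separator type.
def response(t, p, s):
    return 100.0 - 5.0 * t + 3.0 * p

y = [response(*r) for r in runs]

def main_effect(k):
    """Main effect of factor k: mean response at +1 minus mean at -1."""
    hi = [yi for r, yi in zip(runs, y) if r[k] == 1]
    lo = [yi for r, yi in zip(runs, y) if r[k] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(k) for k, name in enumerate(["T", "P", "S"])}
# effects == {"T": -10.0, "P": 6.0, "S": 0.0}
```

A significant negative temperature effect and positive pressure effect, with a null separator-type effect, mirrors the qualitative pattern reported above.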

Keywords: factorial experimental design, oil production, simulation, three-phase separator

Procedia PDF Downloads 290
236 Enhancement of Radiosensitization by Aptamer 5TR1-Functionalized AgNCs for Triple-Negative Breast Cancer

Authors: Xuechun Kan, Dongdong Li, Fan Li, Peidang Liu

Abstract:

Triple-negative breast cancer (TNBC) is the most malignant subtype of breast cancer, with a poor prognosis, and radiotherapy is one of the main treatment methods. However, due to the marked resistance of tumor cells to radiotherapy, a high dose of ionizing radiation is required, which causes serious damage to normal tissues near the tumor. Therefore, how to overcome radiotherapy resistance and enhance the specific killing of tumor cells by radiation is a pressing clinical issue. Recent studies have shown that silver-based nanoparticles have a strong radiosensitizing effect, and silver nanoclusters (AgNCs) also offer broad prospects for tumor-targeted radiosensitization therapy due to their ultra-small size, low or absent toxicity, self-fluorescence and strong photostability. Aptamer 5TR1 is a 25-base oligonucleotide aptamer that can specifically bind to mucin-1, which is highly expressed on the membrane surface of TNBC 4T1 cells, and can serve as a highly efficient tumor-targeting molecule. In this study, AgNCs were synthesized on a DNA template based on the 5TR1 aptamer (NC-T5-5TR1), and their role as a targeted radiosensitizer in TNBC radiotherapy was investigated. The optimal DNA template was first screened by fluorescence emission spectroscopy, and NC-T5-5TR1 was prepared. NC-T5-5TR1 was characterized by transmission electron microscopy, ultraviolet-visible spectroscopy and dynamic light scattering. The inhibitory effect of NC-T5-5TR1 on cell activity was evaluated using the MTT method. Laser confocal microscopy was employed to observe NC-T5-5TR1 targeting 4T1 cells and verify its self-fluorescence characteristics. The uptake of NC-T5-5TR1 by 4T1 cells was observed by dark-field imaging, and the uptake peak was determined by inductively coupled plasma mass spectrometry. The radiosensitization effect of NC-T5-5TR1 was evaluated through cell cloning and in vivo anti-tumor experiments. 
Annexin V-FITC/PI double-staining flow cytometry was used to detect the impact of the nanomaterials combined with radiotherapy on apoptosis. The results demonstrated that NC-T5-5TR1 has a particle size of about 2 nm and good dispersion, and UV-visible absorption spectroscopy verified its successful construction. NC-T5-5TR1 significantly inhibited the activity of 4T1 cells and effectively targeted, and fluoresced within, 4T1 cells. The uptake of NC-T5-5TR1 in the tumor area peaked at 3 h. Compared with AgNCs without aptamer modification, NC-T5-5TR1 exhibited superior radiosensitization, and combined radiotherapy significantly inhibited the activity of 4T1 cells and tumor growth in 4T1-bearing mice. The apoptosis level with NC-T5-5TR1 combined with radiation was significantly increased. These findings provide important theoretical and experimental support for NC-T5-5TR1 as a radiosensitizer for TNBC.

Keywords: 5TR1 aptamer, silver nanoclusters, radiosensitization, triple-negative breast cancer

Procedia PDF Downloads 64
235 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved post-sticking. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for the field, in real time. The correlation uses a machine-learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real time. 
Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the set of pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting stuck pipe problems requires a method to capture geological, geophysical and drilling data and recognize the indicators of this issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
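The real-time correlation step can be sketched as a sliding-window comparison of the live feed against a pre-determined signature. The signature, live values and 0.95 threshold below are all invented for illustration; the paper's actual CFS algorithm and WITSML feed are not reproduced here.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Made-up stuck-pipe signature and live surface-data feed
signature = [1.0, 2.0, 4.0, 7.0, 11.0]
live = [3.0, 2.9, 3.1, 3.0, 2.8, 1.0, 2.1, 4.0, 7.2, 11.1]

THRESHOLD = 0.95  # user-defined alert threshold (assumption)
alerts = [end
          for end in range(len(signature), len(live) + 1)
          if pearson(live[end - len(signature):end], signature) >= THRESHOLD]
# alerts == [10]: only the final window matches the signature shape
```

Each window's correlation plays the role of the probability curve; crossing the threshold would trigger the cause-analysis component.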

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 232
234 Advanced Statistical Approaches for Identifying Predictors of Poor Blood Pressure Control: A Comprehensive Analysis Using Multivariable Logistic Regression and Generalized Estimating Equations (GEE)

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Effective management of hypertension remains a critical public health challenge, particularly among racially and ethnically diverse populations. This study employs sophisticated statistical models to rigorously investigate the predictors of poor blood pressure (BP) control, with a specific focus on demographic, socioeconomic, and clinical risk factors. Leveraging a large sample of 19,253 adults drawn from the National Health and Nutrition Examination Survey (NHANES) across three distinct time periods (2013-2014, 2015-2016, and 2017-2020), we applied multivariable logistic regression and generalized estimating equations (GEE) to account for the clustered structure of the data and potential within-subject correlations. Our multivariable models identified significant associations between poor BP control and several key predictors, including race/ethnicity, age, gender, body mass index (BMI), prevalent diabetes, and chronic kidney disease (CKD). Non-Hispanic Black individuals consistently exhibited higher odds of poor BP control across all periods (OR = 1.99; 95% CI: 1.69, 2.36 for the overall sample; OR = 2.33; 95% CI: 1.79, 3.02 for 2017-2020). Younger age groups demonstrated substantially lower odds of poor BP control compared to individuals aged 75 and older (OR = 0.15; 95% CI: 0.11, 0.20 for ages 18-44). Men also had a higher likelihood of poor BP control relative to women (OR = 1.55; 95% CI: 1.31, 1.82), while BMI ≥35 kg/m² (OR = 1.76; 95% CI: 1.40, 2.20) and the presence of diabetes (OR = 2.20; 95% CI: 1.80, 2.68) were associated with increased odds of poor BP management. Further analysis using GEE models, accounting for temporal correlations and repeated measures, confirmed the robustness of these findings. Notably, individuals with chronic kidney disease displayed markedly elevated odds of poor BP control (OR = 3.72; 95% CI: 3.09, 4.48), with significant differences across the survey periods. 
Additionally, higher education levels and better self-reported diet quality were associated with improved BP control. College graduates exhibited a reduced likelihood of poor BP control (OR = 0.64; 95% CI: 0.46, 0.89), particularly in the 2015-2016 period (OR = 0.48; 95% CI: 0.28, 0.84). Similarly, excellent dietary habits were associated with significantly lower odds of poor BP control (OR = 0.64; 95% CI: 0.44, 0.94), underscoring the importance of lifestyle factors in hypertension management. In conclusion, our findings provide compelling evidence of the complex interplay between demographic, clinical, and socioeconomic factors in predicting poor BP control. The application of advanced statistical techniques such as GEE enhances the reliability of these results by addressing the correlated nature of repeated observations. This study highlights the need for targeted interventions that consider racial/ethnic disparities, clinical comorbidities, and lifestyle modifications in improving BP control outcomes.
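The odds ratios and 95% confidence intervals quoted above come from the fitted models; the arithmetic behind a single unadjusted odds ratio can be sketched from a 2×2 table. The counts below are invented purely to illustrate the computation; they are not NHANES data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a/b = poor/controlled BP in the exposed group,
    c/d = poor/controlled BP in the reference group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen only to illustrate the computation
or_, lo, hi = odds_ratio_ci(199, 301, 100, 300)
# or_ ≈ 1.98, CI ≈ (1.49, 2.65)
```

Adjusted estimates from multivariable logistic regression or GEE differ from this crude ratio, but the interpretation of OR and its Wald interval is the same.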

Keywords: hypertension, blood pressure, NHANES, generalized estimating equations

Procedia PDF Downloads 16
233 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)

Authors: Ahmad Kayvani Fard, Yehia Manawi

Abstract:

Qatar’s primary source of fresh water is seawater desalination. Amongst the major processes commercially available on the market, the most common large-scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive owing to high energy input requirements and high operating costs, allied with maintenance and the stress induced on the systems in harsh alkaline media. Besides cost, the environmental footprint of these desalination techniques is significant, from damage to marine ecosystems, to extensive land use, to the discharge of tons of greenhouse gases and a large carbon footprint. A less energy-consuming technique based on membrane separation, sought to reduce both the carbon footprint and operating costs, is membrane distillation (MD). Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted increasing attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of the brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED), and especially RO, are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate because only water vapor is transferred, the utilization of low-grade or waste heat from the oil and gas industries to heat the feed up to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating costs. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested. 
The objective of this study is to analyze the characteristics and morphology of the membrane suitable for DCMD through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and laboratory data are used to compare the DCMD distillate quality with that of other desalination techniques and standards. SEM analysis showed that the PTFE membrane used in this study has a contact angle of 127° and a highly porous surface, supported by a less porous, larger-pore-size PP membrane. ICP and IC analyses of the effect of feed salinity and temperature on distillate quality showed that, at any salinity and feed temperature (up to 70 °C), the electrical conductivity of the distillate is less than 5 μS/cm, with 99.99% salt rejection. DCMD thus proved to be a feasible and effective process, capable of consistently producing high-quality distillate from a very high-salinity feed (i.e., 100,000 mg/L TDS), with a substantial quality advantage over other desalination methods such as RO and MSF.
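The 99.99% salt rejection figure follows from the usual rejection definition; a tiny sketch is shown below. The 10 mg/L permeate TDS is an assumed value consistent with the reported rejection, not a measured one.

```python
def salt_rejection(feed_tds, permeate_tds):
    """Percent salt rejection: share of feed salinity absent from permeate."""
    return (1.0 - permeate_tds / feed_tds) * 100.0

feed = 100_000.0   # mg/L TDS, the high-salinity feed cited above
permeate = 10.0    # mg/L TDS (assumed for illustration)
rejection = salt_rejection(feed, permeate)  # 99.99
```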

Keywords: membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation

Procedia PDF Downloads 227
232 The Antioxidant Activity of Grape Chkhaveri and Its Wine Cultivated in West Georgia (Adjaria)

Authors: Maia Kharadze, Indira Djaparidze, Maia Vanidze, Aleko Kalandia

Abstract:

The modern scientific community studies the chemical components and antioxidant activity of different vine varieties according to their breed purity and location. To our knowledge, this kind of research has not been conducted in Georgia yet. The object of our research was the Chkhaveri vine, which is among the oldest varieties of the Black Sea basin. We studied Chkhaveri grapes, juice and wine (half-dry, rosé-colored, produced with European technologies) from different altitudes, their technical markers, the qualitative and quantitative composition of their biologically active compounds, and their antioxidant activity. The amount of phenols was determined using the Folin-Ciocalteu reagent; flavonoids, catechins and anthocyanins using spectral methods; and antioxidant activity using the DPPH method. Several compounds were identified using HPLC-UV-Vis and UPLC-MS methods. Six samples of Chkhaveri, from altitudes of 5, 300, 360, 380, 400 and 780 meters, were taken and analyzed. The sample taken from the 360 m altitude is distinguished by its cluster mass (383.6 grams) and high amount of sugar (20.1%). The sample taken from the five-meter altitude is distinguished by high acidity (0.95%). Unlike other grape varieties, such a concentration of sugar and a relatively low level of citric acid ultimately give Chkhaveri wine its individuality. The biologically active compounds of Chkhaveri were studied in 2014, 2015 and 2016. The amount of total phenols in samples of the 2016 fruit varies from 976.7 to 1767.0 mg/kg, the amount of anthocyanins is 721.2-1630.2 mg/kg, and the amount of flavonoids varies from 300.6 to 825.5 mg/kg. A relatively high amount of anthocyanins was found in the Chkhaveri at the 780-meter altitude: 1630.2 mg/kg. Accordingly, the amounts of phenols and flavonoids are high: 1767.9 mg/kg and 825.5 mg/kg. These values are low in samples gathered from 5 meters above sea level: anthocyanins 721.2 mg/kg, total phenols 976.7 mg/kg, and flavonoids 300.6 mg/kg. 
The highest amounts of bioactive compounds are found in the Chkhaveri samples from high altitudes, because as the altitude rises the environment becomes harsher and the plant has to develop a stronger immune system using phenolic compounds. The technology used for wine production also plays a large role in the composition of the final product. Optimal techniques of maceration and ageing were worked out. When Chkhaveri is pressed, there are no anthocyanins in the juice; however, the amount of anthocyanins rises during maceration. After fermentation on the dregs, the amount of anthocyanins is 55% (521.3 mg/L), total phenols 80% (1057.7 mg/L) and flavonoids 23.5 mg/L. The antioxidant activity of the samples was also determined, using the samples' 50% inhibition effect. All samples have high antioxidant activity. For instance, in samples from 780 meters above sea level, the antioxidant activity was 53.5%, relatively high compared to the sample from 5 m above sea level, with an antioxidant activity of 30.5%. Thus, there is a correlation between the amount of anthocyanins and the antioxidant activity. This project was fulfilled with the financial support of the Georgia National Science Foundation (Grant AP/96/13, Grant 216816); all ideas in this publication belong to the authors and may not represent the opinion of the Georgia National Science Foundation.
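DPPH antioxidant-activity percentages of the kind quoted above are conventionally computed from absorbance readings. In the sketch below, the absorbance values are invented so that the formula reproduces the reported 53.5% and 30.5%; they are not the study's readings.

```python
def dpph_inhibition(a_control, a_sample):
    """Percent inhibition of the DPPH radical from absorbances at 517 nm."""
    return (a_control - a_sample) / a_control * 100.0

A_CONTROL = 0.800   # made-up control absorbance
inhibition_780m = dpph_inhibition(A_CONTROL, 0.372)  # 53.5 %
inhibition_5m = dpph_inhibition(A_CONTROL, 0.556)    # 30.5 %
```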

Keywords: antioxidants, bioactive content, wine, Chkhaveri

Procedia PDF Downloads 230
231 Financing the Welfare State in the United States: The Recent American Economic and Ideological Challenges

Authors: Rafat Fazeli, Reza Fazeli

Abstract:

This paper focuses on the study of the welfare state and social wage in the leading liberal economy of the United States. The welfare state acquired broad acceptance as a major socioeconomic achievement of liberal democracy in the Western industrialized countries during the postwar boom period. The modern and modified vision of capitalist democracy offered, on the one hand, the possibility of a high growth rate and, on the other hand, the possibility of the continued progression of a comprehensive system of social support for a wider population. The economic crises of the 1970s provided the ground for a great shift in economic policy and ideology in several Western countries, most notably the United States and the United Kingdom (and, to a lesser extent, Canada under Prime Minister Brian Mulroney). In the 1980s, the free-market-oriented reforms undertaken under Reagan and Thatcher greatly affected the economic outlook not only of the United States and the United Kingdom but of the whole Western world. The movement behind this shift in policy is often called neo-conservatism. The neoconservatives blamed the transfer programs for the decline in economic performance during the 1970s and argued that cuts in spending were required to go back to the golden age of full employment. The agenda of both the Reagan and Thatcher administrations was rolling back the welfare state, and their budgets included a wide range of cuts to social programs. The question is how successful Reagan's and Thatcher's efforts to achieve retrenchment were. The paper involves an empirical study concerning the distributive role of the welfare state in the two countries. Other studies have often concentrated on the redistributive effect of fiscal policy on different income brackets. This study examines the net benefit/burden position of the working population with respect to state expenditures and taxes in the postwar period. 
This measurement will enable us to find out whether the working population has received a net gain (or net social wage). This study will discuss how the expansion of social expenditures and the trend of the ‘net social wage’ can be linked to distinct forms of economic and social organization. This study provides an empirical foundation for analyzing the growing significance of the ‘social wage’, or the collectivization of consumption, and the share of social or collective consumption in the total consumption of the working population in recent decades. The paper addresses three other major questions. The first question is whether the expansion of social expenditures has posed any drag on capital accumulation and economic growth. The findings of this study provide an analytical foundation for evaluating the neoconservative claim that the welfare state is itself the source of the economic stagnation that leads to the crisis of the welfare state. The second question is whether the increasing ideological challenges from the right and the competitive pressures of globalization have led to retrenchment of the American welfare state in recent decades. The third question is how social policies have performed in the presence of rising inequalities in recent decades.
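The net benefit/burden measure described above reduces to a simple difference between what the working population receives from the state and what it pays in taxes. A minimal sketch with invented magnitudes (not the study's data):

```python
def net_social_wage(benefits_received, taxes_paid):
    """Net social wage of the working population: state benefits and
    services received minus taxes paid, in the same monetary units."""
    return benefits_received - taxes_paid

# Illustrative yearly flows in billions of dollars (assumptions)
nsw = net_social_wage(benefits_received=950.0, taxes_paid=900.0)
# nsw == 50.0 > 0: the working population receives a net gain
```

A negative value would instead indicate a net burden on the working population for that year.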

Keywords: the welfare state, social wage, the United States, limits to growth

Procedia PDF Downloads 211
230 Comparative Study of Outcome of Patients with Wilms Tumor Treated with Upfront Chemotherapy and Upfront Surgery in Alexandria University Hospitals

Authors: Golson Mohamed, Yasmine Gamasy, Khaled EL-Khatib, Anas Al-Natour, Shady Fadel, Haytham Rashwan, Haytham Badawy, Nadia Farghaly

Abstract:

Introduction: Wilms tumor is the most common malignant renal tumor in children. Much progress has been made in the management of patients with this malignancy over the last three decades. Today, treatments are based on several trials and studies conducted by the International Society of Pediatric Oncology (SIOP) in Europe and the National Wilms Tumor Study Group (NWTS) in the USA. It is necessary to understand why we follow either of the protocols: NWTS, which follows the upfront-surgery principle, or SIOP, which follows the upfront-chemotherapy principle, in all stages of the disease. Objective: The aim is to assess the outcome in patients treated with preoperative chemotherapy and patients treated with upfront surgery, comparing their effect on overall survival. Study design: To decide which protocol to follow, a study was carried out on the records of patients aged 1 day to 18 years suffering from Wilms tumor who were admitted to the pediatric oncology, pediatric urology and pediatric surgery departments of Alexandria University Hospital, in a retrospective survey of records from 2010 to 2015. The data-transfer sheet was designed and edited following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow. Data were fed to the computer and analyzed using the IBM SPSS software package, version 20.0. Qualitative data were described using number and percent. Quantitative data were described using range (minimum and maximum), mean, standard deviation and median. Comparisons between groups regarding categorical variables were tested using the Chi-square test. When more than 20% of the cells had an expected count of less than 5, correction for the Chi-square test was conducted using Fisher’s exact test or the Monte Carlo correction. The distributions of quantitative variables were tested for normality using the Kolmogorov-Smirnov, Shapiro-Wilk and D'Agostino tests; where these revealed a normal distribution, parametric tests were applied. 
If the data were abnormally distributed, non-parametric tests were used. For normally distributed data, comparison between two independent populations was done using the independent t-test; for abnormally distributed data, the Mann-Whitney test was used. The significance of the obtained results was judged at the 5% level. Results: A statistically significant difference in survival was observed between the two studied groups, favoring upfront chemotherapy (86.4%) over upfront surgery (59.3%), where P = 0.009. As regards complications, 20 cases (74.1%) out of 27 were complicated in the group treated with upfront surgery, while 30 cases (68.2%) out of 44 had complications among patients treated with upfront chemotherapy. Also, the incidence of intraoperative complication (rupture) was lower in the upfront-chemotherapy group than in the upfront-surgery group. Conclusion: Upfront chemotherapy showed superiority over upfront surgery: patients who started with upfront chemotherapy showed a higher survival rate, a lower complication rate, less need for radiotherapy, and a lower recurrence rate.
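The reported survival difference can be checked against the group percentages: 86.4% of 44 is about 38 survivors and 59.3% of 27 about 16, and an uncorrected Pearson chi-square on that reconstructed 2×2 table gives a p-value consistent with the reported P = 0.009. The reconstruction is our assumption, not the paper's raw table.

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 table, with the p-value from the chi-square(df=1) survival
    function, which equals erfc(sqrt(x/2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    stat = sum((obs - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i, row in enumerate(table)
               for j, obs in enumerate(row))
    return stat, math.erfc(math.sqrt(stat / 2.0))

# [survived, died]: upfront chemotherapy (n=44) vs upfront surgery (n=27)
stat, p = chi_square_2x2([[38, 6], [16, 11]])
# stat ≈ 6.75, p ≈ 0.009
```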

Keywords: Wilms tumor, renal tumor, chemotherapy, surgery

Procedia PDF Downloads 318
229 Variability and Stability of Bread and Durum Wheat for Phytic Acid Content

Authors: Gordana Branković, Vesna Dragičević, Dejan Dodig, Desimir Knežević, Srbislav Denčić, Gordana Šurlan-Momirović

Abstract:

Phytic acid is a major pool in the flux of phosphorus through agroecosystems and represents a sum equivalent to > 50% of all phosphorus fertilizer used annually. Nutrition rich in phytic acid can substantially decrease the absorption of micronutrients such as calcium, zinc, iron, manganese, and copper, because phytate salts are excreted by humans and by non-ruminant animals such as poultry, swine, and fish, which have very scarce phytase activity and consequently little ability to digest and utilize phytic acid; phytic acid-derived phosphorus in animal waste thus contributes to water pollution. The tested accessions consisted of 15 genotypes of bread wheat (Triticum aestivum L. ssp. vulgare) and 15 genotypes of durum wheat (Triticum durum Desf.). The trials were sown at three test sites in Serbia: Rimski Šančevi (RS) (45º19´51´´N; 19º50´59´´E), Zemun Polje (ZP) (44º52´N; 20º19´E) and Padinska Skela (PS) (44º57´N; 20º26´E) during two vegetation seasons, 2010-2011 and 2011-2012. The experimental design was a randomized complete block design with four replications. The elementary plot consisted of 3 internal rows of 0.6 m2 area (3 × 0.2 m × 1 m). Grains were ground with a Laboratory Mill 120 Perten (“Perten”, Sweden) (particle size < 500 μm) and the flour was used for the analysis. Phytic acid grain content was determined spectrophotometrically with a Shimadzu UV-1601 spectrophotometer (Shimadzu Corporation, Japan). The objectives of this study were to determine: i) the variability and stability of phytic acid content among the selected genotypes of bread and durum wheat, ii) the predominant source of variation, regarding genotype (G), environment (E) and genotype × environment interaction (GEI), in the multi-environment trial, and iii) the influence of climatic variables on the GEI for phytic acid content.
Based on the analysis of variance, variation in phytic acid content was predominantly influenced by the environment in durum wheat, while the GEI prevailed for the variation of phytic acid content in bread wheat. Phytic acid content expressed on a dry-mass basis was in the range 14.21-17.86 mg g-1 with an average of 16.05 mg g-1 for bread wheat, and 14.63-16.78 mg g-1 with an average of 15.91 mg g-1 for durum wheat. The average-environment coordination view of the genotype plus genotype × environment (GGE) biplot was used for the selection of the most desirable genotypes for breeding for low phytic acid content, in the sense of good stability and a lower level of phytic acid content. The most desirable genotypes of bread and durum wheat for breeding for phytic acid were Apache and 37EDUYT /07 No. 7849, respectively. Models of climatic factors were useful in interpreting a high percentage (> 91%) of the GEI for phytic acid content; they included relative humidity in June, sunshine hours in April, mean temperature in April and winter moisture reserves for genotypes of bread wheat, as well as precipitation in June and April, maximum temperature in April and mean temperature in June for genotypes of durum wheat.
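The G/E/GEI partitioning reported above follows from the standard two-way ANOVA decomposition for a balanced trial (15 genotypes × 6 site-season environments × 4 replications). A minimal NumPy sketch with simulated data, not the authors' measurements:

```python
# Two-way ANOVA sums-of-squares decomposition for a balanced G x E trial.
# Simulated phytic acid values [mg/g]; shape: 15 genotypes x 6 envs x 4 reps.
import numpy as np

rng = np.random.default_rng(1)
y = 16.0 + rng.normal(0, 0.5, size=(15, 6, 4))

grand = y.mean()
g_mean = y.mean(axis=(1, 2))     # genotype means
e_mean = y.mean(axis=(0, 2))     # environment means
ge_mean = y.mean(axis=2)         # cell (G x E) means
r = y.shape[2]                   # replications per cell

ss_g = 6 * r * ((g_mean - grand) ** 2).sum()                    # genotype SS
ss_e = 15 * r * ((e_mean - grand) ** 2).sum()                   # environment SS
ss_ge = r * ((ge_mean - g_mean[:, None] - e_mean[None, :] + grand) ** 2).sum()
ss_err = ((y - ge_mean[:, :, None]) ** 2).sum()                 # residual SS
ss_tot = ((y - grand) ** 2).sum()
```

Comparing the relative sizes of `ss_g`, `ss_e`, and `ss_ge` identifies the predominant source of variation, as reported for the two wheat species.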

Keywords: genotype × environment interaction, phytic acid, stability, variability

Procedia PDF Downloads 395
228 A Classical Caesarean Section with Peripartum Hysterectomy at 27+3 Weeks Gestation for Placenta Accreta

Authors: Huda Abdelrhman Osman Ahmed, Paul Feyi Waboso

Abstract:

Introduction: Placenta accreta spectrum (PAS) disorders present a significant challenge in obstetric management due to the high risk of hemorrhage and potential complications at delivery. This case describes a pregnancy at 27+3 weeks gestation in a patient with placenta accreta managed with classical cesarean section and peripartum hysterectomy. Case Description: A Gravida 4 Para 3 patient presented at 27+3 weeks gestation with painless, unprovoked vaginal bleeding and an estimated blood loss (EBL) of 300 mL. At the 20+5 week anomaly scan, an anterior placenta previa was identified covering the os, containing lacunae and showing signs of myometrial thinning. At a 24+1 week scan conducted at a tertiary center, further imaging indicated placenta increta with invasion into the myometrium and potential areas of placenta percreta. The patient’s past obstetric history included three previous cesarean sections, with no significant medical or surgical history. Social history revealed heavy smoking but no alcohol use. No drug allergies were reported. Given the risks associated with PAS, a management plan was formulated, including an MRI at a later stage and cesarean delivery with a possible hysterectomy between 34 and 36 weeks. However, at 27+3 weeks, the patient experienced another episode of vaginal bleeding (EBL 500 mL), necessitating immediate intervention. Management: As the patient was unstable, she was not transferred to the tertiary center. Full informed consent was obtained. Multidisciplinary team (MDT) planning included group-and-crossmatching of 4 units, uterotonics, tranexamic acid, blood products, cryoprecipitate, cell salvage, two obstetric consultants and an anesthetic consultant, blood bank and hematologist awareness, and HDU bed and ITU availability. We assisted in performing a classical Caesarean section, where the urologist inserted JJ ureteric stents; following this, we also assisted in a total abdominal hysterectomy with conservation of the ovaries. 4 units of RBC and 1 unit of FFP were transfused.
The total blood loss was 2.3 L. Outcome: The procedure successfully achieved hemostasis, and the neonate was delivered with subsequent transfer to a neonatal intensive care unit for management. The patient’s postoperative course was monitored closely with no immediate complications. Discussion: This case highlights the complexity and urgency in managing placenta accreta spectrum disorders, particularly with the added challenges posed by remote location and limited tertiary support. The need for rapid decision-making and interdisciplinary coordination is emphasized in such high-risk obstetric cases. The case also underscores the potential for surgical intervention and the importance of family involvement in emergent care decisions. Conclusion: Placenta accreta spectrum disorders demand meticulous planning and timely intervention. This case contributes to understanding PAS management at earlier gestational ages and provides insights into the challenges posed by access to tertiary care, especially in urgent situations.

Keywords: placenta accreta, hysterectomy, MDT, prematurity

Procedia PDF Downloads 14
227 MEIOSIS: Museum Specimens Shed Light on Biodiversity Shrinkage

Authors: Zografou Konstantina, Anagnostellis Konstantinos, Brokaki Marina, Kaltsouni Eleftheria, Dimaki Maria, Kati Vassiliki

Abstract:

Body size is crucial to ecology, influencing everything from individual reproductive success to the dynamics of communities and ecosystems. Understanding how temperature affects variations in body size is vital for both theoretical and practical purposes, as changes in size can modify trophic interactions by altering predator-prey size ratios and changing the distribution and transfer of biomass, which ultimately impacts food web stability and ecosystem functioning. Notably, a decrease in body size is frequently mentioned as the third "universal" response to climate warming, alongside shifts in distribution and changes in phenology. This trend is backed by ecological theories like the temperature-size rule (TSR) and Bergmann's rule, which have been observed in numerous species, indicating that many species are likely to shrink in size as temperatures rise. However, the thermal responses related to body size are still contradictory, and further exploration is needed. To tackle this challenge, we developed the MEIOSIS project, aimed at providing valuable insights into the relationships between species' body size, species' traits, environmental factors, and their response to climate change. We combined a digitized collection of butterflies from the Swiss Federal Institute of Technology in Zürich with our newly digitized butterfly collection from the Goulandris Natural History Museum in Greece to analyse trends over time. For a total of 23,868 images, the length of the right forewing was measured using ImageJ software. Each forewing was measured from the point at which the wing meets the thorax to the apex of the wing. The forewing length of museum specimens has been shown to have a strong correlation with wing surface area and has been utilized in prior studies as a proxy for overall body size. Temperature data corresponding to the years of collection were also incorporated into the datasets.
A second dataset was generated when a custom computer vision tool was implemented for automated morphological measurement of the digitized collection in Zürich. Using this second dataset, we corrected the manual ImageJ measurements, and a final dataset containing 31,922 samples was used for analysis. Setting time as a smoother variable, species identity as a random factor, and the length of the right wing (a proxy for body size) as the response variable, we ran a global model covering a maximum period of 110 years (1900-2010). We then investigated functional variability between different terrestrial biomes in a second model. Both models confirmed our initial hypothesis, showing a decreasing trend in body size over the years. We expect this first output to provide basic data for the next challenge, i.e., identifying the ecological traits that influence species' temperature-size responses, enabling more accurate prediction of the direction and intensity of a species' reaction to rising temperatures.
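The global model described above can be approximated, under strong simplifying assumptions (a linear rather than smoothed time effect, species handled by within-species centering rather than a true random factor), as a pooled within-species regression of wing length on year. The data here are simulated:

```python
# Simplified linear stand-in for the abstract's GAM: wing length vs. year
# with per-species intercepts absorbed by centering. Simulated data only.
import numpy as np

rng = np.random.default_rng(2)
n_species, n_per = 50, 200
years = rng.integers(1900, 2011, size=(n_species, n_per)).astype(float)
species_mean = rng.normal(25.0, 5.0, size=(n_species, 1))   # mm, invented
true_slope = -0.005                                          # mm per year (shrinkage)
wing = species_mean + true_slope * (years - 1955) \
       + rng.normal(0, 0.8, size=years.shape)

# Center within species to remove species-level intercepts, then pool
x = (years - years.mean(axis=1, keepdims=True)).ravel()
y = (wing - wing.mean(axis=1, keepdims=True)).ravel()
slope = (x * y).sum() / (x * x).sum()   # pooled within-species OLS slope
```

A negative pooled slope corresponds to the decreasing body-size trend the two models detected.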

Keywords: butterflies, shrinking body size, museum specimens, climate change

Procedia PDF Downloads 13
226 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings by the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into account past energy demand values as well as the predicted temperature values from the weather forecasting models.
The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
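The two-stage setup described above implies a standard supervised windowing of the time series: past demand and (forecast) outdoor temperature as inputs, the next 24 hourly demand values as the target. A minimal sketch of that windowing with synthetic data follows; the LSTM itself (e.g., in Keras) is omitted, and all series are invented:

```python
# Sliding-window preparation of (samples, timesteps, features) LSTM input
# for 24h-ahead demand forecasting. Synthetic hourly series, 60 days.
import numpy as np

rng = np.random.default_rng(3)
hours = 24 * 60
t = np.arange(hours)
demand = 50 + 10 * np.sin(t * 2 * np.pi / 24) + rng.normal(0, 1, hours)  # MWh
temp_forecast = 10 + 8 * np.sin(t * 2 * np.pi / 24)                      # degC

lookback, horizon = 48, 24
X, Y = [], []
for i in range(lookback, hours - horizon):
    past = np.stack([demand[i - lookback:i],
                     temp_forecast[i - lookback:i]], axis=1)
    X.append(past)                      # (lookback, n_features) per sample
    Y.append(demand[i:i + horizon])     # next 24 hourly demand values
X, Y = np.array(X), np.array(Y)         # X: (samples, 48, 2), Y: (samples, 24)
```

The resulting `X` tensor has exactly the (samples, timesteps, features) shape recurrent layers expect; in practice the temperature channel would come from the first-stage weather model rather than a known series.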

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 143
225 Comparison of the Chest X-Ray and Computerized Tomography Scans Requested from the Emergency Department

Authors: Sahabettin Mete, Abdullah C. Hocagil, Hilal Hocagil, Volkan Ulker, Hasan C. Taskin

Abstract:

Objectives and Goals: An emergency department is a place where people can come for a multitude of reasons 24 hours a day, and it is easily accessible thanks to the self-sacrificing people who work there. However, the workload and overcrowding of emergency departments are increasing day by day. Under these circumstances, it is important to choose a quick, easily accessible and effective test for diagnosis; laboratory and imaging tests account for more than 40% of all emergency department costs. Despite all of the technological advances in imaging methods and the availability of computerized tomography (CT), the chest X-ray, an older imaging method, has not lost its appeal and effectiveness for nearly all emergency physicians. Advances in imaging methods are very convenient, but physicians should consider radiation dose, cost, and effectiveness, and imaging methods should be carefully selected and used. The aim of the study was to investigate the effectiveness of the chest X-ray for immediate diagnosis against the advancing technology by comparing chest X-ray and chest CT scan results of patients in the emergency department. Methods: Patients who presented to the emergency department of Bulent Ecevit University Faculty of Medicine between 1 September 2014 and 28 February 2015 were investigated retrospectively. Data were obtained via MIAMED (Clear Canvas Image Server v6.2, Toronto, Canada), the information management system in which patients' files are saved electronically in the clinic, and were retrospectively scanned. The study included 199 patients who were 18 or older and had both chest X-ray and chest CT imaging. Chest X-ray images were evaluated by the emergency medicine senior assistant in the emergency department, and the findings were saved to the study form. CT findings were obtained from data already reported by the radiology department. The chest X-ray was evaluated with seven questions in terms of technique and dose adequacy.
Patients’ age, gender, presenting complaints, comorbid diseases, vital signs, physical examination findings, diagnosis, chest X-ray findings and chest CT findings were evaluated. Data were saved and statistical analyses performed using SPSS 19.0 for Windows, with p < 0.05 accepted as statistically significant. Results: 199 patients were included in the study. Pneumonia was the most common diagnosis, found in 38.2% (n=76) of all patients. The chest X-ray imaging technique was appropriate in 31% (n=62) of all patients. There was no statistically significant difference (p > 0.05) between the two imaging methods (chest X-ray and chest CT) in determining the rates of displacement of the trachea, pneumothorax, parenchymal consolidation, increased cardiothoracic ratio, lymphadenopathy, diaphragmatic hernia, free air levels in the abdomen (in sections including the image), pleural thickening, parenchymal cyst, parenchymal mass, parenchymal cavity, parenchymal atelectasis and bone fractures. Conclusions: When imaging findings requiring rapid diagnosis were investigated, chest X-ray and chest CT findings matched at a high rate in patients imaged with an appropriate technique. However, chest X-rays evaluated in the emergency department were frequently taken with an inappropriate technique.
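The abstract does not name the paired test used to compare the two modalities on the same patients; for illustration, agreement on a single finding could be checked with an exact McNemar test, which reduces to a binomial test on the discordant pairs. The counts below are invented, not the study data:

```python
# Exact McNemar test for paired X-ray vs. CT findings on the same patients.
# Only the discordant pairs matter; counts here are hypothetical.
from scipy.stats import binomtest

b = 7   # finding positive on X-ray, negative on CT
c = 9   # finding negative on X-ray, positive on CT

# Exact McNemar: two-sided binomial test of b successes in b + c trials
p = binomtest(b, b + c, 0.5).pvalue
```

A non-significant p here would match the abstract's finding of no significant difference between the modalities for that finding.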

Keywords: chest x-ray, chest computerized tomography, chest imaging, emergency department

Procedia PDF Downloads 193
224 The Impact of Regulation of Energy Prices on Public Trust in Europe during Energy Crisis: A Cross-Sectional Study in the Aftermath of the Russia-Ukraine Conflict

Authors: Sempiga Olivier, Dominika Latusek-Jurczak

Abstract:

The conflict in Ukraine has had far-reaching economic consequences, not only for the countries directly involved but also for their trading partners and allies, and for the global economy in general. European Union (EU) countries, being among Ukraine's and Russia's major trading partners, have also felt the impact of the conflict on their economies. The energy sector has suffered the most, because Russia is a huge exporter of gas and other energy sources on which European countries rely. Energy is a locomotive of the economy, and once energy prices skyrocket there are spillover effects in other areas, causing the prices of different commodities to rise and thereby affecting people's socio-economic lifestyles. To minimize the socio-political and economic consequences of the energy crisis, the EU and its member states tightened their regulatory mechanisms to stop some energy firms from exploiting the crisis at the expense of the vulnerable public. The key question is to what extent the regulatory instruments put in place during the energy crisis have affected citizen trust in governing institutions. The question is of paramount importance after years of declining trust in the EU and in most countries in Europe. Earlier research has analysed how wars or global political risks relate to citizen trust in government and organizations, but very little empirical research has examined the relationship between regulatory instruments in times of crisis and citizen trust in government and institutions. Using data from INSEE (the French National Institute of Statistics and Economic Studies) and the European Social Survey (ESS), we carry out a multiple linear regression analysis and investigate the impact of EU and national regulation of energy prices on citizen trust.
To understand the dynamics between regulatory actions during crises and citizen trust, this study draws on the theoretical frameworks of institutional trust and regulatory legitimacy. Institutional trust theory posits that citizens' trust in government and institutions is influenced by perceptions of fairness, transparency, and efficacy in governance. Regulatory legitimacy, a related concept, suggests that regulatory measures, especially in response to crises, are more effective when perceived as just, necessary, and in the public interest. The results of this cross-sectional study show that regulatory frameworks strongly affect levels of trust, with the association varying from strong to moderate depending on country and period. This study contributes to the understanding of the vital relationship between regulatory measures implemented during crises and citizen trust in government institutions. By identifying the conditions under which trust is fostered or eroded, the findings provide policymakers with valuable insights into effective strategies for enhancing public confidence, ultimately guiding interventions that can mitigate the socio-political impacts of future energy crises.
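A minimal sketch of the multiple linear regression step described above, with simulated data standing in for the INSEE/ESS variables (the regulation index and control variable here are hypothetical, invented for illustration):

```python
# Multiple linear regression: trust ~ regulation index + control(s).
# Simulated data; coefficients below are invented, not the study's estimates.
import numpy as np

rng = np.random.default_rng(4)
n = 500
regulation = rng.normal(0, 1, n)   # hypothetical price-regulation strength index
income = rng.normal(0, 1, n)       # hypothetical control variable
trust = 5.0 + 0.4 * regulation + 0.2 * income + rng.normal(0, 1, n)

# Ordinary least squares via the normal equations (design matrix with intercept)
X = np.column_stack([np.ones(n), regulation, income])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)  # [intercept, b_reg, b_income]
```

In the study itself this would be run per country and period, which is how the strong-to-moderate variation in the association is obtained.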

Keywords: energy crisis, price, regulation, Russia-Ukraine conflict, trust

Procedia PDF Downloads 11
223 Implementing Equitable Learning Experiences to Increase Environmental Awareness and Science Proficiency in Alabama’s Schools and Communities

Authors: Carly Cummings, Maria Soledad Peresin

Abstract:

Alabama has a long history of racial injustice and unsatisfactory educational performance. In the 1870s, Jim Crow laws segregated public schools and disproportionately allocated funding and resources to white institutions across the South. Despite the Supreme Court ruling to integrate schools following Brown v. Board of Education in 1954, Alabama's school system continued to exhibit signs of segregation, compounded by "white flight" and the establishment of exclusive private schools, which still exist today. This discriminatory history has had a lasting impact on the state's education system, reflected in modern school demographics and achievement data. It is well known that Alabama struggles with educational performance, especially in science education, and on average, minority groups score the lowest in science proficiency. In Alabama, minority populations are concentrated in a region known as the Black Belt, which was once home to countless slave plantations and was the epicenter of the Civil Rights Movement. Today the Black Belt is characterized by a high density of woodlands and plays a significant role in Alabama's leading economic industry, forest products. Given the economic importance of forestry and agriculture to the state, environmental science proficiency is essential to its stability; however, it is neglected in the areas where it is needed most. To better understand the inequity of science education within Alabama, our study first investigates how geographic location, demographics and school funding relate to science achievement scores using ArcGIS and Pearson's correlation coefficient. Additionally, our study explores the implementation of a relevant, problem-based, active-learning lesson in schools. Relevant learning engages students by connecting material to their personal experiences; problem-based active learning involves real-world problem-solving through hands-on experiences.
Given Alabama’s significant woodland coverage, educational materials on forest products were developed with consideration of their relevance to students, especially those located in the Black Belt. Furthermore, to incorporate problem solving and active learning, the lesson centered on students using forest products to solve environmental challenges, such as water pollution, an increasing challenge within the state due to climate change. Pre- and post-assessment surveys were provided to teachers to measure the effectiveness of the lesson. In addition to pedagogical practices, community and mentorship programs are known to positively impact educational achievement. To this end, our work examines the results of surveys measuring education professionals' attitudes toward a local mentorship group within the Black Belt and its potential to address environmental and science literacy. Additionally, our study presents survey results from participants who attended an educational community event, gauging its effectiveness in increasing environmental and science proficiency. Our results demonstrate positive improvements in environmental awareness and science literacy with relevant pedagogy, mentorship, and community involvement. Implementing these practices can help provide equitable and inclusive learning environments and can better equip students with the skills and knowledge needed to bridge this historic educational gap within Alabama.
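The correlation step mentioned above (e.g., school funding vs. science achievement scores) can be sketched as follows; the figures are invented for illustration and do not reflect the study's data:

```python
# Pearson's correlation coefficient between per-pupil funding and science
# achievement, on invented district-level values.
import numpy as np

rng = np.random.default_rng(5)
n_districts = 120
funding = rng.normal(10_000, 2_000, n_districts)            # USD per pupil
scores = 0.002 * funding + rng.normal(40, 5, n_districts)   # proficiency score

r = np.corrcoef(funding, scores)[0, 1]  # Pearson's r
```

In the study this coefficient is computed alongside ArcGIS mapping, so that spatial clusters of low funding and low proficiency (e.g., in the Black Belt) can be identified together.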

Keywords: equitable education, environmental science, environmental education, science education, racial injustice, sustainability, rural education

Procedia PDF Downloads 69
222 Effects of the Exit from Budget Support on Good Governance: Findings from Four Sub-Saharan Countries

Authors: Magdalena Orth, Gunnar Gotz

Abstract:

Background: Domestic accountability, budget transparency and public financial management (PFM) are considered vital components of good governance in developing countries. The aid modality budget support (BS) promotes these governance functions in developing countries. BS engages in political decision-making and provides financial and technical support to the poverty reduction strategies of the partner countries. Nevertheless, many donors have withdrawn their support from this modality due to cases of corruption, fraud or human rights violations. This exit from BS is leaving a finance and governance vacuum in the countries. The evaluation team analyzed the consequences of terminating the use of this modality and found particularly negative effects for good governance outcomes. Methodology: The evaluation uses a qualitative (theory-based) approach consisting of a comparative case study design, which is complemented by a process-tracing approach. For the case studies, the team conducted over 100 semi-structured interviews in Malawi, Uganda, Rwanda and Zambia and used four country-specific, tailor-made budget analyses. In combination with a previous DEval evaluation synthesis on the effects of BS, the team was able to create a before-and-after comparison that yields causal effects. Main Findings: In all four countries, domestic accountability and budget transparency declined when other forms of pressure did not replace BS's mutual accountability mechanisms. In Malawi, a fraud scandal created pressure from society and from donors, so that accountability improved. In the other countries, these pressure mechanisms were absent, so domestic accountability declined. BS enables donors to actively participate in the political processes of the partner country, as donors transfer funds into the treasury of the partner country and conduct a high-level political dialogue.
The results confirm that the exit from BS created a governance vacuum that, if not compensated through external or internal pressure, leads to a deterioration of good governance. For example, in the case of highly aid-dependent Malawi, the possibility of a relaunch of BS provided sufficient incentives to push for governance reforms. Overall, the results show that the three good governance areas are negatively affected by the exit from BS. This stands in contrast to the positive effects found before the exit. The team concludes that the relationship is causal, because the before-and-after comparison coherently shows that the presence of BS correlates with positive effects and its absence with negative effects. Conclusion: These findings strongly suggest that BS is an effective modality to promote governance, and its abolishment is likely to cause governance disruptions. Donors and partner governments should find ways to re-engage in closely coordinated policy-based aid modalities. In addition, a coordinated and carefully managed exit strategy should be in place before an exit from similar modalities is considered. In particular, a continued framework of mutual accountability and a high-level political dialogue should be aspired to, in order to maintain the pressure and oversight required to achieve good governance.

Keywords: budget support, domestic accountability, public financial management and budget transparency, Sub-Sahara Africa

Procedia PDF Downloads 155
221 Magnetic Single-Walled Carbon Nanotubes (SWCNTs) as Novel Theranostic Nanocarriers: Enhanced Targeting and Noninvasive MRI Tracking

Authors: Achraf Al Faraj, Asma Sultana Shaik, Baraa Al Sayed

Abstract:

Specific and effective targeting of drug delivery systems (DDS) to cancerous sites remains a major challenge for better diagnosis and therapy. Recently, SWCNTs, with their unique physicochemical properties and ability to cross the cell membrane, have shown promise in the biomedical field. The purpose of this study was first to develop biocompatible iron oxide-tagged SWCNTs as diagnostic nanoprobes to allow their noninvasive detection using MRI and their preferential targeting in a breast cancer murine model by placing an optimized flexible magnet over the tumor site. Magnetic targeting was combined with specific antibody-conjugated SWCNT active targeting. The therapeutic efficacy of doxorubicin-conjugated SWCNTs was assessed, and the superiority of diffusion-weighted (DW-) MRI as a sensitive imaging biomarker was investigated. Short polyvinylpyrrolidone (PVP)-stabilized water-soluble SWCNTs were first developed, tagged with iron oxide nanoparticles and conjugated with Endoglin/CD105 monoclonal antibodies. They were then conjugated with doxorubicin. The SWCNT conjugates were extensively characterized using TEM, UV-Vis spectrophotometry, dynamic light scattering (DLS) zeta potential analysis and electron spin resonance (ESR) spectroscopy. Their MR relaxivities (i.e., r1 and r2*) were measured at 4.7 T, and their iron content and metal impurities were quantified using ICP-MS. SWCNT biocompatibility and drug efficacy were then evaluated both in vitro and in vivo using a set of immunological assays. Luciferase-enhanced bioluminescence 4T1 mouse mammary tumor cells (4T1-Luc2) were injected into the right inguinal mammary fat pad of Balb/c mice. Tumor-bearing mice received either free doxorubicin (DOX) drug or SWCNTs with or without either DOX or iron oxide nanoparticles.
A multi-pole 10 × 10 mm high-energy flexible magnet was maintained over the tumor site for 2 hours post-injection, and its properties and polarity were optimized to allow enhanced magnetic targeting of SWCNTs toward the primary tumor site. Tumor volume was quantified during the follow-up investigation using a fast spin echo MRI sequence. In order to detect the homing of SWCNTs to the main tumor site, a susceptibility-weighted multi-gradient echo (MGE) sequence was used to generate T2* maps. Apparent diffusion coefficient (ADC) measurements were also performed as a sensitive imaging biomarker providing earlier and better assessment of disease treatment. At several times post-SWCNT injection, histological analyses were performed on tumor extracts, and iron-loaded SWCNTs were quantified using ICP-MS in the tumor sites, liver, spleen, kidneys, and lung. The optimized multi-pole magnet revealed enhanced targeting of magnetic SWCNTs to the primary tumor site, which was found to be much higher than the active targeting achieved using antibody-conjugated SWCNTs. Iron loading allowed their sensitive noninvasive tracking after intravenous administration using MRI. The active targeting of doxorubicin through magnetic antibody-conjugated SWCNT nanoprobes was found to considerably reduce the primary tumor size and may have inhibited the development of metastasis in the lungs of tumor-bearing mice. ADC measurements in DW-MRI were found to significantly increase in a time-dependent manner after the injection of DOX-conjugated SWCNT complexes.
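The r2* relaxivity measurement mentioned above is conventionally obtained as the slope of the relaxation rate (1/T2*) against contrast-agent concentration. A sketch with hypothetical numbers, not the study's values:

```python
# Relaxivity estimate: fit relaxation rate R2* = 1/T2* vs. iron concentration.
# T2* values below are invented to lie on a roughly linear R2*(conc) line.
import numpy as np

conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])          # iron concentration [mM]
t2_star = np.array([50.0, 28.6, 20.0, 12.5, 7.1])   # measured T2* [ms]

rate = 1000.0 / t2_star            # R2* in 1/s (convert ms -> s)
r2_star, intercept = np.polyfit(conc, rate, 1)
# slope = relaxivity r2* [1/(mM*s)]; intercept = baseline tissue R2*
```

The same linear fit against phantom concentrations is how r1 would be obtained from T1 measurements; the abstract reports both at 4.7 T.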

Keywords: single-walled carbon nanotubes, nanomedicine, magnetic resonance imaging, cancer diagnosis and therapy

Procedia PDF Downloads 329
220 Neutrophil-to-Lymphocyte Ratio: A Predictor of Cardiometabolic Complications in Morbid Obese Girls

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity is a low-grade inflammatory state. Childhood obesity is a multisystem disease, which is associated with a number of complications as well as potentially negative consequences. Gender is an important universal risk factor for many diseases, and hematological indices differ significantly by gender. This should be considered during the evaluation of obese children. The aim of this study is to detect hematologic indices that differ by gender in morbid obese (MO) children. A total of 134 MO children took part in this study. The parents filled in an informed consent form, and approval from the Ethics Committee of Namik Kemal University was obtained. Subjects were divided into two groups based on their gender (64 females aged 10.2±3.1 years and 70 males aged 9.8±2.2 years; p ≥ 0.05). Waist-to-hip and head-to-neck ratios as well as body mass index (BMI) values were calculated. Children whose WHO BMI-for-age-and-sex percentile values were > 99 were defined as MO. Hematological parameters [hemoglobin, hematocrit, erythrocyte count, mean corpuscular volume, mean corpuscular hemoglobin, mean corpuscular hemoglobin concentration, red blood cell distribution width, leukocyte count, neutrophil %, lymphocyte %, monocyte %, eosinophil %, basophil %, platelet count, platelet distribution width, mean platelet volume] were determined by an automatic hematology analyzer. SPSS was used for the statistical analyses, with p ≤ 0.05 as the threshold for statistical significance. The groups had mean±SD BMI values of 26.9±3.4 kg/m2 for males and 27.7±4.4 kg/m2 for females (p ≥ 0.05). There was no significant difference between the ages of females and males (p ≥ 0.05). Males had significantly higher waist-to-hip ratios (0.95±0.08 vs 0.91±0.08; p=0.005) and mean corpuscular hemoglobin concentration values (33.6±0.92 vs 33.1±0.83; p=0.001) than females.
Significantly elevated neutrophil counts (4.69±1.59 vs 4.02±1.42; p=0.011) and neutrophil-to-lymphocyte ratios (1.70±0.71 vs 1.39±0.48; p=0.004) were detected in females. There was no statistically significant difference between the groups in terms of C-reactive protein values (p ≥ 0.05). Adipose tissue plays important roles in the development of obesity and associated diseases such as metabolic syndrome and cardiovascular diseases (CVDs). These diseases may cause changes in complete blood cell count parameters, and such alterations are even more important during childhood. Significant gender effects on the changes in neutrophils, one of the white blood cell subsets, were observed. The findings of the study demonstrate the importance of considering gender in clinical studies. Males and females may have distinct leukocyte-trafficking profiles in inflammation. Within this age range and at this late stage of obesity, female children had more circulating neutrophils than male children, which may indicate an increased risk of CVDs. In recent years, females have accounted for about half of deaths from CVDs; our findings may therefore indicate an increasing tendency toward this risk in females starting from childhood.
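The indices compared above are simple quotients; a minimal Python sketch (with hypothetical input values chosen only for illustration, not the study's raw data) shows how BMI and the neutrophil-to-lymphocyte ratio are computed:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio from absolute counts
    (or from percentages of the same leukocyte differential)."""
    return neutrophils / lymphocytes

# Hypothetical example values:
print(round(bmi(55.0, 1.43), 1))   # BMI in kg/m^2
print(round(nlr(4.69, 2.76), 2))   # dimensionless ratio
```

Because NLR is a ratio of two cell populations measured in the same blood draw, it is robust to dilution effects that would shift both counts proportionally, which is one reason it is a popular inflammation marker.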

Keywords: children, gender, morbid obesity, neutrophil-to-lymphocyte ratio

Procedia PDF Downloads 273
219 Antioxidant Potential of Sunflower Seed Cake Extract in Stabilization of Soybean Oil

Authors: Ivanor Zardo, Fernanda Walper Da Cunha, Júlia Sarkis, Ligia Damasceno Ferreira Marczak

Abstract:

Lipid oxidation is one of the most important deteriorative processes in the oil industry, resulting in losses of the nutritional value of oils as well as changes in color, flavor and other physicochemical properties. Autoxidation of lipids occurs naturally between molecular oxygen and the unsaturated bonds of fatty acids, forming lipid free radicals, peroxyl radicals and hydroperoxides. To prevent lipid oxidation in vegetable oils, synthetic antioxidants such as butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butylhydroquinone (TBHQ) are commonly used. However, the use of synthetic antioxidants has been associated with several adverse health effects and toxicity. The use of natural antioxidants as stabilizers of vegetable oils has therefore been suggested as a sustainable alternative. The approach studied here is the use of natural extracts obtained mainly from fruits, vegetables and seeds, whose well-known antioxidant activity is related mainly to the presence of phenolic compounds. Sunflower seed cake is rich in phenolic compounds (1–4% of the total mass), with chlorogenic acid as the major constituent. The aim of this study was to evaluate the in vitro application of the phenolic extract obtained from sunflower seed cake as a retarder of the lipid oxidation reaction in soybean oil and to compare the results with a synthetic antioxidant. For this, soybean oil, provided by industry without any added antioxidants, was subjected to an accelerated storage test for 17 days at 65 °C. Six samples with different treatments were submitted to the test: a control sample without any added antioxidant; 100 ppm of the synthetic antioxidant BHT; a mixture of 50 ppm of BHT and 50 ppm of phenolic compounds; and 100, 500 and 1200 ppm of phenolic compounds. The phenolic compound concentration in the extract was expressed in gallic acid equivalents.
To evaluate the oxidative changes in the samples, aliquots were collected after 0, 3, 6, 10 and 17 days and analyzed for peroxide value and conjugated diene and triene values. The soybean oil sample initially had a peroxide value of 2.01 ± 0.27 meq of oxygen/kg of oil. On the third day of the treatment, only the samples treated with 100, 500 and 1200 ppm of phenolic compounds showed considerable oxidation retardation compared to the control sample. On the sixth day, the samples presented a considerable increase in peroxide value (higher than 13.57 meq/kg), and the higher the concentration of phenolic compounds, the lower the peroxide value observed. From the tenth day on, the samples had very high peroxide values (higher than 55.39 meq/kg), and only the sample containing 1200 ppm of phenolic compounds presented significant oxidation retardation. The samples containing the phenolic extract were more efficient at avoiding the formation of primary oxidation products, indicating effectiveness in retarding the reaction. Similar results were observed for conjugated dienes and trienes. Based on the results, phenolic compounds, especially chlorogenic acid (the major phenolic compound of sunflower seed cake), can be considered a potential partial or even total substitute for synthetic antioxidants.
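The peroxide values tracked above are conventionally obtained by iodometric titration; a short sketch illustrates the standard titration formula and how ppm-level antioxidant dosing translates into masses. The titration formula is the standard expression, not detailed in the abstract, and the dosing helper with its example values is hypothetical:

```python
def peroxide_value(titrant_ml: float, blank_ml: float,
                   normality: float, sample_g: float) -> float:
    """Peroxide value in meq active oxygen per kg of oil,
    from a standard iodometric titration."""
    return (titrant_ml - blank_ml) * normality * 1000.0 / sample_g

def dose_for_ppm(oil_g: float, ppm: float) -> float:
    """Grams of antioxidant needed to reach a given ppm (mg/kg) in a batch of oil."""
    return oil_g * ppm / 1e6

# Hypothetical run: 1.0 mL of 0.01 N thiosulfate (blank 0.0 mL), 5 g oil sample:
print(peroxide_value(1.0, 0.0, 0.01, 5.0))   # meq O2 / kg
# Grams of phenolic extract for 1200 ppm in a 500 g batch of oil:
print(dose_for_ppm(500.0, 1200.0))           # g
```

Expressing doses in ppm (mg per kg of oil) keeps the treatments comparable across batch sizes, which is why the abstract reports 100, 500 and 1200 ppm rather than absolute masses.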

Keywords: chlorogenic acid, natural antioxidant, vegetables oil deterioration, waste valorization

Procedia PDF Downloads 264