Search results for: synthetic dataset
320 Stilbenes as Sustainable Antimicrobial Compounds to Control Vitis vinifera Diseases
Authors: David Taillis, Oussama Becissa, Julien Gabaston, Jean-Michel Merillon, Tristan Richard, Stephanie Cluzet
Abstract:
Nowadays, there is strong pressure to reduce synthetic phytosanitary inputs in vineyards. It is, therefore, necessary to find viable alternatives to protect the vine against its major diseases. For this purpose, we suggest the use of a plant extract enriched in antimicrobial compounds. Because it is produced from vine trunks and roots, which are co-products of wine production, the extract is part of a circular economy. The antimicrobial molecules present in this plant material are polyphenols and, more particularly, stilbenes, which are derived from a common base, the resveratrol unit, and are well-known vine phytoalexins. The stilbenoids were extracted from trunks and roots (30/70, w/w) by a double extraction with ethyl acetate, followed by enrichment by liquid-liquid extraction. The produced extract was characterized by UHPLC-MS, then its antimicrobial activities were tested on Plasmopara viticola and Botrytis cinerea in the laboratory and/or in the greenhouse and vineyard. The major compounds were purified, and their antimicrobial activity was evaluated on B. cinerea. Moreover, after spraying, the effect of the stilbene extract on plant defence status was evaluated by analysis of defence gene expression. UHPLC-MS analysis revealed that the extract contains 50% stilbenes, with resveratrol, ε-viniferin and r-viniferin as major compounds. The extract showed antimicrobial activity on P. viticola, with IC₅₀ and IC₁₀₀ values of 90 and 300 mg/L, respectively, in the laboratory. In addition, it inhibited 40% of downy mildew development in the greenhouse. However, probably because of the sensitivity of stilbenes to the environment, such as UV degradation, no activity was observed in the vineyard against P. viticola development. For B. cinerea, the extract IC₅₀ was 123 mg/L, with resveratrol and ε-viniferin being the most active stilbenes (IC₅₀ of 88 and 142 mg/L, respectively). The analysis of defence gene expression revealed that the extract can induce the expression of some defence genes 24, 48, and 72 hours after treatment, meaning that the extract has a defence-stimulating effect for at least the first three days after treatment. In conclusion, we produced a plant extract enriched in stilbenes with antimicrobial properties against two major grapevine pathogens, P. viticola and B. cinerea. In addition, we showed that this extract elicits plant defences. This extract can therefore represent, after formulation development, a viable eco-friendly alternative for vineyard protection. Subsequently, the effect of the stilbenoid extract on primary metabolism will be evaluated by quantitative NMR.
Keywords: antimicrobial, bioprotection, grapevine, Plasmopara viticola, stilbene
Procedia PDF Downloads 218
319 Spatial Mapping of Variations in Groundwater of Taluka Islamkot Thar Using GIS and Field Data
Authors: Imran Aziz Tunio
Abstract:
Islamkot is an underdeveloped sub-district (taluka) in the Tharparkar district of Sindh province, Pakistan, located between latitude 24°25'19.79"N to 24°47'59.92"N and longitude 70° 1'13.95"E to 70°32'15.11"E. Islamkot has an arid desert climate, and the region is generally devoid of perennial rivers, canals, and streams. It is highly dependent on rainfall, which is not considered a reliable surface water source, so groundwater has been the only key source of water for many centuries. To assess groundwater potential, an electrical resistivity survey (ERS) was conducted in Islamkot Taluka. Groundwater investigations comprising 128 vertical electrical soundings (VES) were collected to determine the groundwater potential and obtain qualitative and quantitative layered resistivity parameters. The PASI Model 16 GL-N Resistivity Meter was used with a Schlumberger electrode configuration, with half current electrode spacing (AB/2) ranging from 1.5 to 100 m and potential electrode spacing (MN/2) from 0.5 to 10 m. The data were acquired with a maximum current electrode spacing of 200 m. Data processing for the delineation of dune sand aquifers involved data inversion, and interpretation of the inversion results was aided by forward modeling. The measured geo-electrical parameters were examined with Interpex IX1D software, and apparent resistivity curves and synthetic model layered parameters were mapped in the ArcGIS environment using the Inverse Distance Weighting (IDW) interpolation technique. Qualitative interpretation of the VES data shows that the number of geo-electrical layers in the area varies from three to four, with different resistivity values detected. Of the 128 VES model curves, 42 are three-layered and 86 are four-layered. The resistivity of the first subsurface layer (loose surface sand) varied from 16.13 Ωm to 3353.3 Ωm and its thickness from 0.046 m to 17.52 m. The resistivity of the second subsurface layer (semi-consolidated sand) varied from 1.10 Ωm to 7442.8 Ωm and its thickness from 0.30 m to 56.27 m. The resistivity of the third subsurface layer (consolidated sand) varied from 0.00001 Ωm to 3190.8 Ωm and its thickness from 3.26 m to 86.66 m. The resistivity of the fourth subsurface layer (silt and clay) varied from 0.0013 Ωm to 16264 Ωm and its thickness from 13.50 m to 87.68 m. The Dar Zarrouk parameters were: longitudinal unit conductance S from 0.00024 to 19.91 mho; transverse unit resistance T from 7.34 to 40080.63 Ωm²; longitudinal resistivity RS from 1.22 to 3137.10 Ωm; and transverse resistivity RT from 5.84 to 3138.54 Ωm. The ERS data and Dar Zarrouk parameters were mapped, revealing that the study area has groundwater potential in the subsurface.
Keywords: electrical resistivity survey, GIS & RS, groundwater potential, environmental assessment, VES
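For readers following the geo-electrical analysis above, the Dar Zarrouk parameters are simple aggregates of the inverted layer resistivities ρᵢ and thicknesses hᵢ: S = Σ hᵢ/ρᵢ, T = Σ hᵢρᵢ, RS = H/S and RT = T/H, where H is the total thickness. A minimal Python sketch follows; the layer values are illustrative, not taken from the Islamkot soundings.
```python
# Dar Zarrouk parameters from an inverted VES layer model (illustrative values).
# rho: layer resistivities (ohm-m), h: layer thicknesses (m) -- assumed numbers,
# not actual Islamkot sounding results.
rho = [120.0, 45.0, 8.5]   # e.g. loose sand, semi-consolidated sand, consolidated sand
h = [2.0, 15.0, 40.0]

H = sum(h)                                  # total thickness of the section
S = sum(hi / ri for hi, ri in zip(h, rho))  # longitudinal unit conductance (mho)
T = sum(hi * ri for hi, ri in zip(h, rho))  # transverse unit resistance (ohm-m^2)
RS = H / S                                  # longitudinal resistivity (ohm-m)
RT = T / H                                  # transverse resistivity (ohm-m)

print(f"S={S:.4f} mho, T={T:.1f} ohm-m^2, RS={RS:.2f} ohm-m, RT={RT:.2f} ohm-m")
```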
Procedia PDF Downloads 110
318 The Analyzer: Clustering-Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction
Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera
Abstract:
E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on recommending that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points based on users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty, and ultimately drive sales.
Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling
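A minimal sketch of the standardize-reduce-cluster pipeline suggested by the keywords above (data standardization, dimensionality reduction, data clustering). The five analytics are not named in the abstract, so the feature matrix here is a synthetic stand-in.
```python
# Sketch of the pipeline implied by the keywords: standardize the five user
# analytics, reduce dimensionality, then cluster users. Data is a stand-in;
# the paper does not name the five analytics.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((200, 5))  # 200 users x 5 behavioral analytics (synthetic)

X_std = StandardScaler().fit_transform(X)         # zero mean, unit variance per feature
X_red = PCA(n_components=2).fit_transform(X_std)  # compress correlated analytics

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_red)
print(kmeans.labels_[:10])  # group id per user, to which business rules attach
```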
Procedia PDF Downloads 72
317 Personalized Infectious Disease Risk Prediction System: A Knowledge Model
Authors: Retno A. Vinarti, Lucy M. Hederman
Abstract:
This research describes a knowledge model for a system which gives personalized alerts to users about infectious disease risks in the context of weather, location and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. Existing disease risk prediction research focuses mainly on utilizing raw historical data to yield seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from the Atlas of Human Infectious Disease (AHID) and the Centers for Disease Control and Prevention (CDC) as the basis for reasoning about infectious disease risk prediction. Using the CommonKADS methodology, the disease risk prediction task is modelled as an assignment (synthetic) task, proceeding from knowledge identification through specification and refinement to implementation. First, knowledge is gathered from the AHID, primarily from the epidemiology and risk group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-based Selection (FBS) is applied. The portion of the infectious disease risk model relating to Person comes out strongest, with Location next, and Weather weaker. Within the Person element, Age is strongest, Activity and Habits are moderate, and Blood type is weakest. Within the Location element, the General category (e.g., continent, region, country, island) scores much stronger than the Specific category (i.e., terrain features). Within the Weather element, the Less Precise category (i.e., season) comes out stronger than the Precise category (i.e., exact temperature or humidity intervals). However, given that some infectious diseases are significantly more serious than others, a frequency-based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g., odds ratio, hazard ratio and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain-of-infection concepts. A further step, verification in the knowledge refinement stage, might cause minor changes to the shape of the tree.
Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk
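To illustrate the Frequency-based Selection idea described above: count how often each candidate attribute appears across the per-disease knowledge entries and keep the most common ones. The attribute lists below are invented placeholders, not the AHID/CDC content used in the paper.
```python
# Illustrative sketch of Frequency-based Selection (FBS): keep attributes that
# appear most often across per-disease knowledge entries. Attribute lists are
# invented placeholders.
from collections import Counter

disease_attributes = {
    "dengue":  ["age", "location.general", "weather.season"],
    "cholera": ["age", "location.general", "activity"],
    "measles": ["age", "blood_type"],
}

counts = Counter(a for attrs in disease_attributes.values() for a in attrs)
threshold = 2  # keep attributes mentioned for at least two diseases
selected = [a for a, c in counts.items() if c >= threshold]
print(selected)  # e.g. ['age', 'location.general']
```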
Procedia PDF Downloads 242
316 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study
Authors: Jitendra Pratap, Jonathan Sivyer
Abstract:
Introduction: Dual energy/spectral CT scanning permits simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual energy scanning (i.e., dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast media diluted in sterile water. The concentrations of iodine ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned using the Dual Energy scan mode on a Siemens Somatom Force at 80kV/Sn150kV and 100kV/Sn150kV kilovoltage pairings. The same vials were scanned using the Spectral scan mode on a Philips iQon at 120kVp and 140kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing of the Dual Energy data was performed on vendor-specific Siemens Syngo VIA (VB40), and on Philips Intellispace Portal (Ver. 12) for the Spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the dual-layer CT scanner. The dual-source images showed a greater degree of deviation in measured iodine density for all vials, although the dataset acquired at 80kV/Sn150kV had higher accuracy. Conclusion: Spectral CT scanning by the dual-layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual-source technique.
Keywords: CT, iodine density, spectral, dual-energy
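A minimal sketch of the intraclass correlation step above, treating the known dilutions and the scanner measurements as two "raters" of each vial. The use of the pingouin package and all numbers are assumptions; the abstract does not name the software or report raw values.
```python
# Sketch of the intraclass correlation step: agreement between known iodine
# concentrations and scanner ROI measurements. Values are illustrative, and
# the choice of pingouin is an assumption.
import pandas as pd
import pingouin as pg

data = pd.DataFrame({
    "vial":  [1, 2, 3, 4] * 2,
    "rater": ["known"] * 4 + ["dual_layer"] * 4,
    "mg_ml": [0.5, 1.0, 5.0, 10.0,     # ground-truth dilutions
              0.52, 0.97, 5.1, 9.8],   # hypothetical ROI measurements
})

icc = pg.intraclass_corr(data=data, targets="vial", raters="rater", ratings="mg_ml")
print(icc[["Type", "ICC", "CI95%"]])
```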
Procedia PDF Downloads 119
315 Multiple Insecticide Resistance in Culex quinquefasciatus Say, from Siliguri, West Bengal, India
Authors: Minu Bharati, Priyanka Rai, Satarupa Dutta, Dhiraj Saha
Abstract:
Culex quinquefasciatus Say is a mosquito of immense public health concern due to its role in the transmission of filariasis, an endemic disease in 20 states and union territories of India, putting about 600 million people at risk of infection. The main strategies to control filariasis in India include anti-larval measures in urban areas, Indoor Residual Spray (IRS) in rural areas and mass diethylcarbamazine citrate (DEC) administration. Larval destruction measures and IRS rely on insecticides. In this study, susceptibility/resistance to insecticides was assessed in Culex quinquefasciatus mosquitoes collected from eight densely populated areas of the Siliguri subdivision, which has a high rate of filarial infection. To unveil the insecticide susceptibility status of Culex quinquefasciatus, bioassays were performed on field-caught mosquitoes against two major groups of insecticides, i.e., synthetic pyrethroids (SPs): 0.05% deltamethrin and 0.05% lambda-cyhalothrin, and organophosphates (OPs): 5% malathion and temephos, using World Health Organization (WHO) discriminating doses. The knockdown rates and knockdown times (KDT₅₀) were also noted against deltamethrin, lambda-cyhalothrin and malathion. In addition, the activities of major detoxifying enzymes, i.e., α-carboxylesterases, β-carboxylesterases and cytochrome P450 (CYP450) monooxygenases, were determined to assess the involvement, if any, of biochemical mechanisms in the resistance phenomenon. The results showed that the majority of the mosquito populations were moderately to severely resistant to both SPs and one OP, temephos, whereas most populations showed 100% susceptibility to malathion. The knockdown rates and KDT₅₀ values in response to the above-mentioned insecticides showed significant variation among the different populations. Variability in the activities of carboxylesterases and CYP450 monooxygenases was also observed, hinting at their contribution to insecticide resistance in some of the tested populations. It may be concluded that Culex quinquefasciatus has started developing resistance against deltamethrin, lambda-cyhalothrin and temephos in the Siliguri subdivision. Malathion seems to hold the greatest potential for control of these mosquitoes in this area, as revealed through this study. Adoption of an integrated mosquito management (IMM) strategy should be the prime objective of the authorities concerned, to limit both insecticide resistance and filariasis infections.
Keywords: Culex quinquefasciatus, detoxifying enzymes, insecticide resistance, knockdown rate
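To illustrate how a KDT₅₀ is typically estimated from a time-knockdown series such as those described above: fit a sigmoid in log-time and read off the time at 50% knockdown. The data points below are invented, not the Siliguri field results, and the logistic form is one common choice (probit regression is another).
```python
# Illustrative estimation of KDT50 from a time-knockdown series by fitting a
# logistic curve in log-time. Data points are invented stand-ins.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 10, 15, 20, 30, 40, 60], dtype=float)    # minutes of exposure
p = np.array([0.02, 0.10, 0.28, 0.47, 0.71, 0.86, 0.97])  # proportion knocked down

def logistic(log_t, log_kdt50, slope):
    return 1.0 / (1.0 + np.exp(-slope * (log_t - log_kdt50)))

(log_kdt50, slope), _ = curve_fit(logistic, np.log(t), p, p0=[np.log(20), 2.0])
print(f"KDT50 ~ {np.exp(log_kdt50):.1f} min (slope {slope:.2f})")
```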
Procedia PDF Downloads 255
314 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos
Authors: Nassima Noufail, Sara Bouhali
Abstract:
In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; our goal is to find groups based on similarity within the video. Applying k-means clustering to all frames is time-consuming; therefore, we started by identifying transition frames, where the scene in the video changes significantly, and then applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames. The Gaussian filter blurs the image and omits the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change; we then used this vector of filter responses as the input to our k-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map. The resulting visual map groups similar pixels together. We then computed a cluster score indicating how near clusters are to each other and plotted a signal representing frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change while semantically related events were happening in the scene. We marked the breakpoints at which the root mean square level of the signal changes significantly, and each breakpoint is an indication of the beginning of a new video segment. In the second part, for each segment from part one, we randomly selected a 16-frame clip and extracted spatiotemporal features using the convolutional 3D network C3D for every 16 frames, using a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence we used principal component analysis (PCA) for dimensionality reduction. The final part is the classification. The C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) for training, and the multi-class classifier is used to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and we achieved an accuracy that outperforms the state of the art by 1.2%.
Keywords: video segmentation, action detection, classification, K-means, C3D
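A minimal sketch of the per-frame feature step described above: Gaussian and Laplacian-of-Gaussian filter responses feeding k-means, producing a cluster-label "visual map". The frame data here is synthetic; in the paper the inputs are transition frames from the video.
```python
# Sketch: Gaussian + Laplacian-of-Gaussian filter responses per pixel, clustered
# with k-means to form a visual map. The frame is a synthetic stand-in.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frame = rng.random((120, 160))                   # stand-in grayscale frame

features = np.stack([
    gaussian_filter(frame, sigma=2).ravel(),     # blur: keeps low frequencies
    gaussian_laplace(frame, sigma=2).ravel(),    # LoG: rapid intensity changes
], axis=1)                                       # one response vector per pixel

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
visual_map = kmeans.labels_.reshape(frame.shape)  # each pixel -> nearest center
print(visual_map.shape, np.unique(visual_map))
```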
Procedia PDF Downloads 77
313 Factors Affecting Cesarean Section among Women in Qatar Using Multiple Indicator Cluster Survey Database
Authors: Sahar Elsaleh, Ghada Farhat, Shaikha Al-Derham, Fasih Alam
Abstract:
Background: Cesarean section (CS) delivery is one of the major concerns in both developing and developed countries. The rate of CS deliveries is on the rise globally, and especially in Qatar. Many socio-economic, demographic, clinical and institutional factors play an important role in cesarean sections. This study aims to investigate factors affecting the prevalence of CS among women in Qatar using UNICEF's Multiple Indicator Cluster Survey (MICS) 2012 database. Methods: The study focused on the women's questionnaire of the MICS, which was successfully distributed to 5699 participants. Following the study's inclusion and exclusion criteria, a final sample of 761 women aged 19-49 years who had given birth at least once before the survey was included. A number of socio-economic, demographic, clinical and institutional factors, identified through literature review and available in the data, were considered for the analyses. Bivariate and multivariate logistic regression models, along with multi-level modeling to investigate clustering effects, were undertaken to identify the factors that affect CS prevalence in Qatar. Results: The bivariate analyses showed that a number of categorical factors were statistically significantly associated with the dependent variable (CS). When identifying the factors from a multivariate logistic regression, the study found that only three categorical factors ('age of women', 'place of delivery' and 'baby weight') appeared to significantly affect CS among women in Qatar. Although the MICS dataset is based on a cluster survey, an exploratory multi-level analysis did not show any clustering effect, i.e., no significant variation in results at the higher level (households), suggesting that all analyses at the lower level (individual respondents) are valid without any significant bias in the results. Conclusion: The study found a statistically significant association between the dependent variable (CS delivery) and age of women, frequency of TV watching, assistance at birth and place of birth. These results need to be interpreted cautiously; however, they can be used as an evidence base for further research on cesarean section delivery in Qatar.
Keywords: cesarean section, factors, multiple indicator cluster survey, MICS database, Qatar
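A minimal sketch of the multivariate logistic regression step on survey-style data. The column names mirror factors mentioned in the abstract, but the records are fabricated stand-ins, not MICS 2012 data.
```python
# Sketch of a multivariate logistic regression with categorical predictors,
# as used to identify factors affecting CS. Records are fabricated stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 761
df = pd.DataFrame({
    "cs":          rng.integers(0, 2, n),  # 1 = cesarean delivery
    "age_group":   rng.choice(["19-29", "30-39", "40-49"], n),
    "place":       rng.choice(["public", "private"], n),
    "baby_weight": rng.choice(["low", "normal", "high"], n),
})

model = smf.logit("cs ~ C(age_group) + C(place) + C(baby_weight)", data=df).fit()
print(model.summary())          # odds ratios via np.exp(model.params)
```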
Procedia PDF Downloads 116
312 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection
Authors: Ali Hamza
Abstract:
Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and an expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images encompassing diverse cases is employed. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are employed to comprehensively assess segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals. In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation showcases a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network
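For reference, the two overlap metrics named above are straightforward to compute from binary masks. A minimal sketch with tiny synthetic masks:
```python
# Minimal computation of the Dice coefficient and Jaccard index used to score
# segmentation masks. Masks here are tiny synthetic arrays for illustration.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

truth = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]], dtype=bool)
pred  = np.array([[0, 1, 0], [0, 1, 1], [0, 1, 0]], dtype=bool)
print(f"Dice={dice(pred, truth):.3f}, Jaccard={jaccard(pred, truth):.3f}")
```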
Procedia PDF Downloads 84
311 Radical Scavenging Activity of Protein Extracts from Pulse and Oleaginous Seeds
Authors: Silvia Gastaldello, Maria Grillo, Luca Tassoni, Claudio Maran, Stefano Balbo
Abstract:
Antioxidants are attractive nowadays not only for their countless benefits to human and animal health, but also for the prospect of their use as food preservatives instead of synthetic chemical molecules. In this study, the radical scavenging activity of six protein extracts from pulse and oleaginous seeds was evaluated. The selected matrices are Pisum sativum (yellow pea from two different origins), Carthamus tinctorius (safflower), Helianthus annuus (sunflower), Lupinus luteus cv Mister (lupin) and Glycine max (soybean), since they are economically interesting for both human and animal nutrition. The seeds were ground, and proteins were extracted from 20 mg of powder with a specific vegetal-extraction kit. Proteins were quantified using the Bradford protocol, and scavenging activity was revealed using the DPPH assay, based on the decrease in absorbance of the DPPH radical (2,2-diphenyl-1-picrylhydrazyl) in the presence of antioxidant molecules. Different concentrations of the protein extract (1, 5, 10, 50, 100, 500 µg/ml) were mixed with DPPH solution (DPPH 0.004% in ethanol 70% v/v). Ascorbic acid was used as a scavenging activity standard reference at the same six concentrations as the protein extracts, while DPPH solution was used as the control. Samples and standard were prepared in triplicate and incubated for 30 minutes in the dark at room temperature, and the absorbance was read at 517 nm (ABS30). The average and standard deviation of the absorbance values were calculated for each concentration of samples and standard. Statistical analysis using Student's t-test and p-values was performed to assess the statistical significance of the scavenging activity difference between the samples (or standard) and the control (ABSctrl). The percentage of antioxidant activity was calculated using the formula [(ABSctrl-ABS30)/ABSctrl]*100. The obtained results demonstrate that all matrices showed antioxidant activity. Ascorbic acid, used as the standard, exhibits 96% scavenging activity at the concentration of 500 µg/ml. Under the same conditions, sunflower, safflower and yellow peas revealed the highest antioxidant performance among the matrices analyzed, with activities of 74%, 68% and 70%, respectively (p < 0.005). Although lupin and soybean exhibit lower antioxidant activity compared to the other matrices, they showed percentages of 46 and 36, respectively. All these data suggest the possibility of using undervalued edible matrices as antioxidant sources. However, further studies are necessary to investigate a possible synergic effect of several matrices as well as the impact of industrial processes for a large-scale approach.
Keywords: antioxidants, DPPH assay, natural matrices, vegetal proteins
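The antioxidant-activity formula above is a one-line calculation; a minimal sketch applied to illustrative triplicate absorbance readings (not the study's measured values):
```python
# The scavenging-activity formula from the abstract on illustrative readings.
import statistics

abs_ctrl = 0.820                    # DPPH solution alone at 517 nm
abs_30 = [0.215, 0.221, 0.209]      # sample replicates after 30 min in the dark

mean_abs30 = statistics.mean(abs_30)
scavenging = (abs_ctrl - mean_abs30) / abs_ctrl * 100  # [(ABSctrl-ABS30)/ABSctrl]*100
print(f"scavenging activity = {scavenging:.1f}% (sd {statistics.stdev(abs_30):.3f})")
```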
Procedia PDF Downloads 432
310 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles imprecision in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. Different input points make unique contributions to the decision surface. The algorithm is parallelized to reduce training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects and imbalanced and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
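A minimal sketch of an SVM with a hyperbolic tangent kernel, the kernel named above. scikit-learn's "sigmoid" kernel is tanh(γ·⟨x,y⟩ + coef0); the fuzzy-rough membership weighting and MapReduce parallelization of PFRSVM are not reproduced here.
```python
# Sketch: SVM with a tanh ("sigmoid") kernel on synthetic data. This shows only
# the kernel choice and the role of C, not the full PFRSVM system.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="sigmoid", C=1.0, gamma="scale")  # C governs the margin trade-off
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te),
      "support vectors:", clf.n_support_.sum())
```
Varying C here mirrors the abstract's study of prediction and generalization variability, and n_support_ gives the support-vector count used to assess the classifier.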
Procedia PDF Downloads 490
309 Peptide-Based Platform for Differentiation of Antigenic Variations within Influenza Virus Subtypes (Flutype)
Authors: Henry Memczak, Marc Hovestaedt, Bernhard Ay, Sandra Saenger, Thorsten Wolff, Frank F. Bier
Abstract:
Influenza viruses cause flu epidemics every year and serious pandemics at longer intervals. The only cost-effective protection against influenza is vaccination. Due to rapid mutation, new subtypes continuously appear, which requires annual reimmunization. For a correct vaccination recommendation, the circulating influenza strains must be detected promptly and precisely and characterized in terms of their antigenic properties. During the flu season 2016/17, a wrong vaccination recommendation was given because of the long interval between identification of the relevant influenza vaccine strains and the outbreak of the flu epidemic during the following winter. Due to such recurring incidents of vaccine mismatch, there is a great need to speed up the process chain from identifying the right vaccine strains to their administration. The monitoring of subtypes as part of this process chain is carried out by national reference laboratories within the WHO Global Influenza Surveillance and Response System (GISRS). To this end, thousands of viruses from patient samples (e.g., throat smears) are isolated and analyzed each year. Currently, this analysis involves complex and time-intensive (several weeks) animal experiments to produce specific hyperimmune sera in ferrets, which are necessary for the determination of the antigen profiles of circulating virus strains. These tests also suffer from difficulties in standardization and reproducibility, which restricts the significance of the results. To replace this test, a peptide-based assay for influenza virus subtyping from corresponding virus samples was developed. The differentiation of the viruses is achieved by a set of specifically designed peptidic recognition molecules which interact differently with the different influenza virus subtypes. The differentiation of influenza subtypes is performed by pattern recognition guided by machine learning algorithms, without any animal experiments. Synthetic peptides are immobilized in multiplex format on various platforms (e.g., 96-well microtiter plate, microarray). Afterwards, the viruses are incubated and analyzed, comparing different signaling mechanisms and a variety of assay conditions. Differentiation of a range of influenza subtypes, including H1N1, H3N2 and H5N1, as well as fine differentiation of single strains within these subtypes, is possible using the peptide-based subtyping platform. Thereby, the platform could be capable of replacing the current antigenic characterization of influenza strains using ferret hyperimmune sera.
Keywords: antigenic characterization, influenza-binding peptides, influenza subtyping, influenza surveillance
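A minimal sketch of the pattern-recognition step described above: classifying subtypes from a multiplexed peptide-binding signal pattern. The peptide count, signal model and classifier choice are all assumptions for illustration; the study's actual peptide set and readout are not reproduced here.
```python
# Sketch: classify influenza subtypes from per-isolate peptide-binding
# intensity patterns. Signals are synthetic; the classifier is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_peptides = 24
subtypes = ["H1N1", "H3N2", "H5N1"]

# one row per virus isolate: binding intensity of each immobilized peptide
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(40, n_peptides))
               for i in range(len(subtypes))])
y = np.repeat(subtypes, 40)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```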
Procedia PDF Downloads 156
308 Infusion of Skills for Undergraduate Scholarship into Teacher Education: Two Case Studies in New York and Florida
Authors: Tunde Szecsi, Janka Szilagyi
Abstract:
Students majoring in education are underrepresented in undergraduate scholarship. To enable and encourage teacher candidates to engage in scholarly activities, it is essential to infuse skills such as problem-solving, critical thinking, oral and written communication, collaboration and the utilization of information literacy into courses in teacher preparation programs. In this empirical study, we examined two teacher education programs – one in New York State and one in Florida – in terms of their approaches to the course-based infusion of skills for undergraduate research, and the effectiveness of this infusion. First, course-related documents such as syllabi, assignment descriptions, and course activities were reviewed and analyzed. The goal of the document analysis was to identify and describe the targeted skills and the pedagogical approaches and strategies for promoting research skills in teacher candidates. Next, a selection of teacher candidates' scholarly products from the institution in Florida was used as a data set to examine teacher candidates' skill development in the context of the identified assignments. This dataset was analyzed both quantitatively and qualitatively to describe the changes that occurred in teacher candidates' critical thinking, communication, and information literacy skills, and to uncover patterns in skill development at the two institutions. Descriptive statistics were calculated to explore the changes in these skills over a period of three years. The findings based on data from the teacher education program in Florida indicated a steady gain in written communication and critical thinking and a modest increase in information literacy. At the institution in New York, candidates' submission and success rates on the edTPA, a New York State teacher certification exam, were used as a measure of scholarly skills. Overall, although different approaches were used for infusing the development of scholarly skills into the courses, the results suggest that a holistic and well-orchestrated infusion of the skills into most courses in a teacher education program may result in steadily developing scholarly skills. These results offer essential implications for teacher education programs in terms of further improving teacher candidates' skills for engaging in undergraduate research and scholarship. In this presentation, our purpose is to showcase the two approaches developed by these teacher education programs to demonstrate how diverse approaches toward the promotion of undergraduate scholarship are responsive to the context of the teacher preparation programs.
Keywords: critical thinking, pedagogical strategies, teacher education, undergraduate student research
Procedia PDF Downloads 161
307 Apatite Flotation Using Fruits' Oil as Collector and Sorghum as Depressant
Authors: Elenice Maria Schons Silva, Andre Carlos Silva
Abstract:
The growing demand for raw materials has increased mining activities. The mineral industry faces the challenge of processing more complex ores, with very small particles and low grades, together with constant pressure to reduce production costs and environmental impacts. Froth flotation deserves special attention among the concentration methods for mineral processing. Besides its great selectivity for different minerals, flotation is a highly efficient method for processing fine particles. The process is based on the minerals' surface physicochemical properties, and the separation is only possible with the aid of chemicals such as collectors, frothers, modifiers, and depressants. In order to use sustainable and eco-friendly reagents, oils extracted from three different vegetable species (pequi pulp, macauba nut and pulp, and Jatropha curcas) were studied and tested as apatite collectors. Since the oils are not soluble in water, an alkaline hydrolysis (or saponification) was necessary before their contact with the minerals. The saponification was performed at room temperature. The tests with the new collectors were carried out at pH 9, and Flotigam 5806, a synthetic mix of fatty acids manufactured by Clariant and industrially adopted as an apatite collector, was used as the benchmark. To find a feasible replacement for cornstarch, the flour and starch of a graniferous variety of sorghum were tested as depressants. Apatite samples were used in the flotation tests. XRF (X-ray fluorescence), XRD (X-ray diffraction), and SEM/EDS (scanning electron microscopy with energy dispersive spectroscopy) were used to characterize the apatite samples. Zeta potential measurements were performed in the pH range from 3.5 to 12.5. A commercial cornstarch was used as the depressant benchmark. Four depressant dosages and pH values were tested. A statistical test was used to verify the influence of pH, dosage, and starch type on mineral recovery. For dosages equal to or higher than 7.5 mg/L, pequi oil recovered almost all apatite particles. On one hand, macauba pulp oil showed excellent results for all dosages, with more than 90% apatite recovery; on the other hand, with the nut oil, the highest recovery found was around 84%. Jatropha curcas oil was the second-best oil tested, and more than 90% of the apatite particles were recovered at a dosage of 7.5 mg/L. Regarding the depressant, the lowest apatite recovery with sorghum starch was found at a dosage of 1,200 g/t and pH 11, resulting in a recovery of 1.99%. The apatite recovery under the same conditions was 1.40% for sorghum flour (approximately 30% lower). Compared with cornstarch under the same conditions, sorghum flour produced an apatite recovery 91% lower.
Keywords: collectors, depressants, flotation, mineral processing
Procedia PDF Downloads 152
306 CO₂ Conversion by Low-Temperature Fischer-Tropsch
Authors: Pauline Bredy, Yves Schuurman, David Farrusseng
Abstract:
To fulfill climate objectives, the production of synthetic e-fuels using CO₂ as a raw material appears to be part of the solution. In particular, the Power-to-Liquid (PtL) concept combines CO₂ with hydrogen supplied from water electrolysis powered by renewable sources, and is currently gaining interest as it allows the production of sustainable, fossil-free liquid fuels. The process discussed here is an upgrading of the well-known Fischer-Tropsch synthesis. The concept involves two cascade reactions in one pot: first, the conversion of CO₂ into CO via the reverse water-gas shift (RWGS) reaction, followed by Fischer-Tropsch synthesis (FTS). Instead of using an Fe-based catalyst, which can carry out both reactions, we have chosen to decouple the two functions (RWGS and FT) over two different catalysts within the same reactor. The FTS should shift the equilibrium of the RWGS reaction (which alone would be limited to 15-20% conversion at 250°C) by converting the CO into hydrocarbons. This strategy should enable optimization of the catalyst pair and thus allow the reaction temperature to be lowered, thanks to the equilibrium shift, to gain selectivity in the liquid fraction. The challenge lies in maximizing the activity of the RWGS catalyst, but also in the ability of the FT catalyst to be highly selective. Methane production is the main concern, as the energetic barrier of CH₄ formation is generally lower than that of the RWGS reaction, so the goal is to minimize methane selectivity. Here we report the study of different combinations of copper-based RWGS catalysts with different cobalt-based FTS catalysts. We investigated their behavior under mild process conditions using high-throughput experimentation. Our results show that at 250°C and 20 bar, cobalt catalysts mainly act as methanation catalysts. Indeed, CH₄ selectivity never drops below 80% despite the addition of various promoters (Nb, K, Pt, Cu) to the catalyst and its coupling with active RWGS catalysts. However, we show that the activity of the RWGS catalyst has an impact and can lead to longer hydrocarbon chain selectivities (C₂⁺) of about 10%. We studied the influence of the reduction temperature on the activity and selectivity of the tandem catalyst system. Similar selectivity and conversion were obtained at reduction temperatures between 250-400°C. This raises the question of the active phase of the cobalt catalysts, which is currently being investigated by magnetic measurements and DRIFTS. Thus, by coupling the RWGS catalyst with a more selective FT catalyst, better results are expected. This was achieved using a cobalt/iron FTS catalyst: the CH₄ selectivity dropped to 62% at 265°C, 20 bar, and a GHSV of 2500 ml/h/gcat. We propose that the conditions used for the cobalt catalysts could have driven this methanation, because these catalysts are known to perform best around 210°C in classical FTS, whereas iron catalysts are more flexible but are also known to have RWGS activity.
Keywords: cobalt-copper catalytic systems, CO₂-hydrogenation, Fischer-Tropsch synthesis, hydrocarbons, low-temperature process
Procedia PDF Downloads 57
305 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow a 100% conversion or because existing infrastructure imposes limitations on the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the largest number of end-use technologies are small-scale but numerous appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable, and what imposes that limitation. This study explores a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We present tests from a burner designed for space heating and optimized for natural gas, as blends with increasing hydrogen content (starting from 25%) were burned, and explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability and general operational acceptability. Results show emissions, temperature, and flame length as a function of thermal load and the percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition, we present data on two factors that may be absolute in determining allowable hydrogen percentages. The first of these is flame blowback. Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive 'screech'. It will be important for equipment designers and users to find ways to avoid this or mitigate it if performance is to be deemed acceptable.
Keywords: blends, operational, domestic appliances, future system operation
Procedia PDF Downloads 23
304 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using an SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties, as sketched below.
Keywords: TESCAN, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
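A hypothetical sketch of post-processing a tabular particle-by-particle export to add a user-defined property. The column names and values below are invented stand-ins, not the actual TIMA export schema; consult the TIMA documentation for the real field names.
```python
# Hypothetical post-processing of a particle-by-particle export table, adding a
# user-defined property. Columns and values are invented illustrations.
import pandas as pd

particles = pd.DataFrame({                    # stand-in for a TIMA tabular export
    "particle_id": [1, 2, 3, 4],
    "area_um2":    [120.0, 85.0, 240.0, 60.0],
    "apatite_pct": [95.0, 40.0, 92.0, 10.0],  # modal apatite per particle
})

particles["ecd_um"] = (4 * particles["area_um2"] / 3.14159) ** 0.5  # equivalent circle diameter
liberated = particles[particles["apatite_pct"] >= 90]               # user-defined liberation cutoff

print(f"{len(liberated)} of {len(particles)} particles are >=90% apatite")
```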
Procedia PDF Downloads 109
303 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research on adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build neural networks upon them, integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = argmin_q D(y, ỹ) (1), where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks; this procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
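A minimal numerical sketch of the regularized formulation q̂* = argmin_q D(y, ỹ) + R(q) above, with a Tikhonov regularizer R(q) = (λ/2)||q||² and a linear forward map standing in for the PDE-based forward model; all sizes and values are illustrative.
```python
# Sketch: solve argmin_q 0.5*||A q - y||^2 + 0.5*lam*||q||^2 by gradient descent.
# The linear map A stands in for the PDE forward model; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))       # ill-posed: fewer observations than unknowns
q_true = np.zeros(50); q_true[10:15] = 1.0
y = A @ q_true + 0.01 * rng.normal(size=30)   # noisy observations

lam, step = 1e-2, 1e-3
q = np.zeros(50)
for _ in range(5000):
    grad = A.T @ (A @ q - y) + lam * q        # gradient of the regularized objective
    q -= step * grad

print("relative error:", np.linalg.norm(q - q_true) / np.linalg.norm(q_true))
```
In the paper's hybrid approach, the fixed penalty λ||q||² is replaced by priors learned with auto-decoder networks and physics-informed soft constraints; this sketch only shows why some R(q) is needed at all for an ill-conditioned problem.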
Procedia PDF Downloads 74
302 MicroRNA-1246 Expression Associated with Resistance to Oncogenic BRAF Inhibitors in Mutant BRAF Melanoma Cells
Authors: Jae-Hyeon Kim, Michael Lee
Abstract:
Intrinsic and acquired resistance limits the therapeutic benefits of oncogenic BRAF inhibitors in melanoma. MicroRNAs (miRNAs) regulate the expression of target mRNAs by repressing their translation. Thus, we investigated miRNA expression patterns in melanoma cell lines to identify candidate biomarkers for acquired resistance to BRAF inhibitors. Here, we used the Affymetrix miRNA V3.0 microarray profiling platform to compare miRNA expression levels in three cell lines: BRAF inhibitor-sensitive A375P BRAF V600E cells, their BRAF inhibitor-resistant counterparts (A375P/Mdr), and SK-MEL-2 BRAF-WT cells with intrinsic resistance to BRAF inhibitors. miRNAs with at least a two-fold change in expression between BRAF inhibitor-sensitive and -resistant cell lines were identified as differentially expressed. Averaged intensity measurements identified 138 and 217 miRNAs that were differentially expressed by two-fold or more between 1) A375P and A375P/Mdr and 2) A375P and SK-MEL-2, respectively. Hierarchical clustering revealed differences in miRNA expression profiles between BRAF inhibitor-sensitive and -resistant cell lines for miRNAs involved in intrinsic and acquired resistance to BRAF inhibition. In particular, 43 miRNAs were identified whose expression was consistently altered in the two BRAF inhibitor-resistant cell lines, regardless of intrinsic or acquired resistance: 25 miRNAs were consistently upregulated and 18 downregulated by more than two-fold. Although some discrepancies were detected when the miRNA microarray data were compared with qPCR-measured expression levels, qRT-PCR results for five miRNAs (miR-3617, miR-92a1, miR-1246, miR-1936-3p, and miR-17-3p) showed excellent agreement with the microarray experiments. To further investigate the cellular functions of these miRNAs, we examined their effects on cell proliferation. Synthetic oligonucleotide miRNA mimics were transfected into the three cell lines, and proliferation was quantified using a colorimetric assay. Of the five miRNAs tested, only miR-1246 altered the proliferation of A375P/Mdr cells. Transfection of the miR-1246 mimic strongly conferred PLX-4720 resistance to A375P/Mdr cells, implying that miR-1246 upregulation confers acquired resistance to BRAF inhibition. We also found that PLX-4720 caused much greater G2/M arrest in A375P/Mdr cells transfected with the miR-1246 mimic than in scrambled RNA-transfected cells. Additionally, the miR-1246 mimic partially caused resistance to autophagy induction by PLX-4720. These results indicate that autophagy plays an essential death-promoting role in PLX-4720-induced cell death. Taken together, these results suggest that miRNA expression profiling in melanoma cells can provide valuable information on a network of BRAF inhibitor resistance-associated miRNAs.
Keywords: microRNA, BRAF inhibitor, drug resistance, autophagy
Procedia PDF Downloads 325
301 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization
Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder
Abstract:
In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Microfluidic devices therefore make an important scientific contribution, e.g., in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These microfluidic devices enable fast changes of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems. Here, every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming or too slow. An exception is electrospray ionization mass spectrometry (ESI-MS). The combination of a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device which enables the modification of different parameters while simultaneously monitoring the effect on the reaction within a single run. Using common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities was developed. As an interface for the MS detection, we use a steel capillary for ESI and improve the spray stability with Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions; this is exemplarily shown for a domino Knoevenagel hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations and solvent compositions were investigated. Due to the high repetition rate of the droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent and the most effective catalyst concentration. The developed system can help to determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we make an important step in the field of combining droplet-based microfluidics with organic reaction screening.
Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening
Procedia PDF Downloads 301
300 Analyze the Properties of Different Surgical Sutures
Authors: Doaa H. Elgohary, Tamer F. Khalifa, Mona M. Salem, M. A. Saad, Ehab Haider Sherazy
Abstract:
Textiles have conquered new areas over the past three decades, including agriculture, transportation, filtration, the military, and medicine. The use of textiles in the medical field has increased significantly in recent years and covers almost everything. Medical textiles represent a huge market, as they are widely used not only in hospitals, hygiene, and healthcare but also in hotels and other environments where hygiene is required. However, not all fibers are suitable for the manufacture of medical textile products. Some special properties are required of the manufactured materials, e.g., strength, elasticity, spinnability, etc. In addition to the usual fiber properties, desirable properties for medical fibers include non-toxicity, sterilizability, biocompatibility, biodegradability, good absorbability, softness, and freedom from additives and impurities. Suturing is one of the most common practices in the medical field; a suture is a biomaterial device, either natural or synthetic, used to connect blood vessels and join tissues. In addition to being very strong, suture material should dissolve easily in bodily fluids and lose strength as the tissue gains strength. In this work, a survey of suture materials found that silk, VICRYL and polypropylene were the materials most used, in varying quantities. The research involved the analysis of 36 samples of three different materials (the most commonly used), imported from four different companies. Each company supplied three different materials (silk, VICRYL and polypropylene) in three different gauges (4, 3.5 and 3 metric). The results of the study were tabulated, presented, and discussed. Statistical analysis was applied to the experimental results to examine the relationships between variables and to assess sample performance against the intended function. Analysis of the imported sutures shows that VICRYL sutures had the highest tensile strength, toughness, knot tensile strength and knot toughness, followed by polypropylene and silk. As yarn count, weight and diameter increase, tensile strength and toughness increase, while elongation and knot tension decrease. The multifilament yarn constructions (silk and VICRYL) score higher than the monofilament construction (polypropylene), with increases in tenacity, toughness, knot tensile strength and knot toughness.
Keywords: biodegradable yarns, braided sutures, irritation, knot tying, medical textiles, surgical sutures, wound healing
Procedia PDF Downloads 59
299 Chromium (VI) Removal from Aqueous Solutions by Ion Exchange Processing Using Eichrom 1-X4, Lewatit Monoplus M800 and Lewatit A8071 Resins: Batch Ion Exchange Modeling
Authors: Havva Tutar Kahraman, Erol Pehlivan
Abstract:
In recent years, environmental pollution from wastewater has risen critically. Effluents discharged from various industries cause this challenge. Different types of pollutants, such as organic compounds, oxyanions, and heavy metal ions, threaten human health and all other living things. Heavy metals are considered one of the main pollutant groups in wastewater, which creates a great need to apply and enhance water treatment technologies. Among the adopted treatment technologies, adsorption is gaining more and more attention because of its easy operation, simplicity of design, and versatility. Ion exchange is one of the preferred methods for the removal of heavy metal ions from aqueous solutions and has found widespread application in water remediation technologies during the past several decades. The purpose of this study is the removal of hexavalent chromium, Cr(VI), from aqueous solutions. Cr(VI) is a well-known, highly toxic metal that modifies the DNA transcription process and causes important chromosomal aberrations. The treatment and removal of this heavy metal have received great attention in order to keep it within permitted legal standards. The present paper investigates some aspects of the use of three anion exchange resins: Eichrom 1-X4, Lewatit Monoplus M800, and Lewatit A8071. Batch adsorption experiments were carried out to evaluate the adsorption capacity of these three commercial resins in the removal of Cr(VI) from aqueous solutions. The chromium solutions used in the experiments were synthetic solutions. The parameters that affect adsorption (solution pH, adsorbent concentration, contact time, and initial Cr(VI) concentration) were studied at room temperature. High initial adsorption rates of metal ions were observed for the three resins, and plateau values were gradually reached within 60 min. The optimum pH for Cr(VI) adsorption was found to be 3.0 for all three resins, and adsorption decreases with increasing pH for the three anion exchangers. The suitability of the Freundlich, Langmuir, and Scatchard models for the Cr(VI)-resin equilibrium was investigated. The results obtained in this study demonstrate good comparability between the three anion exchange resins and indicate that Eichrom 1-X4 is the most effective, showing the highest adsorption capacity for the removal of Cr(VI) ions. The anion exchange resins investigated in this study can be used for the efficient removal of chromium from water and wastewater.
Keywords: adsorption, anion exchange resin, chromium, kinetics
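As a worked illustration of the batch isotherm modelling mentioned above, the Python sketch below fits Langmuir and Freundlich models by nonlinear least squares and adds the usual linearized Scatchard step; the equilibrium data points are invented placeholders, not measurements from the study.

```python
# Hedged sketch: fitting Langmuir and Freundlich isotherms to batch
# equilibrium data. All numbers below are invented placeholders,
# not values from the study.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])  # equilibrium Cr(VI) conc., mg/L
qe = np.array([8.2, 14.1, 21.6, 28.9, 33.4])  # adsorbed amount, mg/g

def langmuir(c, qmax, kl):
    # qe = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * kl * c / (1.0 + kl * c)

def freundlich(c, kf, n):
    # qe = KF * Ce**(1/n)
    return kf * c ** (1.0 / n)

(qmax, kl), _ = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.05])
(kf, n), _ = curve_fit(freundlich, Ce, qe, p0=[5.0, 2.0])

# Scatchard analysis: qe/Ce plotted against qe should be linear if a
# single class of binding sites dominates.
slope, intercept = np.polyfit(qe, qe / Ce, 1)

print(f"Langmuir:   qmax = {qmax:.1f} mg/g, KL = {kl:.3f} L/mg")
print(f"Freundlich: KF = {kf:.2f}, 1/n = {1.0 / n:.2f}")
print(f"Scatchard:  slope = {slope:.4f}, intercept = {intercept:.3f}")
```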
Procedia PDF Downloads 260
298 Optical Assessment of Marginal Sealing Performance around Restorations Using Swept-Source Optical Coherence Tomography
Authors: Rima Zakzouk, Yasushi Shimada, Yasunori Sumi, Junji Tagami
Abstract:
Background and purpose: Resin composite has become the main material for the restoration of caries in recent years due to its aesthetic characteristics, especially with the development of adhesive techniques. The quality of adhesion to tooth structures depends on an exchange process between inorganic tooth material and synthetic resin, and on micromechanical retention promoted by resin infiltration into partially demineralized dentin. Optical coherence tomography (OCT) is a noninvasive diagnostic method for obtaining high-resolution cross-sectional images of biological tissue at the micron scale. The aim of this study was to use SS-OCT to evaluate gap formation at the adhesive/tooth interface of a two-step self-etch adhesive applied with or without phosphoric acid pre-etching in different regions of the teeth. Materials and methods: Round tapered cavities (2×2 mm) were prepared in the cervical part of bovine incisors and divided into two groups (n=10): in the SE group, the self-etch adhesive (Clearfil SE Bond) was applied directly; in the PA group, the cavities were treated with acid etching before the self-etch adhesive was applied. Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin and observed under OCT. Following 5000 thermal cycles, the same section was imaged again for each cavity using OCT at a 1310-nm wavelength. Scanning was repeated after two months to monitor gap progression. The gap length was then measured using image analysis software, and statistical analysis between the two groups was performed using SPSS software. After that, the cavities were sectioned and observed under a confocal laser scanning microscope (CLSM) to confirm the OCT results. Results: Gaps formed at the bottom of the cavity were longer than those formed at the margin and the dento-enamel junction (DEJ) in both groups. On the other hand, the pre-etching treatment damaged the DEJ regions, creating longer gaps. After two months, the gap length had increased significantly in the bottom regions of both groups. In conclusion, phosphoric acid etching did not reduce the gap length in most regions of the cavity. Significance: The bottom region of the tooth was more prone to gap formation than the margin and DEJ regions, and the DEJ was damaged by the phosphoric acid treatment.
Keywords: optical coherence tomography, self-etch adhesives, bottom, dento enamel junction
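The abstract does not name the image analysis software or its settings; the sketch below shows one plausible way to turn a segmented OCT B-scan into a gap length, assuming a binary gap mask and a lateral pixel pitch (both hypothetical).

```python
# Hedged sketch: estimating interfacial gap length from a segmented OCT
# B-scan. The segmentation and the pixel pitch are assumptions for
# illustration; the study's actual software and parameters are not named.
import numpy as np

PIXEL_PITCH_UM = 4.0  # assumed lateral pixel size in micrometers

def gap_length_um(gap_mask: np.ndarray) -> float:
    """gap_mask: 2D boolean array, True where a gap was segmented at the
    adhesive/tooth interface. Counts the columns that contain gap pixels
    and converts that lateral extent to micrometers."""
    columns_with_gap = np.any(gap_mask, axis=0)
    return float(columns_with_gap.sum()) * PIXEL_PITCH_UM

# Toy example: a 12-column gap in a 100 x 200 pixel B-scan.
mask = np.zeros((100, 200), dtype=bool)
mask[60:63, 90:102] = True
print(f"estimated gap length: {gap_length_um(mask):.0f} um")
```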
Procedia PDF Downloads 226
297 Measuring the Embodied Energy of Construction Materials and Their Associated Cost Through Building Information Modelling
Authors: Ahmad Odeh, Ahmad Jrade
Abstract:
Energy assessment is an evidently significant factor when evaluating the sustainability of structures, especially at the early design stage. Today's design practices revolve around the selection of materials that reduce operational energy while still meeting disciplinary needs. Operational energy represents a substantial part of the building's lifecycle energy usage, but the fact remains that embodied energy is an important aspect that goes unaccounted for in the carbon footprint. At the moment, little or no consideration is given to embodied energy, mainly due to the complexity of its calculation and the various factors involved. The equipment used, the fuel needed, and the electricity required for each material vary with location, and thus the embodied energy will differ for each project. Moreover, the methods and techniques used in manufacturing, transporting, and placing a material have a significant influence on its embodied energy. This anomaly has made it difficult to calculate or even benchmark the usage of such energies. This paper presents a model aimed at helping designers select construction materials based on their embodied energy. Moreover, it presents a systematic approach that uses an efficient method of calculation and ultimately provides new insight into construction material selection. The model is developed in a BIM environment and targets the quantification of the embodied energy of construction materials through the three main stages of their life: manufacturing, transportation, and placement. The model contains three major databases, each holding a set of the most commonly used construction materials. The first database holds information about the energy required to manufacture each type of material, the second includes the energy required for transporting the materials, and the third stores the energy required by the tools and cranes needed to place an item in its intended location. The model provides designers with sets of all available construction materials and their associated embodied energies to use for selection during the design process. Through geospatial data and dimensional material analysis, the model is also able to automatically calculate the distance between the factories and the construction site. To remain within the sustainability criteria set by LEED, a final database is created and used to calculate the overall construction cost based on RSMeans cost data, and then to automatically recalculate the costs for any modifications. Design criteria that include both operational and embodied energies will lead designers to re-evaluate the current material selection for cost, energy, and, most importantly, sustainability.
Keywords: building information modelling, energy, life cycle analysis, sustainability
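The three-stage aggregation described above reduces to a simple sum per material. The sketch below illustrates that logic under stated assumptions: the material record, energy coefficients, and haul distance are invented placeholders, not entries from the model's databases.

```python
# Hedged sketch of the three-stage embodied-energy aggregation described
# in the abstract. All coefficients and the haul distance are invented
# placeholders, not values from the model's databases.
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    mass_t: float                 # quantity taken off the BIM model, tonnes
    manufacture_mj_per_t: float   # stage 1: manufacturing energy intensity
    transport_mj_per_t_km: float  # stage 2: transport energy intensity
    placement_mj_per_t: float     # stage 3: cranes/tools on site

def embodied_energy_mj(m: Material, haul_km: float) -> float:
    manufacturing = m.mass_t * m.manufacture_mj_per_t
    transportation = m.mass_t * haul_km * m.transport_mj_per_t_km
    placement = m.mass_t * m.placement_mj_per_t
    return manufacturing + transportation + placement

# Hypothetical material with a 35 km factory-to-site haul distance
# (in the model this distance would come from geospatial data).
concrete = Material("ready-mix concrete", 120.0, 700.0, 1.2, 15.0)
print(f"{concrete.name}: {embodied_energy_mj(concrete, haul_km=35.0):,.0f} MJ")
```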
Procedia PDF Downloads 269
296 On the Semantics and Pragmatics of 'Be Able To': Modality and Actualisation
Authors: Benoît Leclercq, Ilse Depraetere
Abstract:
The goal of this presentation is to shed new light on the semantics and pragmatics of be able to. It presents the results of a corpus analysis based on data from the BNC (British National Corpus) and discusses these results in light of a specific stance on the semantics-pragmatics interface that takes recent developments into account. Be able to is often discussed in relation to can and could, all of which can be used to express ability. Such an onomasiological approach often results in the identification of usage constraints for each expression. In the case of be able to, it is the formal properties of the modal expression that are in the foreground (unlike can and could, be able to has non-finite forms), and the modal expression is described as the verb that conveys future ability. Be able to is also argued to express actualised ability in the past (I was able to/could open the door). This presentation aims to provide a more accurate pragmatic-semantic profile of be able to, one based on extensive data analysis and embedded in a very explicit view of the semantics-pragmatics interface. A random sample of 3000 examples (1000 for each modal verb) extracted from the BNC was analysed to address the following issues. First, the challenge is to identify the exact semantic range of be able to. The results show that, contrary to general assumption, be able to does not only express ability; it shares most of the root meanings usually associated with the possibility modals can and could. The data reveal that what is called opportunity is, in fact, the most frequent meaning of be able to. Second, attention is given to the notion of actualisation. It is commonly argued that be able to is the preferred form when the residue actualises: (1) The only reason he was able to do that was because of the restriction (BNC, spoken); (2) It is only through my imaginative shuffling of the aces that we are able to stay ahead of the pack (BNC, written). Although this notion has been studied in detail within formal semantic approaches, empirical data is crucially lacking, and it is unclear whether actualisation constitutes a conventional (and distinguishing) property of be able to. The empirical analysis provides solid evidence that actualisation is indeed a conventional feature of the modal. Furthermore, the dataset reveals that be able to expresses actualised 'opportunities' and not actualised 'abilities'. In the final part of this paper, attention is given to the theoretical implications of the empirical findings, and in particular to the following paradox: how can the same expression encode both modal meaning (non-factual) and actualisation (factual)? It is argued that this largely depends on one's conception of the semantics-pragmatics interface, and that this need not be an issue when actualisation (unlike modality) is analysed as a generalised conversational implicature and is thus considered part of the conventional pragmatic layer of be able to.
Keywords: actualisation, modality, pragmatics, semantics
Procedia PDF Downloads 131
295 Biopolymers: A Solution for Replacing Polyethylene in Food Packaging
Authors: Sonia Amariei, Ionut Avramia, Florin Ursachi, Ancuta Chetrariu, Ancuta Petraru
Abstract:
The food industry is one of the major generators of plastic waste, derived from conventional synthetic petroleum-based polymers that are non-biodegradable and used especially for packaging. Once the food is consumed, these packaging materials raise serious environmental concerns due both to the materials themselves and to the organic residues that adhere to them. Specialists and researchers are therefore concerned with eliminating the problems related to non-biodegradable conventional materials and unnecessary plastic, and with replacing them with biodegradable and edible materials, supporting the common effort to protect the environment. Even though environmental and health concerns will cause more consumers to switch to a plant-based diet, most people will continue to include meat in their diet. The paper presents the possibility of replacing the polyethylene packaging on the surface of trays for meat preparations with biodegradable packaging obtained from biopolymers. During the storage of meat products, deterioration may occur through lipid oxidation and microbial spoilage, as well as through modification of the organoleptic characteristics. For this reason, different compositions of polymer mixtures and the conditions for obtaining films must be studied in order to choose the packaging material that best ensures food safety. The compositions proposed for packaging are obtained from alginate, agar, and starch, with glycerol as plasticizer. The tensile strength, elasticity, modulus of elasticity, thickness, density, microscopic images of the samples, roughness, opacity, humidity, water activity, and the amount and rate of water transferred through these packaging materials were analyzed. Twenty-eight samples with various compositions were analyzed, and the results showed that the sample with the highest values for hardness, density, and opacity, as well as the smallest water vapor permeability of 1.2903E-4 ± 4.79E-6, has a component ratio of alginate:agar:glycerol of 3:1.25:0.75. The water activity of the analyzed films varied between 0.2886 and 0.3428 (aw < 0.6), demonstrating that all the compositions ensure the preservation of the products against microbial growth. All the determined parameters allow an assessment of the quality of the packaging films in terms of mechanical resistance, protection against the influence of light, and the transfer of water through the packaging. Acknowledgments: This work was supported by a grant of the Ministry of Research, Innovation, and Digitization, CNCS/CCCDI – UEFISCDI, project number PN-III-P2-2.1-PED-2019-3863, within PNCDI III.
Keywords: meat products, alginate, agar, starch, glycerol
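The abstract reports a water vapor permeability value without stating how it was computed; for context, the standard gravimetric relations are given below. These definitions are an assumption here, since the abstract does not specify the method or the units of the reported value.

```latex
% Standard gravimetric water-vapor-permeability relations (assumed; the
% abstract does not state how WVP was computed or in which units).
\[
\mathrm{WVTR} = \frac{\Delta m}{A\,\Delta t}, \qquad
\mathrm{WVP} = \frac{\mathrm{WVTR}\cdot x}{\Delta p}
\]
% \Delta m/\Delta t: steady-state mass change of the permeation cell,
% A: exposed film area, x: film thickness,
% \Delta p: water-vapor partial-pressure difference across the film.
```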
Procedia PDF Downloads 167
294 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School
Authors: Martín Pratto Burgos
Abstract:
The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to help students prepare for the math courses that are essential for engineering degrees, namely Math1, Math2, and Math3 in this research. The research proposes to build a model that can accurately predict a student's activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the effect of the Introductory Mathematical Course on the approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling using the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created in the R programming language for data that follow a binomial distribution. Model 1 retains the variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables and obtain the lowest AIC score. After using Principal Component Analysis, the main components represented on the y-axis are the approval of the Introductory Mathematical Course, and on the x-axis the approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered student activity, performed best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, a student's engagement in school activities will continue for three years after approval of the Introductory Mathematical Course, because they have successfully completed the Math1 and Math2 courses; passing the Math3 course does not have any effect on the student's activity. Concerning academic progress, the best fit is Model 1, with an AUC of 0.56 and an accuracy of 91%. The model says that if students pass the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect the student's activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. The model shows that if students pass the Introductory Mathematical Course, it will help them pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models: if students pass the Math1 and Math2 courses, they will stay active for three years after taking the Introductory Mathematical Course and will continue to follow the recommended engineering curriculum; additionally, the Introductory Mathematical Course helps students pass Math1 and Math2 when they start engineering school. The models obtained in the research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
Keywords: machine-learning, engineering, university, education, computational models
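The abstract describes fitting binomial GLMs in R and comparing them by AUC and accuracy. The sketch below reproduces the general shape of that workflow in Python rather than R, on synthetic data with hypothetical column names; it is not the study's actual code or dataset.

```python
# Hedged sketch of a binomial-GLM workflow of the kind described (the
# study used R with stepAIC). All data and column names are synthetic
# placeholders, not the 5135-student dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "intro_passed": rng.integers(0, 2, n),  # Introductory Mathematical Course
    "math1_passed": rng.integers(0, 2, n),
    "math2_passed": rng.integers(0, 2, n),
})
# Toy binary response: still active three years later.
logit = -1.0 + 1.5 * df["math1_passed"] + 1.2 * df["math2_passed"]
df["active_3y"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(df[["intro_passed", "math1_passed", "math2_passed"]].astype(float))
model = sm.GLM(df["active_3y"], X, family=sm.families.Binomial()).fit()

print(model.summary())  # Model-1-style selection: drop predictors with p >= 0.05
print("AUC:", round(roc_auc_score(df["active_3y"], model.predict(X)), 3))
```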
Procedia PDF Downloads 94
293 Ethnic Xenophobia as Symbolic Politics: An Explanation of Anti-Migrant Activity from Brussels to Beirut
Authors: Annamarie Rannou, Horace Bartilow
Abstract:
Global concerns about xenophobic activity are on the rise across developed and developing countries. And yet, social science scholarship has almost exclusively examined xenophobia as a prejudice of advanced Western nations. This research argues that the fields of study related to xenophobia must be re-conceptualized within a framework of ethnicity in order to level the playing field for cross-regional inquiry. This study develops a new concept of ethnic xenophobia and integrates existing explanations of anti-migrant expression into theories of ethnic threat. We argue specifically that political elites convert economic, political, and social threats at the national level into ethnic xenophobic activity in order to gain or maintain political advantage among their native selectorate. We expand on Stuart Kaufman's theory of symbolic politics to underscore the methods of mobilization used against migrants and the power of elite discourse in moments of national crisis. An original dataset is used to examine over 35,000 cases of ethnic xenophobic activity targeting refugees. Wordscores software is used to develop a unique measure of anti-migrant elite rhetoric, which captures the symbolic discourse of elites in their mobilization of ethnic xenophobic activism. We use a Structural Equation Model (SEM) to test the causal pathways of the theory across seventy-two developed and developing countries from 1990 to 2016. A Most Different Systems Design (MDSD) framework is also applied to two pairs of developed-developing country cases: Kenya and the Netherlands, and Lebanon and the United States. This study sheds tremendous light on an underrepresented area of comparative research in migration studies. It shows that the causal elements of anti-migrant activity are far more similar across regions than existing research suggests, which has major implications for policy makers, practitioners, and academics in the fields of migration protection and advocacy. It speaks directly to the mobilization of myths surrounding refugees, in particular, and the nationalization of narratives of migration that may be neutralized by the development of deeper associational relationships between natives and migrants.
Keywords: refugees, ethnicity, symbolic politics, elites, migration, comparative politics
Procedia PDF Downloads 145
292 Investigating the Effect of Plant Root Exudates and of Saponin on Polycyclic Aromatic Hydrocarbons Solubilization in Brownfield Contaminated Soils
Authors: Marie Davin, Marie-Laure Fauconnier, Gilles Colinet
Abstract:
In Wallonia, an estimated 6,000 brownfields (rising to over 3.5 million across Europe) require remediation. Polycyclic Aromatic Hydrocarbons (PAHs) are a class of recalcitrant carcinogenic/mutagenic organic compounds of major concern, as they accumulate in the environment and represent 17% of all encountered pollutants. As an alternative to environmentally aggressive, expensive, and often disruptive soil remediation strategies, a lot of research has been directed at developing techniques targeting organic pollutants. The following experiment, based on the observation that the PAH content of soil decreases in the presence of plants, aimed at improving our understanding of the underlying mechanisms involved in phytoremediation. It focuses on plant root exudates and whether they improve PAH solubilization, which would make PAHs more available for bioremediation by soil microorganisms. The effect of saponin, a natural surfactant found in the roots of some plants such as members of the Fabaceae family, on PAH solubilization was also investigated as part of the implementation of the experimental protocol. The experiments were conducted on soil collected from a brownfield in Saint-Ghislain (Belgium) presenting weathered PAH contamination. Samples of soil were extracted with different solutions containing either plant root exudates or commercial saponin. Extracted PAHs were determined in the different aqueous solutions using High-Performance Liquid Chromatography with Fluorimetric Detection (HPLC-FLD). Both root exudates of alfalfa (Medicago sativa L.) or red clover (Trifolium pratense L.) and commercial saponin were tested at different concentrations. Distilled water was used as a control. First of all, the results show that more PAHs are extracted with saponin solutions than with distilled water, and that the amounts generally rise with the saponin concentration. However, the amount of each extracted compound diminishes as its molecular weight rises. It also appears that, past a certain surfactant concentration, PAHs are less extracted. This suggests that saponin might be investigated as a washing agent in polluted soil remediation techniques, either for ex-situ or in-situ treatments, as an alternative to synthetic surfactants. On the other hand, preliminary results from experiments using plant root exudates also show differences in PAH solubilization compared to the control solution. Further results will allow discussion as to whether or not there are differences according to the exudates' provenance and concentration.
Keywords: brownfield, Medicago sativa, phytoremediation, polycyclic aromatic hydrocarbons, root exudates, saponin, solubilization, Trifolium pratense
Procedia PDF Downloads 253
291 Diselenide-Linked Redox Stimuli-Responsive Methoxy Poly(Ethylene Glycol)-b-Poly(Lactide-Co-Glycolide) Micelles for the Delivery of Doxorubicin in Cancer Cells
Authors: Yihenew Simegniew Birhan, Hsieh Chih Tsai
Abstract:
The recent advancements in synthetic chemistry and nanotechnology have fostered the development of different nanocarriers for the enhanced intracellular delivery of pharmaceutical agents to tumor cells. Polymeric micelles (PMs) are characterized by small size, appreciable drug loading content (DLC), better accumulation in tumor tissue via the enhanced permeability and retention (EPR) effect, and the ability to avoid detection and subsequent clearance by the mononuclear phagocyte (MNP) system. They are therefore convenient for improving the poor solubility, slow absorption, and non-selective biodistribution of payloads embedded in their hydrophobic cores and, hence, for enhancing the therapeutic efficacy of chemotherapeutic agents. Recently, redox-responsive polymeric micelles have gained significant attention for the delivery and controlled release of anticancer drugs in tumor cells. In this study, we synthesized a redox-responsive, diselenide-bond-containing amphiphilic polymer, Bi(mPEG-PLGA)-Se₂, from mPEG-PLGA and 3,3'-diselanediyldipropanoic acid (DSeDPA), using DCC/DMAP as coupling agents. The successful synthesis of the copolymer was verified by different spectroscopic techniques. Above the critical micelle concentration, the amphiphilic copolymer Bi(mPEG-PLGA)-Se₂ self-assembled into stable micelles. The DLS data indicated that the hydrodynamic diameter of the micelles (123.9 ± 0.85 nm) was suitable for extravasation into tumor cells through the EPR effect. The drug loading content (DLC) and encapsulation efficiency (EE) of the DOX-loaded micelles were found to be 6.61 wt% and 54.9%, respectively. The DOX-loaded micelles showed an initial burst release followed by a sustained release trend, in which 73.94% and 69.54% of the encapsulated DOX was released upon treatment with 6 mM GSH and 0.1% H₂O₂, respectively. The biocompatible nature of the Bi(mPEG-PLGA)-Se₂ copolymer was confirmed by a cell viability study. In addition, the DOX-loaded micelles exhibited significant inhibition of HeLa cells (44.46%) at a maximum dose of 7.5 µg/mL. Fluorescence microscope images of HeLa cells treated with 3 µg/mL (equivalent DOX concentration) revealed efficient internalization and accumulation of the DOX-loaded Bi(mPEG-PLGA)-Se₂ micelles in the cytosol of the cancer cells. In conclusion, the intelligent, biocompatible, redox-stimuli-responsive behavior of the Bi(mPEG-PLGA)-Se₂ copolymer marks the potential of diselenide-linked mPEG-PLGA micelles for the delivery and on-demand release of chemotherapeutic agents in cancer cells.
Keywords: anticancer drug delivery, diselenide bond, polymeric micelles, redox-responsive
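For reference, the DLC and EE values quoted above are conventionally computed from the drug and carrier masses. The definitions below are the standard ones and are assumed here, since the abstract reports the values without spelling out the formulas.

```latex
% Conventional definitions of drug loading content (DLC) and encapsulation
% efficiency (EE); assumed, as the abstract reports 6.61 wt% and 54.9%
% without stating the formulas used.
\[
\mathrm{DLC}\ (\%) = \frac{m_{\text{drug in micelles}}}{m_{\text{drug-loaded micelles}}} \times 100,
\qquad
\mathrm{EE}\ (\%) = \frac{m_{\text{drug in micelles}}}{m_{\text{drug fed}}} \times 100
\]
```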
Procedia PDF Downloads 110