Search results for: initial contact
514 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method
Authors: Mai Abdul Latif, Yuntian Feng
Abstract:
The Applied Element Method (AEM) was developed to aid in the analysis of structural collapse. Currently available methods cannot deal with structural collapse accurately; AEM, however, can simulate the behavior of a structure from an initial unloaded state until collapse. The elements in AEM are connected by sets of normal and shear springs along the edges of the elements that represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis of progressive collapse has been widely modelled using the finite element method; however, difficulties arise in the presence of excessively deformed elements with cracking or crushing, as well as a high computational cost and difficulty in choosing appropriate material models. In this work, the Applied Element Method is developed and coded to significantly improve accuracy and reduce computational cost. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method uses Gaussian quadrature to distribute the springs. Usually, the springs are equally distributed along the face of the element, but it was found that with Gaussian springs only 2 springs were required for perfectly elastic cases, while with equally spaced springs at least 5 were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification adapts the number of springs depending on the elasticity of the material.
After the first Newton-Raphson iteration, the von Mises stress condition is used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Transition springs, located exactly between the elastic and plastic regions, are then interpolated between regions to strictly identify the elastic and plastic regions in the cross section. Since a rectangular cross section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region. This improves the computational cost, reducing the minimum number of springs in elasto-plastic cases to only 6. All the work is done in MATLAB, and the results will be compared to finite element models of the structural elements in ANSYS.
Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear
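The Gaussian spring placement described above can be sketched in a few lines. This is an illustrative reconstruction using Gauss-Legendre quadrature, not the paper's actual MATLAB code, and the face length used is an arbitrary example value:

```python
import numpy as np

def gaussian_springs(n_springs, face_length):
    """Place springs along an element face at Gauss-Legendre points.

    Returns spring positions (measured from the face midpoint) and the
    tributary lengths (quadrature weights scaled to the face) that each
    spring's stiffness would represent. Illustrative sketch only; the
    paper's actual AEM implementation may differ.
    """
    # Gauss-Legendre nodes and weights on the reference interval [-1, 1]
    nodes, weights = np.polynomial.legendre.leggauss(n_springs)
    positions = nodes * face_length / 2.0    # map nodes to [-L/2, L/2]
    tributary = weights * face_length / 2.0  # weights sum to the face length
    return positions, tributary

positions, tributary = gaussian_springs(2, face_length=0.1)
# With 2 Gaussian springs, the tributary lengths cover the whole face exactly.
print(round(tributary.sum(), 12))
```

Because the two Gauss points integrate cubics exactly, two Gaussian springs reproduce the linear strain distribution of pure bending, which is consistent with the abstract's observation that only 2 springs suffice in the perfectly elastic case.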
Procedia PDF Downloads 225
513 The Istrian Istrovenetian-Croatian Bilingual Corpus
Authors: Nada Poropat Jeletic, Gordana Hrzica
Abstract:
Bilingual conversational corpora represent a meaningful and comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They are particularly useful for bilingual research since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g., code-switching, amount of languages used, number of languages used, etc.). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Bilingual spoken corpus design is methodologically demanding. Therefore, this paper aims at describing the methodological challenges that apply to the design of the Istrian Istrovenetian-Croatian Bilingual Corpus, a conversational corpus. Croatian is the first official language of the Croatian-Italian officially bilingual Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca in the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, its public availability allows for easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within the field of corpus linguistics enable easy retrieval and analysis of information. The method of language sampling employed is kept at the level of spontaneous communication, in order to maximise the naturalness of the collected conversational data.
All speakers have provided written informed consent in which they agree to be recorded at a random point within the period of one month after signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and the language exposure and usage in the participants' social networks. Recordings are being transcribed, phonologically adapted within a standard-sized orthographic form, coded and segmented (speech streams are being segmented into communication units based on syntactic criteria) and are being marked following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus consists of transcribed sound recordings of 36 bilingual speakers, while the target is to publish the whole corpus by the end of 2020, by sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria to ensure representativeness (the participants are being recruited across three generations of native bilingual speakers in all the bilingual areas of the peninsula). Conversational corpora are still rare in TalkBank, so the corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community. The research on communities with societal bilingualism will contribute to the growing body of research on bilingualism and multilingualism, especially regarding topics of language dominance, language attrition and loss, interference, code-switching, etc.
Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology
Procedia PDF Downloads 145
512 Exploration of in-situ Product Extraction to Increase Triterpenoid Production in Saccharomyces Cerevisiae
Authors: Mariam Dianat Sabet Gilani, Lars M. Blank, Birgitta E. Ebert
Abstract:
Plant-derived lupane-type pentacyclic triterpenoids are biologically active compounds that are highly interesting for applications in the medical, pharmaceutical, and cosmetic industries. Due to the low abundance of these valuable compounds in their natural sources and the environmentally harmful downstream process, alternative production methods, such as microbial cell factories, are investigated. Engineered Saccharomyces cerevisiae strains harboring the heterologous genes for betulinic acid synthesis can produce up to 2 g L⁻¹ of triterpenoids, showing high potential for large-scale triterpenoid production. One limitation of the microbial synthesis is the intracellular product accumulation. It not only makes cell disruption a necessary step in the downstream processing but also limits productivity and product yield per cell. To overcome these restrictions, the aim of this study is to develop an in-situ extraction method that extracts triterpenoids into a second, organic phase. Such continuous or sequential product removal from the biomass keeps the cells in an active state and enables extended production time or biomass recycling. After screening twelve solvents, selected based on product solubility, biocompatibility, and environmental and health impact, isopropyl myristate (IPM) was chosen as a suitable solvent for in-situ product removal from S. cerevisiae. Impedance-based single-cell analysis and off-gas measurement of carbon dioxide emission showed that cell viability and physiology were not affected by the presence of IPM. Initial experiments demonstrated that after the addition of 20 vol % IPM to cultures in the stationary phase, 40 % of the total produced triterpenoids were extracted from the cells into the organic phase.
In future experiments, the application of IPM in a repeated-batch process will be tested, where IPM is added at the end of each batch run to remove triterpenoids from the cells, allowing the same biocatalysts to be used in several sequential batch steps. Due to its high biocompatibility, the amount of IPM added to the culture can also be increased beyond 20 vol % to extract more than 40 % of the triterpenoids into the organic phase, allowing the cells to produce more triterpenoids. This highlights the potential for the development of a continuous large-scale process, which allows biocatalysts to produce intracellular products continuously without the necessity of cell disruption and without limitation of the cell capacity.
Keywords: betulinic acid, biocompatible solvent, in-situ extraction, isopropyl myristate, process development, secondary metabolites, triterpenoids, yeast
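The relation between solvent fraction and product recovery, e.g. 20 vol % solvent removing 40 % of the product, can be illustrated with a simple two-phase partition balance. The partition coefficient below is back-calculated for illustration only and is not a measured property of IPM or of triterpenoids:

```python
def extracted_fraction(partition_coeff, v_org, v_aq):
    """Fraction of product in the organic phase at equilibrium for a solute
    partitioning between an organic solvent and the aqueous culture phase.
    Simple two-phase mass balance; K and the volumes are illustrative."""
    return (partition_coeff * v_org) / (partition_coeff * v_org + v_aq)

# With 20 vol % solvent (v_org/v_aq = 0.25), recovering 40 % of the product
# corresponds to an effective partition coefficient of about 2.67:
k_eff = 0.4 / (1 - 0.4) / 0.25   # rearranged from the mass balance
print(round(extracted_fraction(k_eff, 0.25, 1.0), 2))  # → 0.4
```

The same balance shows why increasing the solvent fraction above 20 vol %, as proposed, raises the extracted fraction for any fixed effective partition coefficient. Note this ignores the intracellular location of the product, which the abstract identifies as the real transport barrier.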
Procedia PDF Downloads 153
511 Antimicrobial and Antibiofilm Properties of Fatty Acids Against Streptococcus Mutans
Authors: A. Mulry, C. Kealey, D. B. Brady
Abstract:
Planktonic bacteria can form biofilms, microbial aggregates embedded within a matrix of extracellular polymeric substances (EPS), attached to abiotic or biotic surfaces. Biofilms are responsible for oral diseases such as dental caries and gingivitis and for the progression of periodontal disease. Biofilms can resist 500 to 1000 times the concentration of biocides and antibiotics used to kill planktonic bacteria. Biofilm development on oral surfaces involves four stages: initial attachment, early development, maturation and dispersal of planktonic cells. The Minimum Inhibitory Concentration (MIC) was determined for a range of saturated and unsaturated fatty acids using the resazurin assay, followed by serial dilution and spot plating on BHI agar plates to establish the Minimum Bactericidal Concentration (MBC). Log reduction of bacteria was also evaluated for each fatty acid. The Minimum Biofilm Inhibition Concentration (MBIC) was determined using the crystal violet assay in 96-well plates on forming and pre-formed S. mutans biofilms using BHI supplemented with 1% sucrose. The saturated medium-chain fatty acids octanoic (C8:0), decanoic (C10:0) and undecanoic acid (C11:0) do not display strong antibiofilm properties; however, lauric (C12:0) and myristic acid (C14:0) display moderate antibiofilm properties, with 97.83% and 97.5% biofilm inhibition at 1000 µM, respectively. Monounsaturated oleic acid (C18:1) and polyunsaturated linoleic acid (C18:2) display potent antibiofilm properties, with biofilm inhibition of 99.73% at 125 µM and 100% at 65.5 µM, respectively. The long-chain polyunsaturated omega-3 fatty acids α-linolenic acid (C18:3), eicosapentaenoic acid (EPA, C20:5) and docosahexaenoic acid (DHA, C22:6) displayed strong antibiofilm efficacy at concentrations ranging from 31.25 to 250 µg/ml. DHA is the most promising antibiofilm agent, with 99.73% biofilm inhibition at 15.625 µg/ml.
This may be due to the presence of six double bonds and the structural orientation of the fatty acid. To conclude, the fatty acids displaying the most antimicrobial activity appear to be medium- or long-chain unsaturated fatty acids containing one or more double bonds. The most promising agents include the omega-3 fatty acids α-linolenic acid, EPA and DHA, the omega-6 fatty acid linoleic acid, and the omega-9 fatty acid oleic acid. These results indicate that fatty acids have the potential to be used as antimicrobial and antibiofilm agents against S. mutans. Future work involves further screening of the most potent fatty acids against a range of bacteria, including Gram-positive and Gram-negative oral pathogens, and incorporating the most effective fatty acids onto dental implant devices to prevent biofilm formation.
Keywords: antibiofilm, biofilm, fatty acids, S. mutans
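The percent-inhibition figures reported above come from comparing crystal violet absorbance of treated and untreated biofilms. A minimal sketch of that standard calculation, with made-up optical density readings rather than the study's data:

```python
def percent_inhibition(od_treated, od_control, od_blank=0.0):
    """Percent biofilm inhibition from crystal violet absorbance (OD) readings:
    100 * (1 - blank-corrected treated OD / blank-corrected control OD).
    The OD values used below are illustrative, not the study's measurements."""
    return 100.0 * (1.0 - (od_treated - od_blank) / (od_control - od_blank))

# A treated well barely above the blank versus a heavily stained control well
print(round(percent_inhibition(od_treated=0.052, od_control=0.480, od_blank=0.050), 2))
```

Subtracting the blank (stain retained by an uninoculated well) before taking the ratio is what keeps near-complete inhibition values such as 99.73% from being distorted by background staining.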
Procedia PDF Downloads 160
510 Fostering Non-Traditional Student Success in an Online Music Appreciation Course
Authors: Linda Fellag, Arlene Caney
Abstract:
E-learning has earned an essential place in academia because it promotes learner autonomy, student engagement, and technological aptitude, and allows for flexible learning. Despite these advantages, however, educators have been slower to embrace e-learning for ESL and other non-traditional students for fear that such students will not succeed without the direct faculty contact and academic support of face-to-face classrooms. This study aims to determine whether an online course designed for non-traditional students can produce retention and performance rates that compare favorably with those of students in standard online sections of the same course aimed at traditional college-level students. One Music faculty member is currently collaborating with an English instructor to redesign an online college-level Music Appreciation course for non-traditional college students. At Community College of Philadelphia, Introduction to Music Appreciation was recently designated as one of the few college-level courses that advanced ESL and developmental English students can take while completing their language studies. Beginning in Fall 2017, the course will be critical for international students who must maintain full-time student status under visa requirements. In its current online format, however, Music Appreciation is designed for traditional college students, and faculty who teach these sections have been reluctant to revise the course to address the needs of non-traditional students. Interestingly, the presenters maintain that the online platform is the ideal place to develop language and college-readiness skills in at-risk students while maintaining the course's curricular integrity. The two faculty presenters describe how curriculum rather than technology drives the redesign of the digitized music course, and how self-study materials, guided assignments, and periodic assessments promote independent learning and comprehension of the material.
The 'scaffolded' modules allow ESL and developmental English students to build on prior knowledge, preview key vocabulary, discuss content, and complete graded tasks that demonstrate comprehension. Activities and assignments, in turn, enhance college success by allowing students to practice academic reading strategies, writing, speaking, and student-faculty and peer-peer communication and collaboration. The course components facilitate a comparison of student performance and retention in the redesigned and existing online sections of Music Appreciation, as well as in previous sections with at-risk students. Indirect, qualitative measures include student attitudinal surveys and evaluations. Direct, quantitative measures include withdrawal rates, tests of disciplinary knowledge, and final grades. The study will compare the outcomes of three cohorts in the two versions of the online course: ESL students, at-risk developmental students, and college-level students. These data will also be compared with retention and outcomes data for the three cohorts in f2f Music Appreciation, which permitted non-traditional student enrollment from 1998 to 2005. During this eight-year period, the presenter addressed the problems of at-risk students by adding language and college-success support, which resulted in strong retention and outcomes. The presenters contend that the redesigned course will produce favorable outcomes among all three cohorts because it contains components that proved successful with at-risk learners in f2f sections of the course. Results of the study will be published in 2019, after the redesigned online course has run for two semesters.
Keywords: college readiness, e-learning, music appreciation, online courses
Procedia PDF Downloads 176
509 Using Chatbots to Create Situational Content for Coursework
Authors: B. Bricklin Zeff
Abstract:
This research explores the development and application of a specialized chatbot tailored for a nursing English course, with the primary objective of augmenting student engagement through situational content and responsiveness to key expressions and vocabulary. The paper first introduces the chatbot, its purpose, and its functionality, establishing the context for the subsequent evaluation of its impact on student engagement and language learning within nursing education. The exploration of the language model development process underscores the fusion of scientific methodology and artistic considerations in this application of artificial intelligence (AI). Practical principles extending beyond AI and education are considered, tailored for educators and curriculum developers in nursing, and insights into leveraging technology for enhanced language learning in specialized fields are addressed, with potential applications of similar chatbots in other professional English courses. The overarching vision is to illuminate how AI can transform language learning, rendering it more interactive and contextually relevant. The presented chatbot is a tangible example, equipping educators with a practical tool to enhance their teaching practices. Methodologies employed in this research encompass surveys and discussions to gather feedback on the chatbot's usability, effectiveness, and potential improvements. The chatbot system was integrated into a nursing English course, facilitating the collection of valuable feedback from participants.
Significant findings from the study underscore the chatbot's effectiveness in encouraging more verbal practice of the target expressions and vocabulary necessary for performance in role-play assessments. This outcome emphasizes the practical implications of integrating AI into language education in specialized fields. The research holds significance for educators and curriculum developers in the nursing field, offering insights into integrating technology for enhanced English language learning, and its major findings contribute valuable perspectives on the chatbot's practical impact on student interaction and verbal practice. Ultimately, the research sheds light on the transformative potential of AI in making language learning more interactive and contextually relevant, particularly within specialized domains like nursing.
Keywords: chatbot, nursing, pragmatics, role-play, AI
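A chatbot responsive to key expressions and vocabulary, as described above, can be approximated at its simplest by keyword-rule matching. This sketch uses hypothetical nursing-English phrases and is far simpler than the language-model-based system the study describes; it only illustrates the idea of triggering situational feedback on target vocabulary:

```python
# Hypothetical target-expression rules: all keywords in a tuple must appear
# in the learner's utterance for that response to fire.
RESPONSES = {
    ("pain", "scale"): "Good use of the pain scale! Now ask the patient to rate it from 0 to 10.",
    ("allerg",): "Right, always confirm allergies. Try: 'Do you have any allergies to medication?'",
}

def reply(utterance):
    """Return situational feedback when target expressions are detected,
    otherwise prompt the learner to retry with a target expression."""
    text = utterance.lower()
    for keywords, response in RESPONSES.items():
        if all(k in text for k in keywords):
            return response
    return "Could you try that again using one of the target expressions?"

print(reply("Can you rate your pain on a scale?"))  # matches the ('pain', 'scale') rule
```

In practice a rule table like this would only serve as a fallback layer; the generative model handles open-ended situational dialogue.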
Procedia PDF Downloads 65
508 Differentially Expressed Genes in Atopic Dermatitis: Bioinformatics Analysis of Pooled Microarray Gene Expression Datasets in Gene Expression Omnibus
Abstract:
Background: Atopic dermatitis (AD) is a chronic and refractory inflammatory skin disease characterized by relapsing eczematous and pruritic skin lesions. The global prevalence of AD ranges from 1% to 20%, and its incidence rates are increasing. It affects individuals from infancy to adulthood, significantly impacting their daily lives and social activities. Despite its major health burden, the precise mechanisms underlying AD remain unknown. Understanding the genetic differences associated with AD is crucial for advancing diagnosis and targeted treatment development. This study aims to identify candidate genes of AD by using bioinformatics analysis. Methods: We conducted a comprehensive analysis of four pooled transcriptomic datasets (GSE16161, GSE32924, GSE130588, and GSE120721) obtained from the Gene Expression Omnibus (GEO) database. Differential gene expression analysis was performed using the R statistical language. The differentially expressed genes (DEGs) between AD patients and normal individuals were functionally analyzed using Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment. Furthermore, a protein-protein interaction (PPI) network was constructed to identify candidate genes. Results: Across the patient-level gene expression datasets, we identified 114 shared DEGs, consisting of 53 upregulated genes and 61 downregulated genes. Functional analysis using GO and KEGG revealed that the DEGs were mainly associated with the negative regulation of transcription from the RNA polymerase II promoter, membrane-related functions, protein binding, and the human papillomavirus infection pathway. Through the PPI network analysis, we identified core hub genes, including CD44, STAT1, HMMR, AURKA, MKI67, and SMARCA4. Conclusion: This study elucidates key genes associated with AD, providing potential targets for diagnosis and treatment. The identified genes have the potential to contribute to the understanding and management of AD.
The bioinformatics analysis conducted in this study offers new insights and directions for further research on AD. Future studies can focus on validating the functional roles of these genes and exploring their therapeutic potential in AD. While these findings require further verification through experiments involving in vivo and in vitro models, the results provide some initial insights into the dysfunctional inflammatory and immune responses associated with AD. Such information offers the potential to develop novel therapeutic targets for use in preventing and treating AD.
Keywords: atopic dermatitis, bioinformatics, biomarkers, genes
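The step of finding DEGs shared across the four GEO datasets reduces to a set intersection once each dataset's DEG list has been computed. A minimal sketch with hypothetical per-dataset gene lists (the study's actual analysis was done in R on the full expression matrices):

```python
# Hypothetical per-dataset DEG lists standing in for the real results of
# differential expression analysis on each GEO series.
deg_sets = {
    "GSE16161": {"CD44", "STAT1", "HMMR", "FLG"},
    "GSE32924": {"CD44", "STAT1", "AURKA", "LOR"},
    "GSE130588": {"CD44", "STAT1", "MKI67", "HMMR"},
    "GSE120721": {"CD44", "STAT1", "SMARCA4", "HMMR"},
}

# Shared DEGs: genes differentially expressed in every dataset.
shared = set.intersection(*deg_sets.values())
print(sorted(shared))
```

Requiring a gene to appear in all four datasets, as here, is the strictest pooling rule; relaxed variants (e.g. present in at least three of four) trade robustness for sensitivity.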
Procedia PDF Downloads 82
507 Absorptive Capabilities in the Development of Biopharmaceutical Industry: The Case of Bioprocess Development and Research Unit, National Polytechnic Institute
Authors: Ana L. Sánchez Regla, Igor A. Rivera González, María del Pilar Monserrat Pérez Hernández
Abstract:
The ability of an organization to identify and obtain useful information from external sources, assimilate it, transform it and apply it to generate products or services with added value is called absorptive capacity. Absorptive capabilities open market opportunities to firms and help them attain a leading position with respect to their competitors. The Bioprocess Development and Research Unit (UDIBI) is a research and development (R&D) laboratory that belongs to the National Polytechnic Institute (IPN), a higher education institution in Mexico. The UDIBI was created to carry out R&D activities for Transferon®, a biopharmaceutical product developed and patented by IPN. The evolution of its competences and of its scientific and technological platform led UDIBI to expand its scope by providing technological services (preclinical studies and bio-compatibility evaluation) to the national pharmaceutical and biopharmaceutical industries. The relevance of this study is that those industries are classified as being of high scientific and technological intensity, and yet, after a review of the state of the art, there is only one study of absorptive capabilities in the biopharmaceutical industry with a scope similar to this research; in the case of Mexico, there is none. In addition, UDIBI belongs to a public university, and its operation does not depend on the federal budget but on the income generated by its external technological services. This fact represents a highly remarkable case in the context of Mexico's public higher education. This doctoral research (2015-2019) is framed as a case study; its main objective is to identify and analyze the absorptive capabilities that characterize the UDIBI and that have allowed it to become one of only two third-party laboratories authorized by the sanitary authority in Mexico to conduct bio-comparability studies of biopharmaceutical products. The fieldwork is divided into two phases.
In the first phase, 15 interviews were conducted with UDIBI personnel, covering management levels, heads of services, project leaders and laboratory personnel. The interviews followed a questionnaire designed to combine open questions with, to a lesser extent, items answered on a Likert-type rating scale. From the information obtained in this phase, a scientific article was written (currently under review), and presentation proposals were submitted to different academic forums. A second phase will consist of an ethnographic study within the organization lasting about three months. In addition, interviews are planned with external actors around the UDIBI (suppliers, advisors, IPN officials), including contact with an academic specialized in absorptive capacities who will comment on this thesis. The initial findings point in two directions: i) institutional, technological and organizational management elements exist that encourage and/or limit the creation of absorptive capacities in this scientific and technological laboratory, and ii) UDIBI has created a set of knowledge transfer mechanisms that have permitted it to build a large base of prior knowledge.
Keywords: absorptive capabilities, biopharmaceutical industry, high research and development intensity industries, knowledge management, transfer of knowledge
Procedia PDF Downloads 225
506 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOA was first introduced in 2014, when it performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, highlights the potential that quantum computing holds. It also points to a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization.
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, a linear-approximation-based method, and in some instances it can even find a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
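The evolutionary component can be illustrated on Max-Cut directly. This sketch evolves cut bitstrings for a toy graph rather than QAOA circuit parameters, so it only shows the EA mechanics (fitness ranking, elitist survival, bit-flip mutation) assumed in the hybrid scheme, not the quantum part:

```python
import random

def cut_value(graph_edges, bitstring):
    """Number of edges crossing the partition encoded by the bitstring."""
    return sum(1 for u, v in graph_edges if bitstring[u] != bitstring[v])

def evolve_max_cut(graph_edges, n_nodes, pop_size=20, generations=50, seed=0):
    """Toy elitist EA: keep the fitter half each generation and refill the
    population with single-bit-flip mutants of the survivors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_nodes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: cut_value(graph_edges, s), reverse=True)
        survivors = pop[: pop_size // 2]          # elitism: best half survives
        children = []
        for s in survivors:
            child = s[:]
            child[rng.randrange(n_nodes)] ^= 1    # single bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda s: cut_value(graph_edges, s))

# 4-cycle: the maximum cut separates alternating nodes and cuts all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best = evolve_max_cut(edges, 4)
print(cut_value(edges, best))
```

Because each candidate's fitness is evaluated independently, the inner loop over the population is exactly the part that the abstract proposes to parallelize; in the hybrid algorithm the genome would hold the QAOA angles and the fitness call would be a circuit execution.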
Procedia PDF Downloads 60
505 Mineralogical Study of the Triassic Clay of Maaziz and the Miocene Marl of Akrach in Morocco: Analysis and Evaluation of the Two Geomaterials for the Construction of Ceramic Bricks
Authors: Sahar El Kasmi, Ayoub Aziz, Saadia Lharti, Mohammed El Janati, Boubker Boukili, Nacer El Motawakil, Mayom Chol Luka Awan
Abstract:
Two geomaterials (a red Triassic clay from the Maaziz region and a yellow Pliocene clay from the Akrach region) were used to create different mixtures for the fabrication of ceramic bricks. This study investigated the influence of the Pliocene clay on the overall composition and mechanical properties of the Triassic clay. The red Triassic clay, sourced from Maaziz, underwent various mechanical processes and treatments to facilitate its transformation into ceramic bricks for construction. The Triassic clay was placed in a drying chamber and a heating chamber at 100°C to remove moisture. Subsequently, the dried clay samples were processed in a planetary ball mill to reduce particle size and improve homogeneity. The resulting clay material was sieved, and the fine particles below 100 mm were collected for further analysis. In parallel, the Miocene marl obtained from the Akrach region was fragmented into finer particles and subjected to the same drying, grinding, and sieving procedures as the Triassic clay. The two clay samples were then amalgamated and homogenized in different proportions. Precise measurements were taken using a weighing balance, and mixtures of 90%, 80%, and 70% Triassic clay with 10%, 20%, and 30% yellow clay were prepared, respectively. To evaluate the impact of the Pliocene marl on the composition, the prepared clay mixtures were spread evenly and treated with a water modifier to enhance plasticity. The clay was then molded using a brick-making machine, and the initial manipulation process was observed. Additional batches were prepared with incremental amounts of Pliocene marl to further investigate its effect on the fracture behavior of the clay, specifically its resistance. The molded clay bricks were subjected to compression tests to measure their strength and resistance to deformation.
Additional tests, such as water absorption tests, were also conducted to assess the overall performance of the ceramic bricks fabricated from the different clay mixtures. The results were analyzed to determine the influence of the Pliocene marl on the strength and durability of the Triassic clay bricks. The results indicated that the incorporation of Pliocene clay reduced fracturing of the Triassic clay, with a noticeable reduction observed at 10% addition. No fractures were observed when 20% or 30% of yellow clay was added. These findings suggest that yellow clay can enhance the mechanical properties and structural integrity of red clay-based products.
Keywords: Triassic clay, Pliocene clay, mineralogical composition, geomaterials, ceramics, Akrach region, Maaziz region, Morocco
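The mixture proportioning and the water absorption test mentioned above involve only simple arithmetic; a sketch with illustrative masses (not the study's measurements):

```python
def mixture_masses(total_g, triassic_fraction):
    """Split a batch into Triassic (red) and Pliocene (yellow) clay masses
    for a given red-clay mass fraction, e.g. 0.9, 0.8 or 0.7."""
    red = total_g * triassic_fraction
    return red, total_g - red

def water_absorption_pct(dry_mass_g, saturated_mass_g):
    """Water absorption of a fired brick: mass gained after soaking,
    expressed as a percentage of the dry mass."""
    return 100.0 * (saturated_mass_g - dry_mass_g) / dry_mass_g

red, yellow = mixture_masses(1000.0, 0.7)   # the 70/30 blend
print(round(red), round(yellow))            # → 700 300
print(round(water_absorption_pct(2000.0, 2300.0), 1))  # → 15.0
```

Lower absorption percentages generally indicate a denser, more durable fired body, which is why the absorption test complements the compression test when comparing blends.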
Procedia PDF Downloads 88

504 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification
Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi
Abstract:
Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound and combined with the search, or as an alternative to and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters.
Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features is eliminated at each step. The proposed algorithm uses the clustering algorithm bound and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex), demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix
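The per-step "select one feature, drop its whole cluster" loop described above can be sketched as follows (a minimal illustration assuming a hypothetical separability score per feature and a precomputed cluster label per feature; not the authors' implementation):

```python
def sequential_cluster_selection(scores, clusters, n_select):
    """At each step: pick the best-scoring remaining feature, keep it,
    and drop every other feature belonging to the same cluster
    (those are treated as redundant)."""
    remaining = set(range(len(scores)))
    selected = []
    while remaining and len(selected) < n_select:
        best = max(remaining, key=lambda f: scores[f])  # separability proxy
        selected.append(best)
        remaining -= {f for f in remaining if clusters[f] == clusters[best]}
    return selected

# toy run: 6 features in 3 clusters, invented separability scores
scores = [0.9, 0.8, 0.4, 0.7, 0.6, 0.2]
clusters = [0, 0, 1, 1, 2, 2]
print(sequential_cluster_selection(scores, clusters, 3))  # [0, 3, 4]
```

Note how feature 1 (score 0.8) is never considered: it shares cluster 0 with the already-selected feature 0, which is exactly the redundancy-elimination effect described in the abstract.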
Procedia PDF Downloads 138

503 Analyzing the Impact of Bariatric Surgery in Obesity Associated Chronic Kidney Disease: A 2-Year Observational Study
Authors: Daniela Magalhaes, Jorge Pedro, Pedro Souteiro, Joao S. Neves, Sofia Castro-Oliveira, Vanessa Guerreiro, Rita Bettencourt-Silva, Maria M. Costa, Ana Varela, Joana Queiros, Paula Freitas, Davide Carvalho
Abstract:
Introduction: Obesity is an independent risk factor for renal dysfunction. Our aims were to: (1) evaluate the impact of bariatric surgery (BS) on renal function; (2) clarify the factors determining the postoperative evolution of the glomerular filtration rate (GFR); (3) assess the occurrence of oxalate-mediated renal complications. Methods: We investigated a cohort of 1448 obese patients who underwent bariatric surgery. Those with a basal GFR (GFR0) < 30 mL/min or without information about the GFR 2 years post-surgery (GFR2) were excluded. Results: We included 725 patients, of whom 647 (89.2%) were women, with a median age of 41 (IQR 34-51) years, a median weight of 112.4 (IQR 103.0-125.0) kg and a median BMI of 43.4 (IQR 40.6-46.9) kg/m2. Of these, 459 (63.3%) underwent gastric bypass (RYGB), 144 (19.9%) received an adjustable gastric band (AGB) and 122 (16.8%) underwent vertical gastrectomy (VG). At 2 years post-surgery, excess weight loss (EWL) was 60.1 (IQR 43.7-72.4)%. There was a significant improvement of the metabolic and inflammatory status, as well as a significant decrease in the proportion of patients with diabetes, arterial hypertension and dyslipidemia (p < 0.0001). At baseline, 38 (5.2%) of the subjects had hyperfiltration with a GFR0 ≥ 125 mL/min/1.73m2, 492 (67.9%) had a GFR0 of 90-124 mL/min/1.73m2, 178 (24.6%) had a GFR0 of 60-89 mL/min/1.73m2, and 17 (2.3%) had a GFR0 < 60 mL/min/1.73m2. GFR decreased in 63.2% of patients with hyperfiltration (ΔGFR = -2.5 ± 7.6), and increased in 96.6% (ΔGFR = 22.2 ± 12.0) and 82.4% (ΔGFR = 24.3 ± 30.0) of the subjects with GFR0 of 60-89 and < 60 mL/min/1.73m2, respectively (p < 0.0001). This trend was maintained when adjustment was made for the type of surgery performed. Of 321 patients, 10 (3.3%) had a urinary albumin excretion (UAE) > 300 mg/dL (A3), 44 (14.6%) had a UAE of 30-300 mg/dL (A2) and 247 (82.1%) had a UAE < 30 mg/dL (A1). Albuminuria decreased after surgery, and at the 2-year follow-up only 1 (0.3%) patient had A3, 17 (5.6%) had A2 and 283 (94%) had A1 (p < 0.0001).
In multivariate analysis, the variables independently associated with ΔGFR were BMI (positively) and fasting plasma glucose (negatively). During the 2-year follow-up, only 57 of the 725 patients had transient urinary excretion of calcium oxalate crystals. None had records of oxalate-mediated renal complications at our center. Conclusions: The evolution of GFR after BS seems to depend on the initial renal function, as it decreases in subjects with hyperfiltration but tends to increase in those with renal dysfunction. Our results suggest that BS is associated with an improvement of renal outcomes, without a significant increase of renal complications. Thus, apart from the clear benefits in metabolic and inflammatory status, obese adults with non-dialysis-dependent CKD should perhaps be referred for bariatric surgery evaluation.
Keywords: albuminuria, bariatric surgery, glomerular filtration rate, renal function
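The baseline GFR strata and the ΔGFR measure used throughout the study are straightforward to encode; a minimal sketch (the cut-offs are the ones quoted in the abstract, the function names are illustrative):

```python
def gfr_category(gfr):
    """Baseline GFR strata (mL/min/1.73 m2) as defined in the study."""
    if gfr >= 125:
        return "hyperfiltration"
    if gfr >= 90:
        return "90-124"
    if gfr >= 60:
        return "60-89"
    return "<60"

def delta_gfr(gfr0, gfr2):
    """Change in GFR between baseline and the 2-year follow-up."""
    return gfr2 - gfr0

print(gfr_category(130))      # hyperfiltration
print(gfr_category(75))       # 60-89
print(delta_gfr(70.0, 94.0))  # positive: renal function improved
```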
Procedia PDF Downloads 360

502 Preparation of β-Polyvinylidene Fluoride Film for Self-Charging Lithium-Ion Battery
Authors: Nursultan Turdakyn, Alisher Medeubayev, Didar Meiramov, Zhibek Bekezhankyzy, Desmond Adair, Gulnur Kalimuldina
Abstract:
In recent years, the development of sustainable energy sources has attracted extensive research interest due to the ever-growing demand for energy. As an alternative energy source to power small electronic devices, ambient energy harvesting from vibration or human body motion is considered a potential candidate. Despite enormous progress in battery research over about three decades in terms of safety, life cycle and energy density, batteries have not reached the level needed to conveniently power wearable electronic devices such as smartwatches, bands, hearing aids, etc. For this reason, the development of self-charging power units with excellent flexibility and integrated energy harvesting and storage is crucial. Self-powering is a key idea that makes it possible for a system to operate sustainably, and it is gaining acceptance in fields such as sensor networks, the Internet of Things (IoT) and implantable in-vivo medical devices. To solve this energy-harvesting issue, self-powering nanogenerators (NGs) were proposed and have proved highly effective. Usually, sustainable power is delivered through energy harvesting and storage devices by connecting them to a power management circuit; for energy storage, the Li-ion battery (LIB) is one of the most effective technologies. Through the movement of Li ions driven by an externally applied voltage, electrochemical reactions at the anode and cathode store electrical energy as chemical energy. In this paper, we present a simultaneous process of converting mechanical energy into chemical energy, in which an NG and an LIB are combined as an all-in-one power system. The electrospinning method was used as an initial step in the development of such a system with a β-PVDF separator. The obtained film showed promising voltage output at different stress frequencies.
X-ray diffraction (XRD) and Fourier Transform Infrared Spectroscopy (FT-IR) analysis showed a high percentage of the β phase in the PVDF polymer material. Moreover, it was found that the addition of 1 wt.% of BTO (barium titanate) results in higher-quality fibers. When a pure 20 wt.% PVDF solution was compared with one containing BTO, the latter was more viscous; hence, the sample was electrospun uniformly without any beads. Lastly, to test the sensor application of such a film, a dedicated testing device was developed. With this device, the force of a finger tap can be applied at different frequencies so that electrical signal generation can be validated.
Keywords: electrospinning, nanogenerators, piezoelectric PVDF, self-charging Li-ion batteries
Procedia PDF Downloads 162

501 Foundation Phase Teachers' Experiences of School Based Support Teams: A Case of Selected Schools in Johannesburg
Authors: Ambeck Celyne Tebid, Harry S. Rampa
Abstract:
The South African education system recognises the need for all learners, including those experiencing learning difficulties, to have access to a single unified system of education. Expecting teachers to be pedagogically responsive to an increasingly diverse learner population without appropriate support has proven unrealistic. As such, this has considerably hampered interest amongst teachers, especially those at the foundation phase, in working within an Inclusive Education (IE) and training system. This qualitative study aimed at investigating foundation phase teachers' experiences of school-based support teams (SBSTs) in two full-service (inclusive) schools and one mainstream public primary school in the Gauteng province of South Africa, with particular emphasis on finding ways of supporting them, since teachers claimed they were not empowered in their initial training to teach learners experiencing learning difficulties. Hence, SBSTs were created at school level to fill this gap, thereby supporting teaching and learning by identifying and addressing learners', teachers' and schools' needs. With the notion that IE may be failing for systemic reasons, this study uses Bronfenbrenner's (1979) ecosystemic theory as well as Piaget's (1980) maturational theory to examine the nature of support and the experiences amongst teachers, taking individual and systemic factors into consideration. Data was collected using in-depth, face-to-face interviews, document analysis and observation with 6 foundation phase teachers drawn from 3 different schools, 3 SBST coordinators, and 3 school principals. Data was analysed using the phenomenological data analysis method. Amongst the findings of the study is that South African full-service and mainstream schools have functional SBSTs which render formal and informal support to teachers; this support varies in quality depending on the socio-economic status of the community where the schools are situated.
This paper, however, argues that what foundation phase teachers settled for as 'support' is flawed, and that how they perceive the SBST and its role is problematic. The paper concludes by recommending that the SBST consider other approaches to foundation phase teacher support, such as empowering teachers with continuous practical experience of dealing with real classroom scenarios, and ensuring that all support, be it on academic or non-academic issues, is provided within a learning community framework in which the teacher, family, SBST and, where necessary, community organisations harness their skills towards a common goal.
Keywords: foundation phase, full-service schools, inclusive education, learning difficulties, school-based support teams, teacher support
Procedia PDF Downloads 236

500 Development of Perovskite Quantum Dots Light Emitting Diode by Dual-Source Evaporation
Authors: Antoine Dumont, Weiji Hong, Zheng-Hong Lu
Abstract:
Light emitting diodes (LEDs) are steadily becoming the new standard for luminescent display devices because of their energy efficiency, relatively low cost, and the purity of the light they emit. Our research focuses on the optical properties of the lead halide perovskite CsPbBr₃ and its family, which is showing steadily improving performance in LEDs and solar cells. The objective of this work is to investigate CsPbBr₃ as an emitting layer made by physical vapor deposition, instead of the usual solution-processed perovskites, for use in LEDs. Deposition in vacuum eliminates any risk of contaminants as well as the need for chemical ligands in the synthesis of quantum dots. Initial results show the versatility of the dual-source evaporation method, which allowed us to create different phases in bulk form by altering the mole ratio or deposition rate of CsBr and PbBr₂. The distinct phases Cs₄PbBr₆, CsPbBr₃ and CsPb₂Br₅, confirmed through XPS (X-ray photoelectron spectroscopy) and X-ray diffraction analysis, have different optical properties and morphologies that can be used for specific applications in optoelectronics. We are particularly focused on the blue shift expected from quantum dots (QDs) and the stability of the perovskite in this form. We have already obtained proof of the formation of QDs through our dual-source evaporation method with electron microscope imaging and photoluminescence testing, which we understand is a first in the community. We have also incorporated the QDs in an LED structure to test the electroluminescence and the effect on performance, and have already observed a significant wavelength shift. The goal is to reach 480 nm after shifting from the original 528 nm bulk emission. The hole transport layer (HTL) material onto which the CsPbBr₃ is evaporated is a critical part of this study, as the surface energy interaction dictates the behaviour of the QD growth. A thorough study to determine the optimal HTL is in progress.
A strong blue shift for a typically green-emitting material like CsPbBr₃ would eliminate the need for blue-emitting Cl-based perovskite compounds and could prove to be more stable in a QD structure. The final aim is to make a perovskite QD LED with strong blue luminescence, fabricated through a dual-source evaporation technique that could be scaled to industry level, making this device a viable and cost-effective alternative to current commercial LEDs.
Keywords: material physics, perovskite, light emitting diode, quantum dots, high vacuum deposition, thin film processing
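The targeted blue shift can also be expressed in energy terms via E = hc/λ; a small sketch (the 528 nm and 480 nm wavelengths are the values quoted in the abstract):

```python
def photon_energy_ev(wavelength_nm):
    """Photon energy E = hc / lambda, using hc ≈ 1239.84 eV·nm."""
    return 1239.84 / wavelength_nm

bulk = photon_energy_ev(528)    # bulk CsPbBr3 emission
target = photon_energy_ev(480)  # targeted QD emission
print(round(bulk, 3), round(target, 3), round(target - bulk, 3))
# 2.348 2.583 0.235  -> the sought blue shift is about 0.23 eV
```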
Procedia PDF Downloads 161

499 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon
Authors: Jeffrey A. Amelse
Abstract:
Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ sequestration and biomass carbon sequestration. Short reviews are provided of prior technologies proposed for reducing CO₂ emissions from fossil fuels or for substituting renewable energy, to highlight their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will simply cost money, its scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy-inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from the continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂, but trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering the biomass carbon. Sequestering tree leaves is proposed as a solution.
Unlike wood, leaves have a short Carbon Cycle time constant: they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves could get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂. The latter can take hundreds to thousands of years. The first step in anaerobic decomposition is the hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria.
Keywords: carbon dioxide, net zero, sequestration, biomass, leaves
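Allometric equations of the kind the paper relies on typically take the form ln(W) = b0 + b1·ln(DBH); the sketch below uses that generic form with placeholder coefficients (b0, b1, the DBH value and the resulting numbers are illustrative only, not the USDA values the paper refers to):

```python
import math

def foliage_biomass_kg(dbh_cm, b0=-4.0, b1=2.0):
    """Generic allometric form ln(W) = b0 + b1 * ln(DBH).
    b0 and b1 are placeholder coefficients, not published USDA values."""
    return math.exp(b0 + b1 * math.log(dbh_cm))

def carbon_in_biomass_kg(dry_biomass_kg, carbon_fraction=0.5):
    """Roughly half of dry plant biomass is carbon."""
    return dry_biomass_kg * carbon_fraction

def co2_equivalent_kg(carbon_kg):
    """Convert carbon mass to CO2 mass (molar masses 44 and 12 g/mol)."""
    return carbon_kg * 44.0 / 12.0

leaves = foliage_biomass_kg(30.0)  # one hypothetical 30 cm DBH tree
print(round(co2_equivalent_kg(carbon_in_biomass_kg(leaves)), 1))
# ≈ 30.2 kg CO2 per year with these placeholder coefficients
```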
Procedia PDF Downloads 129

498 Investigating the Essentiality of Oxazolidinones in Resistance-Proof Drug Combinations in Mycobacterium tuberculosis Selected under in vitro Conditions
Authors: Gail Louw, Helena Boshoff, Taeksun Song, Clifton Barry
Abstract:
Drug resistance in Mycobacterium tuberculosis is primarily attributed to mutations in drug target genes. These mutations incur a fitness cost, resulting in bacterial generations that are less fit and subsequently acquire compensatory mutations to restore fitness. We hypothesize that mutations in specific drug target genes influence bacterial metabolism and cellular function, which affects the ability to develop subsequent resistance to additional agents. We aim to determine whether the sequential acquisition of drug resistance and specific mutations in a well-defined clinical M. tuberculosis strain promotes or limits the development of additional resistance. In vitro mutants resistant to pretomanid, linezolid, moxifloxacin, rifampicin and kanamycin were generated from a pan-susceptible clinical strain of the Beijing lineage. The resistant phenotypes to the anti-TB agents were confirmed by the broth microdilution assay, and genetic mutations were identified by targeted gene sequencing. Mono-resistant mutants were grown in enriched medium for 14 days to assess in vitro fitness. Double-resistant mutants were generated against anti-TB drug combinations at concentrations of 5x and 10x the minimum inhibitory concentration. Subsequently, mutation frequencies for these anti-TB drugs in the different mono-resistant backgrounds were determined. The initial level of resistance and the mutation frequencies observed for the mono-resistant mutants were comparable to those previously reported. Targeted gene sequencing revealed the presence of known and clinically relevant mutations in the mutants resistant to linezolid, rifampicin, kanamycin and moxifloxacin. Significant growth defects were observed for mutants grown under in vitro conditions compared to the sensitive progenitor.
Determination of mutation frequencies in the mono-resistant mutants revealed a significant increase in mutation frequency against rifampicin and kanamycin, but a significant decrease in mutation frequency against linezolid and sutezolid. This suggests that these mono-resistant mutants are more prone to develop resistance to rifampicin and kanamycin, but less prone to develop resistance against linezolid and sutezolid. Even though kanamycin and linezolid both inhibit protein synthesis, these compounds target different subunits of the ribosome, thereby leading to different fitness outcomes in the mutants with impaired cellular function. These observations showed that oxazolidinone treatment is instrumental in limiting the development of multi-drug resistance in M. tuberculosis in vitro.
Keywords: oxazolidinones, mutations, resistance, tuberculosis
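Mutation-frequency comparisons of the kind reported above are ratios of resistant colonies to total viable counts; a minimal sketch (the CFU numbers below are invented for illustration, not the study's data):

```python
import math

def mutation_frequency(resistant_cfu, total_cfu):
    """Mutation frequency = resistant colonies / total viable count plated."""
    return resistant_cfu / total_cfu

def log10_shift(freq_in_mutant_background, freq_in_progenitor):
    """log10 fold change of the frequency in a mono-resistant background
    relative to the susceptible progenitor: > 0 means more prone,
    < 0 means less prone to develop the second resistance."""
    return math.log10(freq_in_mutant_background / freq_in_progenitor)

# invented counts: rifampicin resistance in a mono-resistant background
f_mut = mutation_frequency(45, 1.5e9)  # 3e-8
f_wt = mutation_frequency(3, 1.0e9)    # 3e-9
print(f"{log10_shift(f_mut, f_wt):+.1f}")  # positive: increased frequency
```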
Procedia PDF Downloads 163

497 Evaluation of Invasive Tree Species for Production of Phosphate Bonded Composites
Authors: Stephen Osakue Amiandamhen, Schwaller Andreas, Martina Meincken, Luvuyo Tyhoda
Abstract:
Invasive alien tree species are currently being cleared in South Africa as a result of forest and water imbalances. These species grow wild, constituting about 40% of the total forest area. They compete with the ecosystem for natural resources and are considered ecosystem engineers, rapidly changing disturbance regimes. As such, they are harvested for commercial uses, but much of the material is wasted because of its form and structure. The waste is sold to local communities as fuel wood. These species can be considered a potential feedstock for the production of phosphate bonded composites. The presence of bark in wood-based composites leads to undesirable properties, and debarking as an option can be cost-implicative. This study investigates the potential of these invasive species, processed without debarking, with respect to some fundamental properties of wood-based panels. Invasive alien tree species were collected from EC Biomass, Port Elizabeth, South Africa. They include Acacia mearnsii (Black wattle), A. longifolia (Long-leaved wattle), A. cyclops (Red-eyed wattle), A. saligna (Golden-wreath wattle) and Eucalyptus globulus (Blue gum). The logs were chipped as received. The chips were hammer-milled and screened through a 1 mm sieve. The wood particles were conditioned, and the quantity of bark in the wood was determined. The binding matrix was prepared using a reactive magnesia, phosphoric acid and class S fly ash. The materials were mixed and poured into a metallic mould. The composite within the mould was compressed at room temperature at a pressure of 200 kPa. After initial setting, which took about 5 minutes, the composite board was demoulded and air-cured for 72 h. The cured product was thereafter conditioned at 20°C and 70% relative humidity for 48 h. Tests of physical and strength properties were conducted on the composite boards.
The effect of binder formulation and fly ash content on the properties of the boards was studied using fitted response surface methodology, according to a central composite experimental design (CCD), at a fixed wood loading of 75% (w/w) of the total inorganic content. The results showed that a phosphate/magnesia ratio of 3:1 and a fly ash content of 10% were required to obtain a product of good properties and sufficient strength for the intended applications. The proposed products can be used for ceilings, partitioning and insulating wall panels.
Keywords: invasive alien tree species, phosphate bonded composites, physical properties, strength
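A central composite design analysis of this kind fits a second-order response surface in the two factors (binder ratio and fly ash content); a minimal least-squares sketch on synthetic data (the data points and the underlying response are invented, not the study's measurements):

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of a second-order response surface:
    y = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# synthetic demo: (phosphate/magnesia ratio, fly ash %) -> made-up strength
rng = np.random.default_rng(0)
x1 = rng.uniform(1.0, 4.0, 20)   # binder ratio
x2 = rng.uniform(0.0, 30.0, 20)  # fly ash content (%)
y = 5.0 + 2.0 * x1 - 0.05 * x2 + 0.1 * x1**2  # known surface, no noise
coeffs = fit_quadratic_surface(x1, x2, y)
print(np.round(coeffs, 3))
```

With noise-free synthetic data the fit recovers the known coefficients; on real CCD measurements the same fit gives the surface from which an optimum such as the 3:1 ratio at 10% fly ash would be read off.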
Procedia PDF Downloads 295

496 Superhydrophobic Materials: A Promising Way to Enhance Resilience of Electric System
Authors: M. Balordi, G. Santucci de Magistris, F. Pini, P. Marcacci
Abstract:
The increase in extreme meteorological events is among the most important causes of damage and blackouts in the electric system. In particular, icing on ground wires and overhead lines due to snowstorms or harsh winter conditions very often gives rise to the collapse of cables and towers, in both cold and warm climates. On the other hand, the high concentration of contaminants in the air, due to natural and/or anthropic causes, is reflected in high levels of pollutants layered on glass and ceramic insulators, causing frequent and unpredictable flashover events. Overhead line and insulator failures lead to blackouts, dangerous and expensive maintenance, and serious inefficiencies in the distribution service. Imparting superhydrophobic (SHP) properties to conductors, ground wires and insulators is one way to face all these problems. Indeed, in some cases, an SHP surface can delay the ice nucleation time and decrease the ice nucleation temperature, preventing ice formation. Besides, thanks to the low surface energy, the adhesion force between ice and a superhydrophobic material is low, and the ice can be easily detached from the surface. Moreover, it is well known that superhydrophobic surfaces can have self-cleaning properties: these hinder the deposition of pollution and decrease the probability of flashover phenomena. Here we present three studies on imparting superhydrophobicity to aluminum, zinc and glass specimens, which represent the main constituent materials of conductors, ground wires and insulators, respectively. The route to impart superhydrophobicity to the metallic surfaces can be summarized as a three-step process: 1) sandblasting treatment, 2) chemical-hydrothermal treatment and 3) coating deposition. The first step is required to create a micro-roughness.
In the chemical-hydrothermal treatment, a nano-scale metal oxide (of Al or Zn) is grown which, together with the sandblasting treatment, brings about a hierarchical micro-nano structure. Depositing an alkylated or fluorinated siloxane coating then lowers the surface energy and gives rise to superhydrophobic surfaces. In order to functionalize the glass, different superhydrophobic powders, obtained by sol-gel synthesis, were prepared. The specimens were then covered with a commercial primer, and the powders were deposited on them. All the resulting metallic and glass surfaces showed notable superhydrophobic behavior, with very high water contact angles (>150°) and very low roll-off angles (<5°). The three optimized processes are fast, cheap and safe, and can be easily replicated at industrial scale. The anti-icing and self-cleaning properties of the surfaces were assessed with several indoor lab tests, which evidenced remarkable anti-icing properties and self-cleaning behavior with respect to the bare materials. Finally, to evaluate the anti-snow properties of the samples, some SHP specimens were exposed to real snowfall events in the RSE outdoor test facility located in Vinadio, in the western Alps: the coated samples delayed the formation of snow sleeves and facilitated the detachment of the snow. The good results of both the indoor and outdoor tests make these materials promising for further development in large-scale applications.
Keywords: superhydrophobic coatings, anti-icing, self-cleaning, anti-snow, overhead lines
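The two wetting thresholds used above to call a surface superhydrophobic are easy to encode; a trivial sketch (the 150° and 5° cut-offs are the ones quoted in the abstract, the example measurements are invented):

```python
def is_superhydrophobic(contact_angle_deg, roll_off_deg):
    """Criterion used here: static water contact angle above 150 degrees
    and roll-off (sliding) angle below 5 degrees."""
    return contact_angle_deg > 150.0 and roll_off_deg < 5.0

print(is_superhydrophobic(158.0, 3.0))  # True  (coated specimen)
print(is_superhydrophobic(95.0, 25.0))  # False (bare metal)
```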
Procedia PDF Downloads 183

495 Seismic Behavior of Existing Reinforced Concrete Buildings in California under Mainshock-Aftershock Scenarios
Authors: Ahmed Mantawy, James C. Anderson
Abstract:
Numerous cases of earthquakes (main-shocks) followed by aftershocks have been recorded in California. In 1992, a pair of strong earthquakes occurred within three hours of each other in Southern California. The first shock occurred near the community of Landers and was assigned a magnitude of 7.3; the second shock occurred near the city of Big Bear, about 20 miles west of the initial shock, and was assigned a magnitude of 6.2. In the same year, a series of three earthquakes occurred over two days in the Cape Mendocino area of Northern California. The main-shock was assigned a magnitude of 7.0, while the second and third shocks were both assigned a value of 6.6. This paper investigates the effect of a main-shock accompanied by aftershocks of significant intensity on the nonlinear behavior of reinforced concrete (RC) frame buildings, using PERFORM-3D software. A 6-story building in San Bruno and a 20-story building in North Hollywood were selected for the study, as both have RC moment-resisting frame systems. The buildings are also instrumented at multiple floor levels as part of the California Strong Motion Instrumentation Program (CSMIP). Both buildings have recorded responses from past events such as the Loma Prieta and Northridge earthquakes, which were used to verify the response parameters of the numerical models in PERFORM-3D. The verification of the numerical models shows good agreement between the calculated and recorded response values. Different scenarios of a main-shock followed by a series of aftershocks from real cases in California were then applied to the building models in order to investigate the structural behavior of the moment-resisting frame system. The behavior was evaluated in terms of lateral floor displacements, ductility demands, and inelastic behavior at critical locations.
The analysis results showed that permanent displacements may occur due to plastic deformation during the main-shock, which can lead to higher displacements during aftershocks. Also, the inelastic response at plastic hinges during the main-shock can change the hysteretic behavior during the aftershocks. Higher ductility demands can also occur when buildings are subjected to trains of ground motions rather than to individual ground motions. A general conclusion is that the occurrence of aftershocks following an earthquake can lead to increased damage within the elements of RC frame buildings. Current code provisions for seismic design do not consider the probability of significant aftershocks when designing a new building in zones of high seismic activity.
Keywords: reinforced concrete, existing buildings, aftershocks, damage accumulation
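Displacement ductility demand, one of the response measures compared above, is simply the ratio of peak to yield displacement; a minimal sketch (the displacement values are invented for illustration, not results from the study):

```python
def ductility_demand(peak_displacement, yield_displacement):
    """mu = delta_max / delta_y; mu > 1 indicates inelastic response."""
    return peak_displacement / yield_displacement

# invented roof displacements (mm): main-shock alone vs. a full sequence
mu_mainshock = ductility_demand(180.0, 60.0)  # 3.0
mu_sequence = ductility_demand(240.0, 60.0)   # 4.0
print(mu_mainshock, mu_sequence)
```

The increase from 3.0 to 4.0 in this toy example mirrors the qualitative finding that ground-motion trains impose higher ductility demands than individual records.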
Procedia PDF Downloads 280

494 Modeling of Anode Catalyst against CO in Fuel Cell Using Material Informatics
Authors: M. Khorshed Alam, H. Takaba
Abstract:
The catalytic properties of a metal usually change upon intermixing with another metal in polymer electrolyte fuel cells. The Pt-Ru alloy is one of the most widely used alloys for enhancing CO oxidation. In this work, we have investigated the CO coverage on the Pt2Ru3 nanoparticle with different atomic conformations of Pt and Ru, using a combination of material informatics and computational chemistry. Density functional theory (DFT) calculations were used to describe the adsorption strength of CO and H for different Pt/Ru conformations of the Pt2Ru3 slab surface. Then, through Monte Carlo (MC) simulations, we examined the segregation behaviour of Pt as a function of the surface atom ratio, the subsurface atom ratio, and the particle size of the Pt2Ru3 nanoparticle. We constructed a regression equation so as to reproduce the results of DFT from the structural descriptors alone. The following descriptors were selected for the regression equation: xa-b indicates the number of bonds between a targeted atom a and a neighboring atom b in the same layer (a, b = Pt or Ru); the terms xa-H2 and xa-CO represent the number of atoms a binding H2 and CO molecules, respectively; xa-S is the number of atoms a on the surface; xa-b- is the number of bonds between atom a and a neighboring atom b located outside the layer. Surface segregation in alloy nanoparticles is influenced by their component elements, composition, crystal lattice, shape and size, and by the nature of the adsorbates and their pressure, the temperature, etc. Simulations were performed on nanoparticles of different sizes (2.0 nm, 3.0 nm), with Pt and Ru atoms mixed in different conformations, at a temperature of 333 K. In addition to the Pt2Ru3 alloy, we also considered pure Pt and Ru nanoparticles to compare the surface coverage by adsorbates (H2, CO). We assumed that the pure and Pt-Ru alloy nanoparticles have an fcc crystal structure as well as a cubo-octahedral shape, bounded by (111) and (100) facets.
Simulations were performed for up to 50 million MC steps. The MC results show that, in the presence of gases (H2, CO), the surfaces are occupied by the gas molecules. In the equilibrium structure, the coverage of H and CO depends on the nature of the surface atoms. In the initial structure, the Pt/Ru ratios on the surfaces for the different cluster sizes were in the range of 0.50 - 0.95. MC simulations were run with the partial pressure of H2 (PH2) at 70 kPa and of CO (PCO) at 100-500 ppm. The Pt/Ru ratios decrease as the CO concentration increases, with little exception, and only for the small nanoparticle. The adsorption strength of CO on the Ru site is higher than on the Pt site, which would be one reason for the decrease of the Pt/Ru ratio on the surface. Therefore, our study identifies that the nanoparticle size, composition, conformation of the alloying atoms, and the concentration and chemical potential of the adsorbates all have an impact on the stability of nanoparticle alloys and, ultimately, on the overall catalytic performance during operation.
Keywords: anode catalysts, fuel cells, material informatics, Monte Carlo
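As an aside, a descriptor-based regression of this kind can serve as a cheap energy surrogate inside the MC loop. The sketch below is illustrative only: the coefficient values, descriptor counts, and helper names are assumptions, not the fitted model from this work. It shows the surrogate evaluation together with a standard Metropolis acceptance test at 333 K.

```python
import math
import random

# Hypothetical linear-regression surrogate for the DFT energy; the
# coefficients (eV per descriptor unit) are placeholders, not fitted values.
COEFS = {"x_PtPt": -0.10, "x_PtRu": -0.15, "x_RuRu": -0.20,
         "x_Pt_CO": -1.60, "x_Ru_CO": -1.80, "x_Pt_S": 0.05}

def surrogate_energy(descriptors):
    """Energy (eV) predicted from structural descriptors x_a-b."""
    return sum(COEFS[k] * v for k, v in descriptors.items())

def metropolis_accept(dE, T, rng):
    """Standard Metropolis criterion at temperature T (K)."""
    kB = 8.617e-5  # Boltzmann constant, eV/K
    return dE <= 0 or rng.random() < math.exp(-dE / (kB * T))

rng = random.Random(0)
E = surrogate_energy({"x_PtPt": 4, "x_PtRu": 6, "x_RuRu": 2,
                      "x_Pt_CO": 1, "x_Ru_CO": 2, "x_Pt_S": 5})
print(round(E, 3))
```

In a full simulation, a trial Pt/Ru swap would change the descriptor counts, and `metropolis_accept` would decide whether to keep the new configuration.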
Procedia PDF Downloads 192
493 Stent Surface Functionalisation via Plasma Treatment to Promote Fast Endothelialisation
Authors: Irene Carmagnola, Valeria Chiono, Sandra Pacharra, Jochen Salber, Sean McMahon, Chris Lovell, Pooja Basnett, Barbara Lukasiewicz, Ipsita Roy, Xiang Zhang, Gianluca Ciardelli
Abstract:
Thrombosis and restenosis after a stenting procedure can be prevented by promoting fast endothelialisation of the stent wall. It is well known that surface functionalisation with antifouling molecules combined with extracellular matrix proteins is a promising strategy for designing biomimetic surfaces able to promote fast endothelialisation. In particular, REDV has gained much attention for its ability to enhance rapid endothelialisation due to its specific affinity with endothelial cells (ECs). In this work, a two-step plasma treatment was performed to polymerize a thin layer of acrylic acid, used subsequently to graft PEGylated-REDV and polyethylene glycol (PEG) at different molar ratios, with the aim of selectively promoting endothelial cell adhesion while avoiding platelet activation. PEGylated-REDV was provided by Biomatik; it is formed by 6 PEG monomer repetitions (Chempep Inc.) with an NH2 terminal group. PEG polymers were purchased from Chempep Inc. with two different chain lengths: m-PEG6-NH2 (295.4 Da) with 6 monomer repetitions and m-PEG12-NH2 (559.7 Da) with 12 monomer repetitions. Plasma activation was performed at 50 W power for 5 min at an Ar flow rate of 20 sccm. Pure acrylic acid (99%, AAc) vapors were diluted in Ar (flow = 20 sccm) and polymerized by a pulsed plasma discharge, applying a discharge RF power of 200 W and a duty cycle of 10% (on time = 10 ms, off time = 90 ms) for 10 min. After plasma treatment, samples were dipped into a 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide (EDC)/N-hydroxysuccinimide (NHS) solution (ratio 4:1, pH 5.5) for 1 h at 4°C and subsequently dipped in PEGylated-REDV and PEGylated-REDV:PEG solutions at different molar ratios (100 μg/mL in PBS) for 20 h at room temperature. The surface modification was characterized through physico-chemical analyses and in vitro cell tests. The PEGylated-REDV peptide and PEG were successfully bound to the carboxylic groups formed on the polymer surface by the plasma reaction.
FTIR-ATR spectroscopy, X-ray photoelectron spectroscopy (XPS) and contact angle measurements gave a clear indication of the presence of the grafted molecules. The use of PEG as a spacer increased the wettability of the surface, and the effect became more evident with increasing amounts of PEG. Endothelial cells adhered and spread well on the surfaces functionalized with the REDV sequence. In conclusion, a selective coating able to promote a new endothelial cell layer on a polymeric stent surface was developed. In particular, a thin AAc film was polymerised on the polymeric surface in order to expose –COOH groups, and PEGylated-REDV and PEG were successfully grafted onto the polymeric substrates. The REDV peptide was shown to encourage cell adhesion, with a consequent expected improvement of the hemocompatibility of these polymeric surfaces in vivo. Acknowledgements: This work was funded by the European Commission 7th Framework Programme under grant agreement number 604251-ReBioStent (Reinforced Bioresorbable Biomaterials for Therapeutic Drug Eluting Stents). The authors thank all the ReBioStent partners for their support in this work.
Keywords: endothelialisation, plasma treatment, stent, surface functionalisation
Procedia PDF Downloads 312
492 The Diagnostic Utility and Sensitivity of the Xpert® MTB/RIF Assay in Diagnosing Mycobacterium tuberculosis in Bone Marrow Aspirate Specimens
Authors: Nadhiya N. Subramony, Jenifer Vaughan, Lesley E. Scott
Abstract:
In South Africa, the World Health Organisation estimated 454,000 new cases of Mycobacterium tuberculosis (M.tb) infection (MTB) in 2015. Disseminated tuberculosis arises from the haematogenous spread and seeding of the bacilli in extrapulmonary sites. The gold standard for the detection of MTB in bone marrow is TB culture, which has an average turnaround time of 6 weeks. Histological examination of trephine biopsies to diagnose MTB is also delayed, owing mainly to the 5-7 day processing period prior to microscopic examination. Adding to the diagnostic delay is the non-specific nature of granulomatous inflammation, which is the hallmark of MTB involvement of the bone marrow. A Ziehl-Neelsen stain (which highlights acid-fast bacilli) is therefore mandatory to confirm the diagnosis but can take up to 3 days for processing and evaluation. Owing to this delay in diagnosis, many patients are lost to follow-up or remain untreated while results are awaited, thus encouraging the spread of undiagnosed TB. The Xpert® MTB/RIF (Cepheid, Sunnyvale, CA) is the molecular test used in the South African national TB program as the initial diagnostic test for pulmonary TB. This study investigates the optimisation and performance of the Xpert® MTB/RIF on bone marrow aspirate specimens (BMA), a first since the introduction of the assay in the diagnosis of extrapulmonary TB. BMA received for immunophenotypic analysis as part of the investigation into disseminated MTB, or in the evaluation of cytopenias in immunocompromised patients, were used. Processing of BMA on the Xpert® MTB/RIF was optimised to ensure that bone marrow in EDTA and heparin did not inhibit the PCR reaction. Inactivated M.tb was spiked into clinical bone marrow specimens and distilled water (as a control). A volume of 500 µl and an incubation time of 15 minutes with sample reagent were investigated as the processing protocol.
A total of 135 BMA specimens had sufficient residual volume for Xpert® MTB/RIF testing; however, 22 specimens (16.3%) were not included in the final statistical analysis because an adequate trephine biopsy and/or TB culture was not available. Xpert® MTB/RIF testing was not affected by BMA material in the presence of heparin or EDTA, but the overall detection of MTB in BMA was low compared to histology and culture. The sensitivity of the Xpert® MTB/RIF compared to both histology and culture was 8.7% (95% confidence interval (CI): 1.07-28.04%), and the sensitivity compared to histology only was 11.1% (95% CI: 1.38-34.7%). The specificity of the Xpert® MTB/RIF was 98.9% (95% CI: 93.9-99.7%). Although the Xpert® MTB/RIF generates a faster result than histology and TB culture and is less expensive than culture and drug susceptibility testing, the low sensitivity of the Xpert® MTB/RIF precludes its use for the diagnosis of MTB in bone marrow aspirate specimens and warrants alternative/additional testing to optimise the assay.
Keywords: bone marrow aspirate, extrapulmonary TB, low sensitivity, Xpert® MTB/RIF
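For reference, sensitivity and specificity figures of this kind follow directly from a 2x2 contingency table. The counts below are a hypothetical reconstruction consistent with the reported percentages and the 113 analysable specimens; they are not taken from the study's data.

```python
def sensitivity(tp, fn):
    """Fraction of reference-positive specimens detected by the assay."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of reference-negative specimens correctly called negative."""
    return tn / (tn + fp)

# Hypothetical split of 113 analysable specimens: 23 positive by
# histology + culture, 18 positive by histology only, 90 negative.
print(round(100 * sensitivity(tp=2, fn=21), 1))  # vs. histology and culture
print(round(100 * sensitivity(tp=2, fn=16), 1))  # vs. histology only
print(round(100 * specificity(tn=89, fp=1), 1))
```

With these assumed counts, the three printed values reproduce the reported 8.7%, 11.1% and 98.9%.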
Procedia PDF Downloads 172
491 Wear Resistance in Dry and Lubricated Conditions of Hard-anodized EN AW-4006 Aluminum Alloy
Authors: C. Soffritti, A. Fortini, E. Baroni, M. Merlin, G. L. Garagnani
Abstract:
Aluminum alloys are widely used in many engineering applications due to advantages such as high electrical and thermal conductivities, low density, a high strength-to-weight ratio, and good corrosion resistance. However, their low hardness and poor tribological properties still limit their use in industrial fields requiring sliding contacts. Hard anodizing is one of the most common solutions for overcoming the insufficient friction resistance of aluminum alloys. In this work, the tribological behavior of hard-anodized AW-4006 aluminum alloys in dry and lubricated conditions was evaluated. Three different hard-anodizing treatments were selected: a conventional one (HA) and two innovative golden hard-anodizing treatments (named G and GP, respectively), which involve sealing the porosity of the anodic aluminum oxides (AAO) with silver ions at different temperatures. Before the wear tests, all AAO layers were characterized by scanning electron microscopy (VPSEM/EDS), X-ray diffractometry, roughness (Ra and Rz), microhardness (HV0.01), nanoindentation, and scratch tests. Wear tests were carried out according to the ASTM G99-17 standard using a ball-on-disc tribometer. The tests were performed in triplicate under a 2 Hz constant-frequency oscillatory motion, a maximum linear speed of 0.1 m/s, normal loads of 5, 10, and 15 N, and a sliding distance of 200 m. A 100Cr6 steel ball 10 mm in diameter was used as the counterpart material. All tests were conducted at room temperature, in dry and lubricated conditions. In view of recent regulations on environmental hazards, four bio-lubricants were considered after assessing their chemical composition (in terms of Unsaturation Number, UN) and viscosity: olive, peanut, sunflower, and soybean oils. The friction coefficient was provided by the equipment. The wear rate of the anodized surfaces was evaluated by measuring the cross-section area of the wear track with a non-contact 3D profilometer.
Each area value, obtained as an average of four measurements of the cross-section area along the track, was used to determine the wear volume. The worn surfaces were analyzed by VPSEM/EDS. Finally, in agreement with the DoE methodology, a statistical analysis was carried out to identify the factors most influencing the friction coefficients and wear rates. In all conditions, the results show that the friction coefficient increased with the normal load. In the wear tests under dry sliding conditions, irrespective of the type of anodizing treatment, metal transfer between the mating materials was observed over the anodic aluminum oxides. During sliding at higher loads, the detachment of the metallic film also caused delamination of some regions of the wear track. In the wear tests under lubricated conditions, the natural oils with high percentages of oleic acid (i.e., olive and peanut oils) maintained high friction coefficients and low wear rates. Irrespective of the type of oil, small microcracks were visible over the AAO layers. Based on the statistical analysis, the type of anodizing treatment and the magnitude of the applied load were the main factors influencing the friction coefficient and wear rate values. Nevertheless, an interaction between the bio-lubricants and the load magnitude could occur during the tests.
Keywords: hard anodizing treatment, silver ions, bio-lubricants, sliding wear, statistical analysis
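The wear-volume bookkeeping described above can be sketched in a few lines, assuming the common convention of multiplying the averaged cross-section area by the wear-track length and normalising by load and sliding distance (an Archard-type specific wear rate, k = V/(F·s)). All numeric values below are illustrative, not measured data from this study.

```python
def wear_volume(mean_cross_section_mm2, track_length_mm):
    """Wear volume (mm^3) = averaged track cross-section x track length."""
    return mean_cross_section_mm2 * track_length_mm

def specific_wear_rate(volume_mm3, load_n, sliding_distance_m):
    """Specific wear rate k = V / (F * s), in mm^3/(N*m)."""
    return volume_mm3 / (load_n * sliding_distance_m)

areas = [0.012, 0.014, 0.013, 0.011]            # four profilometer readings (mm^2)
V = wear_volume(sum(areas) / len(areas), 31.4)  # hypothetical track length (mm)
print(round(specific_wear_rate(V, 10, 200), 6))  # 10 N load, 200 m distance
```

Lower k values would then correspond to the better-performing anodizing/lubricant combinations.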
Procedia PDF Downloads 151
490 Optimization of MAG Welding Process Parameters Using Taguchi Design Method on Dead Mild Steel
Authors: Tadele Tesfaw, Ajit Pal Singh, Abebaw Mekonnen Gezahegn
Abstract:
Welding is a basic manufacturing process for making components or assemblies. Recent welding economics research has focused on developing reliable machinery databases to ensure optimum production. Research on the welding of materials like steel is still critical and ongoing. Welding input parameters play a very significant role in determining the quality of a weld joint. The metal active gas (MAG) welding parameters are among the most important factors affecting the quality, productivity and cost of welding in many industrial operations. The aim of this study is to investigate the optimum process parameters for metal active gas welding of a 60x60x5 mm dead mild steel plate workpiece, using the Taguchi method to formulate the statistical experimental design, with a semi-automatic welding machine. An experimental study was conducted at Bishoftu Automotive Industry, Bishoftu, Ethiopia. This study presents the influence of four welding parameters (control factors), namely welding voltage (volt), welding current (ampere), wire speed (m/min), and gas (CO2) flow rate (lit./min), each at three levels, on the variability of the welding hardness. The objective function has been chosen in relation to the parameters of MAG welding, i.e., the welding hardness of the final products. Nine experimental runs based on an L9 orthogonal array of the Taguchi method were performed. The orthogonal array, signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to investigate the welding characteristics of the dead mild steel plate and to obtain the optimum levels for every input parameter at the 95% confidence level. The optimal parameter setting was found to be a welding voltage of 22 volts, a welding current of 125 ampere, a wire speed of 2.15 m/min and a gas flow rate of 19 l/min, using the Taguchi experimental design method within the constraints of the production process.
Finally, six confirmation welds were carried out to compare with the existing values; the agreement of the predicted values with the experimental values confirms the effectiveness of the method in the analysis of welding hardness (quality) in final products. It is found that the welding current has a major influence on the quality of the welded joints. The experimental result for the optimum setting gave a better weld hardness than the initial setting. This study is valuable to Ethiopian industries for welding plates of different materials and thicknesses.
Keywords: weld quality, metal active gas welding, dead mild steel plate, orthogonal array, analysis of variance, Taguchi method
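Since weld hardness is a larger-is-better response, the Taguchi S/N ratio used to rank factor levels in an L9 array is S/N = -10·log10((1/n)·Σ 1/yi²). A minimal sketch of this computation follows; the hardness replicates are illustrative values, not the study's measurements.

```python
import math

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio (dB) for a larger-is-better response."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

# Illustrative hardness replicates (HV) for one L9 run.
print(round(sn_larger_is_better([182.0, 185.0, 179.0]), 2))
```

For each of the nine runs, the S/N ratio is computed this way; averaging the ratios per factor level then identifies the optimum setting of voltage, current, wire speed and gas flow rate.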
Procedia PDF Downloads 481
489 A Systematic Review of Efficacy and Safety of Radiofrequency Ablation in Patients with Spinal Metastases
Authors: Pascale Brasseur, Binu Gurung, Nicholas Halfpenny, James Eaton
Abstract:
The development of minimally invasive treatments in recent years provides a potential alternative to invasive surgical interventions, which are of limited value to patients with spinal metastases due to short life expectancy. A systematic review was conducted to explore the efficacy and safety of radiofrequency ablation (RFA), a minimally invasive treatment, in patients with spinal metastases. EMBASE, Medline and CENTRAL were searched from database inception to March 2017 for randomised controlled trials (RCTs) and non-randomised studies. Conference proceedings for ASCO and ESMO published in 2015 and 2016 were also searched. Fourteen studies were included: three prospective interventional studies, four prospective case series and seven retrospective case series. No RCTs or studies comparing RFA with another treatment were identified. RFA was followed by cement augmentation in all patients in seven studies and in some patients (40-96%) in the remaining seven studies. Efficacy was assessed as pain relief in 13/14 studies using a numerical rating scale (NRS) or a visual analogue scale (VAS) at various time points. Ten of the 13 studies reported a significant decrease in pain post-RFA compared to baseline. NRS scores improved significantly at 1 week (5.9 to 3.5, p < 0.0001; 8 to 4.3, p < 0.02 and 8 to 3.9, p < 0.0001), and this improvement was maintained at 1 month post-RFA compared to baseline (5.9 to 2.6, p < 0.0001; 8 to 2.9, p < 0.0003; 8 to 2.9, p < 0.0001). Similarly, VAS scores decreased significantly at 1 week (7.5 to 2.7, p=0.00005; 7.51 to 1.73, p < 0.0001; 7.82 to 2.82, p < 0.001), and this pattern was maintained at 1 month post-RFA compared to baseline (7.51 to 2.25, p < 0.0001; 7.82 to 3.3, p < 0.001). Significant pain relief was achieved regardless of whether patients had cement augmentation in the two studies assessing the impact of RFA with or without cement augmentation on VAS pain scores.
In these two studies, a significant decrease in pain scores was reported for patients receiving RFA alone and RFA + cement at 1 week (4.3 to 1.7, p=0.0004 and 6.6 to 1.7, p=0.003, respectively) and at 15-36 months (7.9 to 4, p=0.008 and 7.6 to 3.5, p=0.005, respectively) after therapy. Few minor complications were reported; these included neural damage, radicular pain, vertebroplasty leakage and lower limb pain/numbness. In conclusion, the efficacy and safety findings for RFA were consistently positive between prospective and retrospective studies, with reductions in pain and few procedural complications. However, the lack of control groups in the identified studies indicates the possibility of the selection bias inherent in single-arm studies. Controlled trials exploring the efficacy and safety of RFA in patients with spinal metastases are warranted to provide robust evidence. The identified studies provide an initial foundation for such future trials.
Keywords: pain relief, radiofrequency ablation, spinal metastases, systematic review
Procedia PDF Downloads 173
488 Design of Experiment for Optimizing Immunoassay Microarray Printing
Authors: Alex J. Summers, Jasmine P. Devadhasan, Douglas Montgomery, Brittany Fischer, Jian Gu, Frederic Zenhausern
Abstract:
Immunoassays have been utilized for several applications, including the detection of pathogens. Our laboratory is developing a tier 1 biothreat panel utilizing Vertical Flow Assay (VFA) technology for the simultaneous detection of pathogens and toxins. One method of manufacturing VFA membranes is non-contact piezoelectric dispensing, which provides advantages such as low-volume and rapid dispensing without compromising the structural integrity of the antibody or substrate. Challenges of this process include premature discontinuation of dispensing and misaligned spotting. Preliminary data revealed that the Yp 11C7 mAb (11C7) reagent exhibits a large angle of failure during printing, which may have contributed to variable printing outputs. A Design of Experiment (DOE) was executed using this reagent to investigate the effects of hydrostatic pressure and reagent concentration on microarray printing outputs. A Nano-plotter 2.1 (GeSIM, Germany) was used for printing antibody reagents onto nitrocellulose membrane sheets in a clean-room environment. A spotting plan was executed using the Spot-Front-End software to dispense volumes of 11C7 reagent (20-50 droplets; 1.5-5 mg/mL) in a 6-test-spot array at 50 target membrane locations. Hydrostatic pressure was controlled by raising the Pressure Compensation Vessel (PCV) above, or lowering it below, our current working level. It was hypothesized that raising or lowering the PCV 6 inches would be sufficient to cause either liquid accumulation at the tip or discontinuation of droplet formation. After aspirating the 11C7 reagent, we tested this hypothesis under a stroboscope. 75% of the effective raised PCV height and of our hypothesized lowered PCV height were used. Humidity (55%) was maintained using an Airwin BO-CT1 humidifier. The number and quality of membranes were assessed after staining the printed membranes with dye. The droplet angle of failure was recorded before and after printing to determine a “stroboscope score” for each run.
The DOE data set was analyzed using JMP software. Hydrostatic pressure and reagent concentration had a significant effect on membrane output. As the hydrostatic pressure was increased by raising the PCV 3.75 inches, or decreased by lowering the PCV 4.5 inches, the membrane output decreased. However, with the hydrostatic pressure closest to equilibrium (our current working level), the membrane output reached the 50-membrane target. As the reagent concentration increased from 1.5 to 5 mg/mL, the membrane output also increased. Reagent concentration likely affected membrane output through the dispensing volume needed to saturate the membranes. However, only hydrostatic pressure had a significant effect on the stroboscope score, which could be due to discontinuation of dispensing, in which case the stroboscope check could not find a droplet to record. Our JMP predictive model had a high degree of agreement with our observed results. The JMP model predicted that dispensing the highest concentration of 11C7 at our current PCV working level would yield the highest number of quality membranes, which correlated with our results. Acknowledgements: This work was supported by the Chemical Biological Technologies Directorate (Contract # HDTRA1-16-C-0026) and the Advanced Technology International (Contract # MCDC-18-04-09-002) from the Department of Defense Chemical and Biological Defense program through the Defense Threat Reduction Agency (DTRA).
Keywords: immunoassay, microarray, design of experiment, piezoelectric dispensing
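The screening logic above can be illustrated with a plain main-effects computation of the kind a DOE package such as JMP performs for a factorial design. The response table below is hypothetical, chosen only to mirror the reported trends (membrane output peaks at the working PCV level and rises with reagent concentration); it is not the experiment's data.

```python
# Hypothetical membrane-output responses for a 3x2 factorial:
# (PCV offset in inches, reagent concentration in mg/mL) -> good membranes.
data = {(-4.5, 1.5): 20, (-4.5, 5.0): 31,
        (0.0, 1.5): 42,  (0.0, 5.0): 50,
        (3.75, 1.5): 24, (3.75, 5.0): 33}

def main_effect(factor_index):
    """Average response at each level of one factor (main-effects plot data)."""
    levels = {}
    for key, y in data.items():
        levels.setdefault(key[factor_index], []).append(y)
    return {lvl: sum(ys) / len(ys) for lvl, ys in sorted(levels.items())}

print(main_effect(0))  # PCV offset: output highest at the working level (0)
print(main_effect(1))  # concentration: output rises with concentration
```

Comparing the spread of the level averages for each factor is the informal analogue of the significance tests reported above.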
Procedia PDF Downloads 182
487 Development of a Novel Clinical Screening Tool, Using the BSGE Pain Questionnaire, Clinical Examination and Ultrasound to Predict the Severity of Endometriosis Prior to Laparoscopic Surgery
Authors: Marlin Mubarak
Abstract:
Background: Endometriosis is a complex, disabling disease mainly affecting young females of reproductive age. The aim of this project is to generate a diagnostic model to predict the severity and stage of endometriosis prior to laparoscopic surgery. This will help to improve the pre-operative diagnostic accuracy of stage 3 & 4 endometriosis and, as a result, to refer relevant women to a specialist centre for complex laparoscopic surgery. The model is based on the British Society of Gynaecological Endoscopy (BSGE) pain questionnaire, clinical examination and ultrasound scan. Design: This is a prospective, observational study in which women completed the BSGE pain questionnaire, a BSGE requirement. In addition, as part of the routine preoperative assessment, patients had a routine ultrasound scan, and when recto-vaginal or deep infiltrating endometriosis was suspected, an MRI was performed. Setting: Luton & Dunstable University Hospital. Patients: Symptomatic women (n = 56) scheduled for laparoscopy due to pelvic pain. The age ranged between 17 and 52 years (mean 33.8 years, SD 8.7 years). Interventions: None outside the recognised and established endometriosis centre protocol set up by the BSGE. Main Outcome Measure(s): Sensitivity and specificity of the diagnosis of endometriosis predicted by symptoms based on the BSGE pain questionnaire, clinical examinations and imaging. Findings: The prevalence of diagnosed endometriosis was calculated to be 76.8% and the prevalence of advanced stage 55.4%. Deep infiltrating endometriosis (DIE) in various locations was diagnosed in 32/56 women (57.1%), and some had DIE involving several locations. Logistic regression analysis was performed on 36 clinical variables to create a simple clinical prediction model. After creating the scoring system using variables with P < 0.05, the model was applied to the whole dataset. The sensitivity was 83.87% and the specificity 96%.
The positive likelihood ratio was 20.97 and the negative likelihood ratio was 0.17, indicating that the model has good predictive value and could be useful in predicting advanced-stage endometriosis. Conclusions: This is a hypothesis-generating project with one operator, and proposed future research would provide validation of the model and establish its usefulness in the general setting. Predictive tools based on such a model could help organise the appropriate investigations in clinical practice, reduce the risks associated with surgery and improve outcomes. It could be of value for future research to standardise the assessment of women presenting with pelvic pain. The model needs further testing in a general setting to assess whether the initial results are reproducible.
Keywords: deep endometriosis, endometriosis, minimally invasive, MRI, ultrasound
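The reported likelihood ratios follow directly from the sensitivity and specificity via LR+ = sens/(1 - spec) and LR- = (1 - sens)/spec, as the short check below shows:

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    return sens / (1 - spec), (1 - sens) / spec

# Reported figures: sensitivity 83.87%, specificity 96%.
lr_pos, lr_neg = likelihood_ratios(0.8387, 0.96)
print(round(lr_pos, 2), round(lr_neg, 2))
```

Plugging in the study's 83.87% sensitivity and 96% specificity reproduces the reported 20.97 and 0.17.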
Procedia PDF Downloads 353
486 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement
Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes
Abstract:
Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology, designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI’s AWR 1843 BOOST radar, using a coarse classification approach that distinguishes between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering step, we propose a 2-level clustering approach that builds on the state-of-the-art density-based spatial clustering of applications with noise (DBSCAN) algorithm. The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The first level identifies groups of objects with similar velocities and movement patterns. The second level refines the analysis by considering the spatial distribution of these objects: the clusters obtained from the first level serve as input for the second level of clustering. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering quality metrics, including homogeneity, completeness, and V-score. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our own dataset of radar point clouds, collected synchronized with a camera on an Nvidia Jetson Nano board.
The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and to enhance their safety on the road.
Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology
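The 2-level scheme can be sketched with scikit-learn's DBSCAN: a first pass on radial velocity alone, then a spatial pass inside each velocity cluster. The synthetic point cloud below stands in for real radar detections, and the eps/min_samples values are illustrative choices, not the tuned parameters from this work.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Synthetic radar detections: columns = (x, y, radial velocity).
car = np.column_stack([rng.normal(10, 0.5, 30), rng.normal(2, 0.5, 30),
                       rng.normal(8.0, 0.2, 30)])
bike = np.column_stack([rng.normal(-5, 0.3, 20), rng.normal(1, 0.3, 20),
                        rng.normal(3.0, 0.2, 20)])
points = np.vstack([car, bike])

# Level 1: cluster on velocity only.
v_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points[:, 2:3])

# Level 2: refine each velocity cluster by spatial position.
final = np.full(len(points), -1)
next_id = 0
for v in set(v_labels) - {-1}:
    idx = np.where(v_labels == v)[0]
    s_labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(points[idx, :2])
    for s in set(s_labels) - {-1}:
        final[idx[s_labels == s]] = next_id
        next_id += 1

print(next_id)  # number of refined clusters
```

Cluster features (e.g., extent, point count, mean velocity) would then be extracted from each refined cluster and passed to the SVM classifier.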
Procedia PDF Downloads 80
485 Antimicrobial Activity of 2-Nitro-1-Propanol and Lauric Acid against Gram-Positive Bacteria
Authors: Robin Anderson, Elizabeth Latham, David Nisbet
Abstract:
The propagation and dissemination of antimicrobial-resistant and pathogenic microbes from spoiled silages and composts represent a serious public health threat to humans and animals. In the present study, the antimicrobial activity of the short-chain nitro-compound 2-nitro-1-propanol (9 mM), as well as of the medium-chain fatty acid lauric acid and its glycerol monoester monolaurin (at 25 and 17 µmol/mL, respectively), was investigated against selected pathogenic and multi-drug-resistant Gram-positive bacteria common to spoiled silages and composts. In an initial study, we found that the growth rates of a multi-resistant Enterococcus faecalis (expressing resistance against erythromycin, quinupristin/dalfopristin and tetracycline) and of Staphylococcus aureus strain 12600 (expressing resistance against erythromycin, linezolid, penicillin, quinupristin/dalfopristin and vancomycin) were slowed by more than 78% (P < 0.05) by 2-nitro-1-propanol treatment during culture (n = 3/treatment) in anaerobically prepared ½ strength Brain Heart Infusion broth at 37°C when compared to untreated controls (0.332 ± 0.04 and 0.108 ± 0.03 h-1, respectively). The growth rate of 2-nitro-1-propanol-treated Listeria monocytogenes was likewise decreased, by 96% (P < 0.05), when compared to untreated controls cultured similarly (0.171 ± 0.01 h-1). Maximum optical densities measured at 600 nm were lower (P < 0.05) in the 2-nitro-1-propanol-treated cultures (0.053 ± 0.01, 0.205 ± 0.02 and 0.041 ± 0.01) than in the untreated controls (0.483 ± 0.02, 0.523 ± 0.01 and 0.427 ± 0.01) for E. faecalis, S. aureus and L. monocytogenes, respectively. When tested against mixed microbial populations during anaerobic 24 h incubation of spoiled silage, significant effects of treatment with 1 mg 2-nitro-1-propanol/g (approximately 9.5 µmol/g) or 5 mg lauric acid/g (approximately 25 µmol/g) on populations of wildtype Enterococcus and Listeria were not observed.
Mixed populations treated with 5 mg monolaurin/g (approximately 17 µmol/g) had lower (P < 0.05) viable cell counts of wildtype enterococci than untreated controls after 6 h of incubation (2.87 ± 1.03 versus 5.20 ± 0.25 log10 colony-forming units/g, respectively), but otherwise significant effects of monolaurin were not observed. These results reveal the differential susceptibility of multi-drug-resistant enterococci and staphylococci, as well as of L. monocytogenes, to the inhibitory activity of 2-nitro-1-propanol and of the medium-chain fatty acid lauric acid and its glycerol monoester monolaurin. Ultimately, these results may lead to improved treatment technologies to preserve the microbiological safety of silages and composts.
Keywords: 2-nitro-1-propanol, lauric acid, monolaurin, gram positive bacteria
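As a note on the arithmetic behind the "more than 78% slower" statement: specific growth rates during log phase follow from optical densities, and the percent reduction compares the treated rate against the control rate. The treated rate and the OD values below are assumed illustration values; only the control rates (0.332 and 0.108 h-1) come from the study.

```python
import math

def growth_rate(od_start, od_end, hours):
    """Specific growth rate (h^-1) from log-phase optical densities at 600 nm."""
    return math.log(od_end / od_start) / hours

def percent_slower(control_rate, treated_rate):
    """Percent reduction of the treated growth rate relative to the control."""
    return 100 * (control_rate - treated_rate) / control_rate

print(round(growth_rate(0.05, 0.4, 6.0), 3))  # example rate from assumed ODs
print(round(percent_slower(0.332, 0.07), 1))  # an assumed treated rate vs. control
```

Any treated rate below about 0.073 h-1 for the E. faecalis control of 0.332 h-1 corresponds to the reported >78% slowdown.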
Procedia PDF Downloads 109